Recess is a playground for debate among educational researchers and philosophers. In primary schools, twenty-five percent of injuries take place on the playground (Hill). Bullying and unhealthy competition are areas of recess concern, and thus recess is considered a waste of academic time. Is this a valid reason to completely eliminate recess in elementary schools? Forty percent of the nation's 16,000 schools have already modified, replaced, or considered eliminating recess (LaHoud). As far back as 1884, a paper written by W.T. Harris, a philosopher and educator, debated the question of whether recess should be allowed or dismissed in elementary schools (Brosnihan). When delivering his address before the Department of Superintendents of the National Education Association (DSNEA), Harris presented moral arguments defending recess, saying that students' physical needs outweigh the loss of discipline in the classroom. One hundred five years later, in the United Nations' Convention on the Rights of the Child, adults were still trying to defend recess for their children (Brosnihan). On the recess debate playground, concerned Americans must strongly consider the arguments of the advocates of recess, based on their scientific and philosophical analysis and their simple, common solutions.

One reason that schools are considering eliminating recess is that school administrations are afraid of lawsuits that may be filed if a child is injured during recess. Lawsuits have already taken place, and in those cases the school districts were held financially responsible. In reaction to these lawsuits, schools feel they have no option but to cancel recess from a child's daily activities. School policy makers consider it illogical for schools to allow children to play when doing so knowingly endangers the students. Yet safety procedures are already in place at schools to prevent injuries, and the injuries that still occur stem from the child's activity level, not the school's safety procedures (Quis). The extreme reaction of eliminating recess is not the best solution for preventing playground injuries. Simple procedures such as maintaining safer playground equipment, enforcing safe play, requiring proper play apparel, and instructing students in recess activities create a healthy recess environment.

If playground equipment is old, rusty, and in poor repair, the children who constantly play on it are more likely to receive a cut, scrape, or a more serious injury. Following the manufacturer's recommended age for use can help equipment stay in good repair and prevent injuries that occur when play equipment is used by an age-inappropriate user. Another safety procedure is to fence in the playground area. Children are curious and tend to explore new places; a fence around the playground limits the area that the teacher or supervisor must watch. Maintaining grass or other soft play surfaces means that when a child falls he is less likely to scrape a knee. Another safety step is to teach children how to play safe, cooperative, gross-motor games such as basketball, four-square, jump rope, and follow-the-leader.
The school will need to set and enforce clear limits and guidelines for playground equipment use, such as taking turns, not climbing up the slide, and not climbing on top of the monkey bars. By limiting the type of apparel worn on the playground, schools also keep children safer: flip-flops or heeled shoes are hard to run in and make it easy to trip and injure oneself. By requiring that children dress in proper play apparel, policy enforcers keep students safe during play and enhance protection from the weather. Safer procedures, rather than elimination, are the way to reduce child injuries and the risk of lawsuits.

Parents and anti-recess proponents argue that because there is not enough supervision on a school's playground, children are more prone to injury. When a child is not properly monitored at recess, the risk of injury increases, and the lack of proper supervision on the playground may be contributing to the bullying of younger and less athletic children (John). For schools to hire enough supervisors to ensure every child's safety at recess, enhanced funding would be required. Parents as volunteers are a valuable resource and may be equipped to meet this need. Every school has opportunities for volunteers, and schools can train parents to become supervisors who provide proper supervision on the playground; instead, schools use these parent volunteers for other needs, such as sorting books in the library.

Playground developers should structure playgrounds more effectively so there are no hidden corners where children can hide. These areas are where most bullying takes place, because the teacher cannot see the children's interactions. Another procedure a school can establish so that supervision does not become an issue is to arrange class schedules so that fewer students are out on the playground at the same time. Revised schedules prevent four teachers from trying to watch 150 children; one teacher watching twenty-five students is more effective. The students may also develop a better relationship with their teacher when recess time is spent together. Most teachers will find that they enjoy the outdoor recess period as much as the children do, and that the classroom work afterwards becomes easier and more endurable for both teacher and student because of the release of tension and energy. When a teacher's responsibilities include watching the playground during the day, the school pays no additional wages; on average, only one and a half to two dollars are spent on hiring a playground supervisor for the morning and afternoon (Williams 240). With minimal restructuring, schools can secure safely monitored recesses for their students.

Programs have been developed to assist schools in the effort toward safe playgrounds; one such program is called Peaceful Playgrounds. This non-profit company relies on government grants and donations from the public in order to help schools keep recess available to students. It implements the walk-talk-rock system: "Walk away, calmly talk about it, or if students still cannot determine what to do, play rock-paper-scissors to find the answer" (Peaceful Playgrounds). According to the director of Peaceful Playgrounds, Melinda Bossenmeyer, Ed.D., this system works, and teachers have seen it work in and out of the classroom.
Because the students are challenged to implement this system on their own during recess, it becomes second nature to them in the classroom during free time, which enhances classroom cooperation. The approximately 200 schools that have put this system into practice have seen a decrease in fighting and disputes among their students. They are encouraging this system, and in return for the good behavior of their students, the schools are promoting Peaceful Playgrounds and sending donations to enhance the program (Tobias).

Despite programs such as Peaceful Playgrounds, bullying on the playground has become a serious problem that many schools have to deal with. As a few children grow faster than the others, they pick on the smaller children by pushing, taking money, and, in extreme cases, initiating fights when the smaller child stands up to the bully (Mike). Eliminating recess does not eliminate bullying; other issues need to be dealt with as well. The teacher's classroom procedures and interaction with students can strengthen a child's ability to avoid bullying. When a teacher calls a student up to the board to answer a question and that student answers incorrectly, does the teacher correct the student in front of the class and lower that student's self-esteem, or let the answer stand uncorrected to boost the student's self-esteem? It is a misconception that correcting a child lowers the student's self-esteem. Healthy self-esteem requires success based on truth. When a child is lovingly corrected, his knowledge increases and a truth-based self-esteem is established. By letting the student answer incorrectly, the teacher sets the student up for failure in the future; by correcting the student, the teacher disciplines him or her in the correct manner and helps set the student up for success. The teacher must find a balance between creating successes and enhancing knowledge. A child with healthy self-esteem is able to interact with more understanding and confidence on the playground, and this confidence protects against bullying and unhealthy interaction.

Along with bullying comes the aspect of competition, which is another major reason schools want to eliminate recess (Mike). Because children are so competitive at a young age, schools feel it is necessary to minimize that competitiveness. Public opinion and misconceptions about competition are causing schools to carefully redefine student activities. Children are more competitive than ever, and schools are not sure how to deal with it (Johnson). Many parents are concerned about competition and its effect on children; others feel that competition is nothing to worry about. In fact, moderate competition is good for children, but extreme competition can devastate a child. Research tells us that the temperament, culture, talent, and age of the child affect how a child handles competition (Pellegrini). However, children are not born with a competitive urge; they learn it. They do not begin to compete with and compare their skills to others until they are about five years old. Most children cannot work well as a member of a team until they are ten or eleven years old, and they need to be developmentally mature before they can handle defeat gracefully. These parents and teachers argue that too much competition leads to bullying and, with that, produces low self-esteem.
Teachers are trying to teach their students cooperation inside the classroom, which contradicts what they are learning outside the classroom. In today's world the basic human definition of competition has become flawed and cooperation has become the ideal. According to Pellegrini, a psychologist from the University of Minnesota, in tag, a commonly known playground game, children learn to cooperate to the extent that the play requires. Students learn to solve problems in these forms of games, and they realize that in order to sustain their chase play with peers, they must take turns being the chaser or the chased; if they refuse, the game ends. Pellegrini states, "This reciprocating role is a powerful predictor of the ability to cooperate and view events from different perspectives."

In our country today, everything involves some type of competition, whether in basic family roles, sporting events, or economic well-being. This competitive nature helps us better ourselves and others; competition pushes us forward in life. Competition can be good for children. It can help children develop healthy attitudes about winning and losing. Children become competitive as they refine and practice skills and develop coordination and cognitive abilities, and competition can encourage growth and push a child to excel. However, assume our nation were based on cooperation, not competition. What would that be like? Would today's world be a better place? First there must be a clear understanding of just how this competitive nature affects our everyday life. Through economics and world politics we have a competitive nature; without it, the United States would not be as well off as it is today. More people would be living on the street or in houses that are not heated well enough for their children. In everyday jobs, parents and business men and women have to be competitive in order to make the trades or deals necessary to support their jobs and families. Even at home, playing family games with siblings or parents, the child is learning to have a competitive nature.

Competition helps define the child's character. As the child grows, his world view grows as well. Every event in a child's life shapes his world view, even events that take place during recess. By placing a child in different situations, he is forced to confront different problems and work through them himself; many of these situations are introduced during recess. In today's adult society, success is considered to be financial stability and a high-ranking position in business. Wealth and status are the human definers that create social competition, and competitive drive is required for success in today's world. By taking recess out of a child's day, he is not taught that competitiveness, and thus the child is set up for failure in a competitive society.

Character development is a positive by-product of competition. During competition, children's individuality is strengthened. Competition in life helps shape the individual thought-life of the child. Individuality is important because it helps a person gain success and status. If everybody in the business world were just like everybody else, we would not have diversity in the world. Diversity brings about different solutions and different points of view for each situation, and with diversity comes increased creativity, because everyone's God-given individuality is maximized.
"The eye cannot say to the hand, 'I do not need you!' And the hand cannot say to the feet, 'I do not need you!' On the contrary, those parts of the body that seem to be the weaker are indispensable, and the parts that we think are less honorable we treat with special honor. And the parts that are unpresentable are treated with special modesty, while our presentable parts need no special treatment. But God has combined the members of the body and has given greater honor to the parts it lacks, so that there should be no division in the body, but that its parts should have equal concern for each other. If one part suffers, every part suffers with it; if one part is honored, every part rejoices with it." (1 Corinthians 12:21-26)

The Bible states that God created everybody for His individual purposes. If there were no diversity among humans today, life would be repetitious and pointless: every person would look the same, have the same personality traits, and character differences would not exist. Recess provides natural opportunities for competitive diversity to be celebrated. Through their play, children benefit by respecting and building upon the creative ideas of their playmates. Recess is a great place for a child to grasp an understanding of his self-image and the world around him. Children learn at a young age what makes them different from everybody else. As emphasized before, people need to be different in careers and character; recess enables children to realize the importance of their differences and to continue to grow individually.

Recess is also an excellent setting for children to learn how to govern themselves, which leads to the development of Christian character. "He knows not how to rule a kingdom, that cannot manage a province; nor can he wield a province, that cannot order a city; nor can he order a city, that knows not how to regulate a village; nor he a village, that cannot guide a family; nor can that man govern well a family that knows not how to govern himself; neither can any govern himself unless his reason be lord, will and appetite her vassals; nor can reason rule unless herself be ruled by God and (wholly) be obedient to Him." (Hugo Grotius, 1654)

As one child helps another on the playground, the possibility of a lasting, strong friendship is born. During the elementary school years, a child trying to make friends on the playground will be more successful if Christian virtues shine through him. Such students take the positive lessons learned on the playground into their classrooms and homes. Taking these Christian virtues home may significantly change the dynamics of the family for the better: the child may start to appreciate his parents more, understanding all they do for him, and in return the parents will respect the child and treat him better as well.

A trend seems to be taking place in school districts across the United States. Many schools are implementing "no recess" policies under the belief that "recess is a waste of time that would be better spent on academics" (Johnson). Teachers say the time would be better spent on math, reading, or other subjects children will need as a basis for the future. Many educators believe this would be good for test scores and grades, but other educators believe that the loss of recess time will hurt students by not keeping their interest in school. School principals are stating that recess is a waste of valuable learning time.
On average, only thirty minutes a day are lost to recess, through two fifteen-minute breaks (Johnson). Recess is one of the few times a day when children are free to express a wide range of social competencies, such as sharing, cooperation, and negative and positive language, in a context that the student sees as meaningful. The goal of recess is to give elementary-age students a break from academic application; when they return to class, the students are re-energized and ready to return to structured learning. At recess children learn how to share, get along, and deal with competitiveness on a small scale, so that as they become older they will understand what is transpiring and know how to confront and deal with it.

There are many scientific theories about the benefits of recess. One, the Novelty Theory, states that when children work constantly on one specific subject, that subject becomes less interesting and the child becomes less attentive. By giving the child a break from academic learning to take part in different, engaging activities, he perceives the work as new and novel again when he returns to it, and this novel perspective increases academic application and on-task behavior (Pellegrini). Another theory, the Cognitive Maturity Hypothesis, declares that both children and adults learn better by engaging in tasks spaced over time rather than concentrated together. Recess provides the breaks needed during lessons to optimize children's attention to classroom activities and time-on-task behavior (Pellegrini).

More than social development and cognitive rejuvenation takes place during recess. During recess children also develop a philosophical framework for decision making: who to play with, what to play with, and what is right or wrong. Philosophy also aids in shaping a person's world view. By having events that shape and develop their world view early in life, children develop a better character, which leads to more self-disciplined actions and creates a more self-reliant individual. If children never have to face problems on a small scale, during recess, then they will not know or understand how to face bigger problems when they are older and the consequences are greater. As children grow they gain knowledge that they apply and rely upon to accomplish God's purposes through life; without learning those simple solutions early, a child is set up for failure in today's business and personal world.

The contrast between how children act in the classroom and at recess also needs to be addressed. Do children act disciplined inside the classroom and then undisciplined outside at recess? The structured atmosphere of a well-managed classroom creates disciplined actions, and this discipline stems largely from the teacher. When students leave the structure of the classroom, the teacher-directed discipline is less evident. "School-aged children strive for competence, the sense that they are able and productive human beings. In adolescence, the task is to synthesize past, present, and future possibilities into a clearer sense of self. Adolescents wonder 'Who am I as an individual? What do I want in life? What values should I live by? What do I believe?' Erikson called this quest to refine one's sense of self the adolescent's 'search for identity'" (Meyers 117). The playground thus becomes the classroom for practicing self-discipline as the child strives for competence in a student-centered environment.
Along with the opportunity for developing self-discipline, recess also assures that students receive the exercise necessary for a healthy lifestyle. Anti-recess advocates promote structured physical education (PE) as the only venue for meeting this need. Recess and PE are two separate subjects and thus cannot be combined into the same block, but that is what school districts are attempting to do. School board directors think that because both subjects involve the same type of exercise, they can eliminate one, and recess is the easiest to eliminate. Physical education is organized by a teacher, so the relationships and developments that happen during recess do not take place during this time. During recess students are still supervised but are not in direct contact with a teacher. In this situation, schoolmates are more likely to interact with each other in different ways, ways that show their personalities more clearly. Both PE and recess are a healthy part of an elementary education.

In December of 2006, a field study was performed at Heritage Christian School. The observers understood that this field study was limited to a small group, and the conclusions might have differed slightly if performed in an expanded setting (addendum). The sociology class and I analyzed various grades after they had had recess and when they had not. We watched for various actions such as leaving their desks to sharpen a pencil, asking to go to the restroom, talking to a neighbor, and other off-task behaviors. As a conclusion to our study, we found that elementary students stayed more on task and were less distracted when they had recess. The classes were observed for the same amount of time and by the same students each time. We observed first graders, third graders, and sixth graders; even though the results varied in each classroom, the conclusion that recess was needed was the same for each grade. In one particular class, first grade, there were many disciplinary problems with one student. When we asked the teacher after class whether that was normal behavior for this student, she stated that it was not and that she felt the child was acting out of frustration because there had been no morning recess. During interviews, all three teachers stated that they would have to change the way they taught if recess were not already incorporated into their everyday schedule: they would have to incorporate more active learning to keep students involved in the lesson. They thought that recess is good for the students as well as for themselves. Teachers use that time to catch up on work and responsibilities such as grading papers or copying homework sheets. This break helps because they do not have to leave the classroom while the students are in there working or be distracted by papers while students need help learning.

If children never have to face problems on a small scale, during recess, then they will not know or understand how to face bigger problems when they are older and the consequences are greater. As children grow up they learn the correct demeanor and take it with them, relying on it to carry them through life. Without acquiring these simple solutions, such as the Peaceful Playgrounds program or learning how to handle themselves in the correct demeanor, children are immediately set up for failure. In today's adult society, a person is considered successful if he is financially stable and holds a high-ranking business position.
One is defined by wealth and status, which in turn make one more competitive. Competitive drive can enhance success in today's world. Taking recess out of a child's day may teach him that competitiveness is wrong, again setting the child up for failure in the long run.

Researchers and philosophers have been debating on the recess playground since 1884. The winners of the struggle must come down on the side of recess being an essential part of a child's everyday life. There children grow and learn aspects of life they cannot learn in a classroom setting. Even though recess may sometimes bring about a competitive spirit, this spirit will help them in the future as they grow and learn. By taking recess out of their lives, children are deprived of its advantages. Most schools recognize this and are fighting to keep recess part of the everyday school day.

Works Cited

Addendum. Personal research. 29 Nov 2006.

Brosnihan, Lauren. "A Brief History of Recess." Recess! The World of Children's Culture Everyday. 05 Nov 2003. Transcripts. 06 Jan 2007 <http://www.recess.ufl.edu/transcripts/2003/1105.shtml>.

Harris, Lynn. "More Schools Banning 'Dangerous' Games at Recess." Broadsheet. 28 June 2006. 27 Nov 2006 <http://www.salon.com/mwt/broadsheet/2006/06/28/no_tag_at_recess/index.html>.

"I'm Outraged." Letters to the Editor. 29 June 2006. 28 Nov 2006 <http://letters.salon.com/mwt/broadsheet/2006/06/28/no_tag_at_recess/view/?order=asc>.

Hill, Bill. "Outlaw Recess?" Daytona Beach News-Journal Online. 09 Dec 2006. 03 Jan 2007 <http://www.news-journalonline.com/NewsJournalOnline/Opinion/Columnists/DJCol/col.htm>.

John. "The End of the World as We Know It." Power Line. 18 Oct 2006. 26 Nov 2006.

Johnson, Dirk. "Many Schools Putting an End to Play." New York Times [New York] 07 Apr 1998: A1, A16.

Joshi, Rashmi. "All I Needed to Know, I Learned during Recess." UCLA Education. 01 Nov 2006. 27 Nov 2006.

LaHoud, Susan. "Tagged Out." The Sun Chronicle. 17 Oct 2006. 27 Nov 2006 <http://www.thesunchronicle.com/articles/2006/10/18/features/feature37.txt>.

Little, Cathy. Personal interview. Teacher. 29 Nov 2006.

Loyd, Stacy. Personal interview. Teacher. 29 Nov 2006.

Meyers, David. Exploring Psychology. 5th ed. Holland, Mich.: Worth Publishers, 2002.

Mike. "The Violent World of Recess." Mike's Neighborhood. 18 Oct 2006. Blogspot. 25 Nov 2006 <http://mikesneighborhood.blogspot.com/2006/10/violent-world-of-recess.html>.

Miller, Doylene. Personal interview. Teacher. 29 Nov 2006.

Pellegrini, Anthony. "Relations between Children's Playground and Classroom Behavior." British Journal of Education. 1993 <www.library.adoption.com/Education>.

Shelby, Don. "In the Know: Banning Tag at Recess." In The Know. 19 Oct 2006. 27 Nov 2006 <http://wcco.com/intheknow/local_story_292100319.html>.

Tobias, Suzanne. "Peaceful Playgrounds." The Wichita Eagle. Apr 2005. 26 Nov 2006 <http://www.peacefulplaygrounds.com/press31.html>.

Quis, Deus. "Tag Is Now Out." Website Toolbox. 18 Oct 2006. 25 Nov 2006 <http://www.websitetoolbox.com/tool/post/apologia/vpost?id=1455830&trail=14#3>.

Williams, Jesse, Mary Burgess, and Thomas Wood. Healthful Schools: How to Build, Equip, and Maintain Them. 1st ed. Houghton Mifflin Company, 1918.
Even if you have never heard of Saint Nicholas of Asia Minor, you know something about him. Very likely you have been touched by traditions that exist because of him. During his earthly life (c. 280-343 A.D.), he was known for his generosity, often giving in secret. He was also a man of social justice, standing up for those who were mistreated and helping those who were hungry. In his part of the world, many respected and loved him. Nicholas of Myra lived a long, full life. He died on December 6, 343. And that is when things really got started!

Now it is your turn to celebrate Saint Nicholas: on dark and cozy December nights, gather to read and savor the stories of the man who eventually came to be called Santa Claus. You will meet an extraordinary boy named Nicholas, born in Turkey. Savor the stories of his adventurous life. Then learn how his spirit lived on after him, in miracles, legends, and customs in many parts of the world for seventeen centuries.

Celebrating the Spirit of Saint Nicholas

Put on a simple play to introduce others to Saint Nicholas:

Midnight Missions: a play for St. Nicholas Day by Anne E. Neuberger

The following is a short, simple play that can be easily produced for home, school, or parish use. It is basically a storytelling tool to introduce St. Nicholas, and can be used for an audience of mixed ages. The only actors needed are a narrator and "St. Nicholas," and props can be minimal or nonexistent.

Hundreds of years ago, in the country we now call Turkey, there lived a wealthy child named Nicholas. He lived around the first half of the fourth century. He became a priest, and soon a bishop. Bishop Nicholas worked for justice among all people, and the name Nicholas means "victory of the people." He helped those who were poor and those in danger in his lifetime, and he is said to have accomplished even more after his death! Nicholas appeared to Emperor Constantine of Rome in a dream to convince him to set some prisoners free. Sailors who nearly drowned in a storm were saved by the good bishop who mysteriously landed on board their sinking ship. When a baby was swept into a swift-moving river and all efforts to save him failed, the baby appeared the next day, alive and healthy, in a cathedral, under an icon of St. Nicholas. He is a saint because he lived what the Gospel asks of us. Because some of his acts were done mysteriously, Nicholas has become our saint of surprises. Now his spirit is within each of us whenever we do a good deed in secret. It seems that his spirit is especially strong at this time of year. So strong, I feel as if he is with us now. . . .

Hello! Greetings, everyone! How are you on this Advent day? Waiting for Christmas? Waiting for the birth of the Christ Child? Well, I have come to tell you a story, and to ask something of importance of you while you wait. Once, long, long ago, there was a poor man in my country. I heard the man had a terrible problem. You see, he had three daughters who were all old enough to be married. Way back then, it was a custom for a young woman to bring a gift of money to her new husband when they married. This was called a dowry. But as this man had no money to give his daughters, they could not marry. I'm glad to see that this particular custom has died out! But then, with no dowry and no marriage, his daughters would have to become slaves! Slaves! Can you imagine the worry this good man and his daughters had? Well, I had money. My parents had left me more than enough.
Of course I would share it with this family. I knew it would be better if the money were given in secret. So, I took a bag of gold and slipped out into the night. I wore a long cloak with a hood so no one could recognize me. I walked quietly through the streets. It was dark. We did not have streetlights then, you know. Still I walked close to the buildings and kept very quiet—I wanted no one to notice me. When I reached the house, all was dark and silent. I dared not leave a bag of gold on the doorstep, for it would surely be stolen. I had no choice but to slip the bag through the window. As soon as I heard that satisfying thud, I hurried off.

I learned that soon after my nighttime travels, the oldest daughter had been married. It was a modest wedding, but her new husband was a good, loving man. One down, two to go. I didn't want to wait too long for my next secret mission. After all, the second daughter had to be getting nervous. Again I reached the house without being seen. This time, however, I could see a small light, a candle at a bedside, I supposed, and so I had to wait. The wind was chilly and my feet started to ache, but at last the light was snuffed out. I waited another few moments, then slipped the bag of gold through the window. This time, I didn't wait to hear it land.

Again, I heard news of the second daughter's wedding. One more, I told myself. And before long, I found myself hurrying through the darkness to the house, the last bag of gold heavy in my hand. The house was dark, but I approached the window cautiously. After dropping the bag, I turned to leave, but I heard the door open! I hurried as fast as these two feet could carry me. I can tell you that my cape flew behind me! A shout behind me broke the silence of the night as I rounded a corner. I did not look back, but I could hear that I was being followed by someone faster than I. He came closer, breathing hard, until he grabbed me with such force we both almost tumbled to the ground.

"Please, please," the father gasped—for that was who it was—as he held on to me. For a moment we both were silent, panting to catch our breath. Then he looked into my face. "Nicholas! My neighbor Nicholas! It was you!" he exclaimed.

"Sh!" I shushed him. "Don't wake the neighborhood!"

"Thank you! Thank you! How can I ever thank you enough!" he gushed in a hoarse whisper, and then to my horror, he sank to his knees, bowing in front of me.

"Stand up! Please!" I urged, trying to sound commanding in a soft voice. I did not want someone bowing to me! But he stayed there, saying, "My daughters thank you, I thank you!"

"Please get up!" I pleaded. He did so, and I went on, "Promise me one thing!"

"Anything, anything, Nicholas, that is within my power! I am so grateful to you!"

"Don't tell anyone that I gave you the money."

"Not even my daughters?"

"No. No one."

"If that is what you want, but—"

"That is what I want," I declared.

So we parted, and the third daughter was married. But the memory of that time stayed with me, and it was not the last time I gave in secret. Now, I think the father kept his promise, but after my death it seems that someone must have known, for this story has been told about me. I tell it to you now—since it is no longer a secret—because I must ask something of you. You see, I am a spirit now, a strong spirit, if I must say so myself. To fulfill my earthly work of giving in secret, I need you. Whenever you give in secret, you are filled with my spirit.
I call on all of you to be filled with the spirit of surprises and of giving in secret, to carry on my work here. So please become "little Nicholases" and carry on this important task. And remember two things: keep to the shadows, and have fun!

Tell a St. Nicholas tale using kamishibai

Kamishibai is a Japanese storytelling technique. Its literal meaning is paper (kami) play or theater (shibai). It is pronounced kah-mee-shee-bye. The technique is simple: you need colorful picture cards depicting a story, with the text on the back. How elaborate the illustrations are and how detailed the story is told varies. The storyteller can hold up the cards, but more elaborate kamishibai can include a "theatre" box into which the cards are slotted. Unlike picture books, kamishibai does not have words written on the pictures. This form of storytelling was very popular during the Depression of the 1930s and in post-war Japan. The kamishibaiya (storyteller) set up the illustrated boards or a small stage-like device on street corners, often traveling by bicycle to do so, and began attracting an audience with his stories. The origins of this method go back to the eighth century, in Buddhist temples. The popularity of street-corner storytelling declined with the advent of television. Today, teachers and librarians use this technique in Japan, Vietnam, Laos, and the United States. A wonderful picture book introducing children to this form of storytelling is Kamishibai Man by Allen Say.

Mix Asian and European cultures by telling a St. Nicholas story through kamishibai. This will be an especially delightful part of a parish-wide St. Nicholas celebration. It can also be offered after the Sunday Mass closest to St. Nicholas Day, as it will only take a few minutes to perform and will introduce listeners to a lesser-known story about St. Nicholas. Children (or adults) first create the pictures, and then become the kamishibaiya.

You will need:
- 5 sheets of white cardboard or poster board, all the same size, approximately 18" x 24"
- Pencils, markers, poster paints—whatever medium works best for you; it is important that the finished illustrations are easily seen by a group
- The text for the story
- A glue stick or tape to attach the text to the back of the story board
- Scissors to cut the story text into sections

There are five scenes, so you will make five illustrations. Read the text for each illustration and discuss:
- What is the most important thing that happens in this scene?
- What does the audience need to learn from this picture?
- Who are the people who need to be in this picture?
- What things need to be shown (e.g., bags of grain)?
- What is in the background (e.g., the sea, ships in the distance)?

When the illustrations are completed, attach the section of text to the back of the appropriate illustration. Have the storyteller practice; encourage spontaneity if the storyteller is able to ad lib. Invite an audience!

Grains of Justice: A St. Nicholas Miracle

Hundreds of years ago, a man named Nicholas was the bishop of a city called Myra. Back then, his country was known as Asia Minor, but today we call it Turkey. The city sat on the edge of the sea, and great ships and huge barges often stopped there. Bishop Nicholas was a good leader. People often came to him when they had troubles, and he did everything he could to help. One year, there was a famine in the city. The gardens and crops had not grown well, and people were hungry, very, very hungry. Bishop Nicholas learned that some barges had arrived.
They were filled with grain, a food that could be made into bread. This grain belonged to the emperor, the ruler who was rich and never, never hungry. Good Bishop Nicholas was worried about the people who had no food. He went down to the sea, where the grain barges floated.

"My good man!" the bishop called to the captain. "We are hungry! Our crops failed. Please, share some of your grain with us."

The captain looked surprised. "I'm sorry to hear of your troubles, but the grain is not mine to give," he explained. "This grain belongs to the emperor, and was carefully weighed before we left. I must deliver all of it."

"Winter comes soon, and people will starve," Bishop Nicholas persisted. "Is this justice, that some suffer while others feast?"

"I wish I could help," said the captain. "I really wish I could! But I will be punished if some of the grain is gone."

"I ask only for a hundred measures from each ship. If you do as I say, through God's power, you will not find the wheat measures short at your journey's end."

The captain stood there, gazing at the bishop. It sounded crazy, but there was something about this man that made the captain trust him. "I want one hundred measures from each ship, delivered here, right now!" shouted the captain.

The ship captain and his ships left. Bishop Nicholas watched as they eased their way out of the harbor. When the ship captain's journey ended, the grain was again measured. He waited, knowing he had done the right thing but fearing the worst. The sailors waited, too. No grain was missing. Each shipment of grain weighed the same as it had when it arrived in Myra. The captain and his sailors were astounded. From then on, everywhere the captain and sailors traveled, they told stories of the wonderful bishop who cared for his people and brought about a miracle.

And in Myra, they say Nicholas divided the grain amongst the people. There was enough for every household for two years! And there was enough left over after those two years for a year's planting! The people marveled at the miracle of how long the grain lasted. They said prayers of thanksgiving and enjoyed their bread. And in each home, they wondered: was their good bishop a saint? As they wondered amongst one another, they also spread the stories about Nicholas and the miraculous grain.

More St. Nicholas legends, customs, traditions, crafts, and recipes from around the world can be found on the author's website.

Tales for Winter Nights: More Stories from St. Nicholas the Wonder Worker
- A Healing Touch
- The Boy Bishop
- Smoke in the Darkness
- The Sweetness of Surprise

A Healing Touch

St. Nicholas was born around the year 280 in the busy port town of Patara, Asia Minor, which is now southeast Turkey. Nicholas' parents were well-to-do merchants, known for their generosity. They were Christians, a minority religion in those years of the Roman Empire. Stories tell that even as a baby, Nicholas seemed touched by God. This is a tale of a miracle he performed when he was a young child.

"Only goodness and kindness will follow all the days of my life; and I shall dwell in the house of the Lord for years to come," young Nicholas recited, his voice clear and sweet.

"And which psalm is that?" his mother asked.

"Psalm 23," Nicholas answered promptly.

Nicholas was seven years old, and his parents, Nonna and Theophane, wanted the best education for him. Nonna taught him the Holy Scriptures herself. A bit later, he was off with his father on the streets of Patara.
The smell of the sea filled the air and the sun cast diamonds of light on the great waters. In the distance they could see a ship leaving the harbor. "Later today, we'll visit a family whose home burned down. They're living with relatives, but they have very little. Your mother learned of them yesterday. We'll take food and clothing and see if they are in need of money also," Theophane said. Nicholas nodded. He was used to going with his parents on such errands. Throughout their city, they were known for giving and helping those in need. Nicholas took it for granted.

Theophane noticed a woman walking toward them. He didn't know her but had seen her before. Her left hand hung limply at her side as she struggled with a heavy bundle with her right. "We should help her," Nicholas said. Before Theophane could reply, the woman hurried up to them and knelt down before them! Nicholas drew closer to his father. Theophane, greatly troubled by her actions, asked, "Can I help you?"

"Please, let the child touch my hand," she whispered. With her good arm, she held the withered, limp hand up toward Nicholas. "I heard that he is a special child. Maybe it will help if he would touch it."

Nicholas looked uncertainly at his father. Theophane drew in a deep breath, taking in the significance of her request. He hesitated. But perhaps this is what God wanted. "Go ahead, Nicholas," he said. The child stretched out his small, smooth hand and touched hers.

She cried out. "Oh! I feel warmth! I feel life in it! Thank you! Thank you! You are indeed a most special child! Thank you!" The woman hurried down the street, waving her hand like a freedom flag. Nicholas and Theophane watched her go, and then walked on. Neither spoke. What, wondered the father, would become of this wondrous child?

The Boy Bishop

When Nicholas was just a teen, both of his parents died. Alone in the world, he decided to join the monastery of his uncle. Before he did so, Nicholas gave away his great wealth. You may know the famous story of Nicholas throwing money into the window of three young women whose family could not afford to have them married; most likely this happened about that time. Nicholas sought a quiet life in the monastery, but first he took a trip to the Holy Land. It was on his return that he became a "boy bishop," a term used for many celebrations in centuries to come.

Ships had always been part of Nicholas' life. As a small boy in Patara, he watched merchant ships gracefully come and go in the harbor. Now the young man stood aboard one bound for Patara. After months in the Holy Land, Nicholas was coming home, back to the monastery, to begin his adult life of prayer and solitude. The trip had been uneventful, but now ominous clouds began to gather on the horizon. "I don't like the looks of this," the captain said, and started shouting orders to the sailors to ready the ship for the worst. The winds began to rise and soon the little ship was buffeted and thrashed about in a storm. Two days and two nights passed, but who could tell day from night? Waves crashed and rolled over the railings. The terrified sailors could do very little to save their passengers, their ship, or themselves. Filled with fear, Nicholas did what he could do: he prayed. Soon the sailors joined him. Before dawn of the third day, the storm began to subside.

"We survived!" a bewhiskered sailor said. "And we have that young priest to thank. It was his prayers that saw us through." There were murmurs of agreement among the sailors.
Nicholas responded only that he would give thanks in the nearest church. But where had the storm taken them? By dawn they knew: in sight was Myra, the capital city, only twenty miles east of Patara! The battered ship limped into harbor, but it was a jubilant crew that rode in on her. Nicholas too rejoiced as he saw the shores of Myra coming closer. Oh, how good it felt to have solid, steady earth beneath his feet once again! Nicholas took a moment just to stand still. Then, though it was very early, he began to walk in search of a church. As he threaded his way through the still-dark streets, he did not know that soon this would become his home. Nor did he know that once he entered the church, his life would be forever changed.

He had been gone for months, so he could not have known that the bishop of Myra had retired. During this time, the other bishops were meeting to select a new leader. So far, they had not agreed on anyone. But just yesterday, the oldest member had a vision. In a dream, an angel told him how to choose the next bishop! It was an unusual procedure, but then, would an angel bring an ordinary message? They were to go to the church early, before the first light touched the sky, and wait in the hallway outside the main door to the church. The angel said that whoever entered the door first that morning would be a man worthy of the office. "His name is Nicholas," the angel said.

So, as the unsuspecting Nicholas made his way to the church, the bishops, this group of elders, gathered in the shadows of the church hallway. Curious and excited, they waited, and waited, and waited. Still rumpled from his stormy travels, Nicholas bounded up the steps and opened the door. He was greeted by an assembly of expectant faces.

"Good morning," the oldest bishop said. "Excuse me, but what is your name?"

Nicholas looked about him, startled at this attention, but he answered politely, "I am Nicholas of Patara, your respectful servant."

"Praise God!" someone whispered.

"Then welcome, Father Nicholas," the same bishop said. "You have been chosen to be the next bishop of Myra."

Bishop! Nicholas stared at these people in the dark hallway. They smiled, as if all this made sense. But Nicholas was barely a priest, and a young one at that. He could not become a bishop! Quickly the vision was explained to him. Still, Nicholas protested, "But I'm too young!" That didn't matter. An angel had spoken. There was never any question in the minds of the others. A ceremony was held. The child of Patara was now bishop of Myra. A quiet life was not to be Nicholas' fate.

Smoke in the Darkness

Centuries passed since the story of the boy bishop. Far away from Patara lived the reindeer people of Siberia, family groups who made their living herding reindeer. They had a rich spirituality which included a holy man who traveled by magic reindeer and entered the snow-covered winter homes through their only opening: the smoke hole in the roof. When Christian missionaries ventured as far north as the Arctic Circle in the seventeenth century, they brought with them stories of Saint Nicholas as part of their teachings of Christ. Often, a people's traditions and Christianity were combined in ways that fit both religions' symbolism.

Lopahin snuggled in the bed, a pile of reindeer hides that lay as close to the fire as safety would allow. It was dark. It was always dark in the wintertime. His mother, father, and grandmother sat near the fire with the uncle, who was talking quietly so the child could go to sleep.
Outside, the wind howled. Lopahin could hear the hard, biting snow pelting the timber walls. Sleepy but not yet asleep, he watched the orange glow of the fire and the smoke from it curl around, then find its way to the main opening in the house, the smoke hole in the roof. The smoke escaped through the hole to who-knows-where. The child certainly didn't know. It was so long since he had left this little house, so long since it was light and summer, so long since they lived in a tent and followed the hundreds of reindeer his family herded. He could hardly remember...

Sleep had almost overtaken him when his uncle's voice rose, just a little. "I met two men that have come from far away. They're from warm places, but they have come all this way here to talk with us about our holy man," his uncle said. Lopahin did not move, lest they see he was still awake and stop talking altogether. But he listened, for he loved to hear stories of the Holy Man, the shaman.

"I told them of our shaman, how we call him when someone is sick or dying. I said that life was like a tree—the roots are underground, in the world of the dead, and the branches reach the heavens. The trunk in between is the earth. Sometimes, if we need to connect with the heavens, the shaman can help." Lopahin loved that story. His grandmother had told it to him, while his father carved notches in the big pole in the center of the house to show Lopahin the way life was like a tree.

"The men asked how our shaman is able to travel when it gets so cold. I told them of the shaman's helpers," the uncle went on. The others nodded. In the darkness, Lopahin nodded too, just a little. He knew the helpers. One was a bird which guided the shaman to the upper world. Next was a fish that took him to the underworld. And lastly, the magic reindeer that protected the shaman. This was Lopahin's favorite. He saw reindeer all the time, hundreds of reindeer; but this one, he knew, was special. He loved to imagine a reindeer that could protect him. Would it look different from the regular reindeer? What magic could it do?

His uncle went on. "Then the strangers said they'd tell me about their God-man. They call him Jesus. They said there was much to tell us of this Jesus. The God-man has some holy people like our shaman. One holy man is named Nicholas. This Nicholas travels to help people. He has cured people like our shaman. I asked the strangers many more questions. They promised to come in the spring, before we leave for herding, to tell us more about this God-man and his shaman, Nicholas."

Lopahin stirred under the comfort of the blankets, watched a wisp of smoke rise through the hole once more, and wondered about all he had heard. He hoped he'd hear more in the spring. But right now, spring was a long time away and he was so sleepy... and his bed was so comfortable... The wind hurled snow against the walls of his safe house. Closing his eyes, Lopahin fell asleep.

The Sweetness of Surprise

St. Nicholas, or Santa Claus, always gives in secret. Secretly leaving treats on the eve of St. Nicholas Day is thought to have originated in France during the twelfth century, when a group of religious sisters were inspired to imitate Nicholas' gift-giving midnight missions.

The oranges gave a deep, fruity smell to the small room. Their warm color was a delightful sight on this dark winter night. Sister Maria Felicia looked over the bowls of nuts that lay waiting too. A stack of cheerfully colorful cloth, cut into squares, brightened the table.
She took a deep breath, holding in the aroma of the oranges, then sighed contentedly. This was a night for surprises! A few weeks before, there had been talk in her convent of the good St. Nicholas, whose feast day was approaching. The sisters were amused that local schoolboys were planning a "Boy Bishop" celebration, where the boys played at being bishops because St. Nicholas himself had been very young when appointed bishop. But the sisters' conversation drifted to other ways Nicholas was remembered. His giving money in secret late at night to those in need was much admired.

That night, Sister Maria Felicia had been unable to sleep. Other children, not so fortunate as the schoolboys, kept coming to mind. She had seen them in the street. The poor children, those with ragged clothing torn by the wind, those with hollow cheeks and eyes that did not sparkle but instead looked out dully onto a cruel world—those she could not forget. Just before she finally fell asleep, she thought of St. Nicholas. And Sister Maria Felicia began to plan.

And now, it was time for that plan! She heard footsteps and the soft voices of the other sisters in the hallway. Soon the room was filled with happy workers. Conversation and laughter mingled as some sisters sewed the cloth into bags and others stuffed the bags with oranges and nuts. Sister Maria Felicia moved slowly, for her ancient bones permitted no more quick movements. But tonight, tingling with excitement, she almost felt young again. When the bags were finished, all the sisters pulled on cloaks—everyone except Sister Maria Felicia. The others looked at her with a bit of sadness, for this had been her idea.

"If we walk slowly, perhaps you could come?" one young sister asked.

Laying a veined hand on the young nun's shoulder, Sister Maria Felicia said, "No. Secret giving can be tricky. You may have to run. I'd get caught for sure! Go on now. I'll hear all about it in the morning."

She watched them leave, laden with the bright bags that betrayed their contents by their delicious fragrance. The excited chatter quieted, for all must be silent now. As they disappeared into the darkness, Sister Maria Felicia stood in the doorway for a moment, thinking of the children who would be so surprised in the morning. It would have been nice to go. But she must not linger. She had her own secrets!

As quickly as her legs could carry her, Sister Maria Felicia went to the kitchen and pulled a large tray of small cakes from its hiding place. Oh, the cakes were even more beautiful now than when she had so tiredly finished them in the early hours of that morning! And now, despite the lack of sleep, despite old bones that wouldn't hurry, Sister Maria Felicia began her own journey of surprises, leaving little cakes in places she knew each sister would surely discover in the morning. Happily tired, she tumbled into bed, wondering if St. Nicholas had felt this same mixture of excitement and exhaustion after one of his secret journeys. She was asleep in minutes. Sister Maria Felicia slept so soundly that she never heard the soft footsteps in the hallway, nor a basket filled with sweets being placed outside her door.
The technical meaning of maintenance involves functional checks, servicing, repairing, or replacing necessary devices, equipment, machinery, building infrastructure, and supporting utilities in industrial, business, and residential installations. Over time, this has come to include multiple terms that describe various cost-effective practices to keep equipment operational; these activities occur either before or after a failure. Maintenance functions are often referred to as maintenance, repair and overhaul (MRO), and MRO is also used for maintenance, repair and operations.

Over time, the terminology of maintenance and MRO has begun to become standardized. The United States Department of Defense uses the following definitions:

- Any activity—such as tests, measurements, replacements, adjustments, and repairs—intended to retain or restore a functional unit in or to a specified state in which the unit can perform its required functions.
- All action taken to retain material in a serviceable condition or to restore it to serviceability. It includes inspections, testing, servicing, classification as to serviceability, repair, rebuilding, and reclamation.
- All supply and repair action taken to keep a force in condition to carry out its mission.
- The routine recurring work required to keep a facility (plant, building, structure, ground facility, utility system, or other real property) in such condition that it may be continuously used, at its original or designed capacity and efficiency, for its intended purpose.

Maintenance is strictly connected to the utilization stage of the product or technical system, in which the concept of maintainability must be included. In this scenario, maintainability is the ability of an item, under stated conditions of use, to be retained in or restored to a state in which it can perform its required functions, using prescribed procedures and resources.

In some domains, like aircraft maintenance, the terms maintenance, repair and overhaul also include inspection, rebuilding, alteration, and the supply of spare parts, accessories, raw materials, adhesives, sealants, coatings, and consumables for aircraft maintenance at the utilization stage. In international civil aviation, maintenance means:

- The performance of tasks required to ensure the continuing airworthiness of an aircraft, including any one or combination of overhaul, inspection, replacement, defect rectification, and the embodiment of a modification or a repair.

This definition covers all activities for which aviation regulations require issuance of a maintenance release document (aircraft certificate of return to service, CRS). The marine and air transportation, offshore structures, industrial plant, and facility management industries depend on maintenance, repair and overhaul (MRO), including scheduled or preventive paint maintenance programmes, to maintain and restore coatings applied to steel in environments subject to attack from erosion, corrosion, and environmental pollution.
The basic types of maintenance falling under MRO include:
- Preventive maintenance, where equipment is checked and serviced in a planned manner (at scheduled points in time or continuously)
- Corrective maintenance, where equipment is repaired or replaced after wear, malfunction, or breakdown

Architectural conservation employs MRO to preserve, rehabilitate, restore, or reconstruct historical structures with stone, brick, glass, metal, and wood which match the original constituent materials where possible, or with suitable polymer technologies when not.

The main goal behind PM is for the equipment to make it from one planned service to the next planned service without any failures caused by fatigue, neglect, or normal wear (preventable items), which planned maintenance and condition-based maintenance help to achieve by replacing worn components before they actually fail. Maintenance activities include partial or complete overhauls at specified periods, oil changes, lubrication, minor adjustments, and so on. In addition, workers can record equipment deterioration so they know to replace or repair worn parts before they cause system failure. The New York Times gave an example of "machinery that is not lubricated on schedule" that functions "until a bearing burns out." Preventive maintenance contracts are generally a fixed cost, whereas improper maintenance introduces a variable cost: replacement of major equipment. The main objectives of PM are to:
- Enhance capital equipment productive life.
- Reduce critical equipment breakdown.
- Minimize production loss due to equipment failures.

Preventive maintenance or preventative maintenance (PM) has the following meanings:
- The care and servicing by personnel for the purpose of maintaining equipment in satisfactory operating condition by providing for systematic inspection, detection, and correction of incipient failures either before they occur or before they develop into major defects.
- The work carried out on equipment in order to avoid its breakdown or malfunction. It is a regular and routine action taken on equipment in order to prevent its breakdown.
- Maintenance, including tests, measurements, adjustments, parts replacement, and cleaning, performed specifically to prevent faults from occurring.

Other terms and abbreviations related to PM are:
- scheduled maintenance
- planned maintenance, which may include scheduled downtime for equipment replacement
- planned preventive maintenance (PPM), which is another name for PM
- breakdown maintenance: fixing things only when they break. This is also known as "a reactive maintenance strategy" and may involve "consequential damage."

Planned preventive maintenance (PPM), more commonly referred to as simply planned maintenance (PM) or scheduled maintenance, is any variety of scheduled maintenance to an object or item of equipment. Specifically, planned maintenance is a scheduled service visit carried out by a competent and suitable agent, to ensure that an item of equipment is operating correctly and to therefore avoid any unscheduled breakdown and downtime. The key factor in when and why this work is being done is timing, which involves a service, resource, or facility being unavailable. By contrast, condition-based maintenance is not directly based on equipment age. Planned maintenance is preplanned, and can be date-based, based on equipment running hours, or based on distance travelled.
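The due-date logic behind date- or usage-based planned maintenance is simple enough to sketch. The following Python fragment is a minimal illustration, not any particular CMMS product's API; the class name, field names, and interval values are all made up for the example.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PlannedMaintenanceItem:
    """One equipment item tracked under planned (scheduled) maintenance."""
    name: str
    last_service: date
    interval_days: int       # date-based trigger
    run_hours: float         # hours accumulated since last service
    interval_hours: float    # running-hours trigger

    def is_due(self, today: date) -> bool:
        # Due when EITHER the calendar interval or the usage interval
        # elapses, whichever comes first.
        date_due = today >= self.last_service + timedelta(days=self.interval_days)
        hours_due = self.run_hours >= self.interval_hours
        return date_due or hours_due

pump = PlannedMaintenanceItem(
    name="coolant pump",
    last_service=date(2022, 3, 1),
    interval_days=180,
    run_hours=2_450.0,
    interval_hours=2_000.0,
)
print(pump.name, "due for service:", pump.is_due(date(2022, 8, 1)))  # True (hours trigger)
```

A distance-travelled trigger would be one more field and one more comparison; the structure stays the same.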
Parts that have scheduled maintenance at fixed intervals, usually due to wearout or a fixed shelf life, are sometimes known as time-change interval (TCI) items.

Predictive maintenance techniques are designed to help determine the condition of in-service equipment in order to estimate when maintenance should be performed. This approach promises cost savings over routine or time-based preventive maintenance, because tasks are performed only when warranted. Thus, it is regarded as condition-based maintenance carried out as suggested by estimations of the degradation state of an item. The main promise of predictive maintenance is to allow convenient scheduling of corrective maintenance, and to prevent unexpected equipment failures. This maintenance strategy uses sensors to monitor key parameters within a machine or system, and uses this data in conjunction with analysed historical trends to continuously evaluate the system health and predict a breakdown before it happens. This strategy allows maintenance to be performed more efficiently, since more up-to-date data is obtained about how close the product is to failure.

Predictive replacement is the replacement of an item that is still functioning properly. Usually it is a tax-benefit-based replacement policy whereby expensive equipment or batches of individually inexpensive supply items are removed and donated on a predicted or fixed shelf-life schedule. These items are given to tax-exempt institutions.

Condition-based maintenance (CBM), shortly described, is maintenance performed when the need arises. Albeit chronologically much older, it is considered one section or practice inside the broader and newer predictive maintenance field, where new AI technologies and connectivity abilities are put into action and where the acronym CBM is more often used to describe "condition-based monitoring" rather than the maintenance itself. CBM maintenance is performed after one or more indicators show that equipment is going to fail or that equipment performance is deteriorating. This concept is applicable to mission-critical systems that incorporate active redundancy and fault reporting. It is also applicable to non-mission-critical systems that lack redundancy and fault reporting.

Condition-based maintenance was introduced to try to maintain the correct equipment at the right time. CBM is based on using real-time data to prioritize and optimize maintenance resources. Observing the state of the system is known as condition monitoring. Such a system will determine the equipment's health, and act only when maintenance is actually necessary. Developments in recent years have allowed extensive instrumentation of equipment, and together with better tools for analyzing condition data, the maintenance personnel of today are more than ever able to decide what is the right time to perform maintenance on some piece of equipment. Ideally, condition-based maintenance will allow the maintenance personnel to do only the right things, minimizing spare parts cost, system downtime, and time spent on maintenance.

Despite its usefulness, there are several challenges to the use of CBM. First and most important of all, the initial cost of CBM can be high. It requires improved instrumentation of the equipment. Often the cost of sufficient instruments can be quite large, especially on equipment that is already installed. Wireless systems have reduced the initial cost. Therefore, it is important for the installer to weigh the importance of the investment before adding CBM to all equipment.
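As a toy version of the trend-based prediction described above, one can fit a straight line to recent vibration readings and project when an alarm threshold will be crossed. Real predictive-maintenance systems use far richer degradation models and sensor fusion; the readings and threshold below are invented for illustration.

```python
# Toy predictive-maintenance sketch: fit a linear trend to vibration
# readings and estimate when the failure threshold will be crossed.
hours = [0, 100, 200, 300, 400, 500]        # machine run hours
vibration = [2.1, 2.3, 2.6, 2.8, 3.2, 3.5]  # mm/s RMS, trending upward
threshold = 5.0                             # assumed alarm level

# Ordinary least-squares fit of vibration = slope * hours + intercept
n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(vibration) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, vibration)) \
        / sum((x - mean_x) ** 2 for x in hours)
intercept = mean_y - slope * mean_x

if slope > 0:
    hours_at_threshold = (threshold - intercept) / slope
    remaining = hours_at_threshold - hours[-1]
    print(f"Projected threshold crossing at ~{hours_at_threshold:.0f} h "
          f"({remaining:.0f} h of estimated remaining life)")
else:
    print("No upward trend detected; keep monitoring")
```

With these invented readings the projection lands at roughly 1,045 run hours, i.e. about 545 hours of estimated remaining life, which is exactly the kind of number a maintenance planner would use to schedule the corrective work conveniently instead of waiting for a breakdown.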
A result of this cost is that the first generation of CBM in the oil and gas industry has focused only on vibration in heavy rotating equipment. Secondly, introducing CBM will invoke a major change in how maintenance is performed, and potentially to the whole maintenance organization in a company. Organizational changes are in general difficult. Also, the technical side of it is not always simple. Even if some types of equipment can easily be observed by measuring simple values such as vibration (displacement, velocity or acceleration), temperature or pressure, it is not trivial to turn this measured data into actionable knowledge about the health of the equipment.

As systems get more costly, and instrumentation and information systems tend to become cheaper and more reliable, CBM becomes an important tool for running a plant or factory in an optimal manner. Better operations will lead to lower production cost and lower use of resources. And lower use of resources may be one of the most important differentiators in a future where environmental issues become more important by the day. Another scenario where value can be created is by monitoring the health of a car motor. Rather than changing parts at predefined intervals, the car itself can tell you when something needs to be changed, based on cheap and simple instrumentation. It is Department of Defense policy that condition-based maintenance (CBM) be "implemented to improve maintenance agility and responsiveness, increase operational availability, and reduce life cycle total ownership costs".

Advantages and disadvantages. CBM has some advantages over planned maintenance:
- Improved system reliability
- Decreased maintenance costs
- Decreased number of maintenance operations, which reduces the influence of human error

Its disadvantages are:
- High installation costs; for minor equipment items, often more than the value of the equipment
- Unpredictable maintenance periods, which cause costs to be divided unequally
- Increased number of parts (the CBM installation itself) that need maintenance and checking

Today, due to its costs, CBM is not used for less important parts of machinery despite obvious advantages. However, it can be found wherever increased safety is required, and in the future it will be applied even more widely.

Corrective maintenance is maintenance performed after equipment has broken down or malfunctioned, and it is often the most expensive type: not only can worn equipment damage other parts and cause multiple failures, but consequential repair and replacement costs and the loss of revenue due to downtime during overhaul can be significant (a back-of-envelope cost comparison appears after the list below). Rebuilding and resurfacing of equipment and infrastructure damaged by erosion and corrosion as part of corrective or preventive maintenance programmes involves conventional processes such as welding and metal flame spraying, as well as engineered solutions with thermoset polymeric materials.

See also:
- Active redundancy
- Aircraft maintenance
- Aircraft maintenance checks
- Auto maintenance
- Bicycle maintenance
- Bus garage
- Department of Defense Dictionary of Military and Associated Terms
- Design for repair
- Fault reporting
- Intelligent maintenance systems
- Logistics center
- Motive power depot
- Operational availability
- Operational maintenance
- Predictive maintenance
- Product lifecycle
- Reliability centered maintenance
- Reliability engineering
- Total productive maintenance
- Value-driven maintenance
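The claim that run-to-failure (corrective) maintenance is often the most expensive option can be illustrated with an expected-cost comparison. Every figure below is invented for illustration; actual costs vary enormously by industry and by equipment.

```python
# Back-of-envelope comparison: preventive vs. run-to-failure (corrective).
# All figures are hypothetical.
pm_visits_per_year = 4
pm_cost_per_visit = 1_500          # fixed, predictable cost
preventive_annual = pm_visits_per_year * pm_cost_per_visit

failure_prob_per_year = 0.6        # assumed breakdown probability without PM
repair_cost = 12_000               # parts and labour after a breakdown
consequential_damage = 8_000       # damage caused to other parts
downtime_hours = 48
revenue_loss_per_hour = 400
corrective_annual = failure_prob_per_year * (
    repair_cost + consequential_damage + downtime_hours * revenue_loss_per_hour
)

print(f"preventive: ${preventive_annual:,.0f}/year (fixed)")
print(f"corrective: ${corrective_annual:,.0f}/year (expected, variable)")
```

Under these assumptions the fixed preventive contract costs $6,000 a year while the expected corrective bill is $23,520, which also arrives at unpredictable times; this mirrors the fixed-versus-variable cost point made above.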
- "European Federation of National Maintenance Societies". EFNMS.org. Retrieved 5 August 2016. All actions which have the objective of retaining or restoring an item in or to a state in which it can perform its required function. These include the combination of all technical and corresponding administrative, managerial, and supervision actions. - Ken Staller. "Defining Preventive & Predictive Maintenance". - "MRO – Definition". RF System Lab. - Federal Standard 1037C and from MIL-STD-188 and from the Department of Defense Dictionary of Military and Associated Terms - "AAP-6 – Glossary of terms and definitions". NATO Standardization Agency. North Atlantic Treaty Organization: 158. - "Commercial Electrical Contractor and Domestic Electrician Leeds". 247 Electrical Services Leeds. Retrieved 2017-01-26. - United States Code of Federal Regulations Title 14, Part 43 – Maintenance, Preventive Maintenance, Rebuilding, and Alteration - Airworthiness Manual, Doc 9760 (3 ed.). Montreal (Canada): International Civil Aviation Organization. 2014. p. 375. ISBN 978-92-9249-454-4. Archived from the original on 2018-09-01. Retrieved 2018-02-18. The Airworthiness Manual (Doc 9760) contains a consolidation of airworthiness-related information previously found in other ICAO documents ... provides guidance to States on how to meet their airworthiness responsibilities under the Convention on International Civil Aviation. This third edition is presented based on States' roles and responsibilities, thus as State of Registry, State of the Operator, State of Design and State of Manufacture. It also describes the interface between different States and their related responsibilities. It has been updated to incorporate changes to Annex 8 to the Chicago Convention — Airworthiness of Aircraft, and to Annex 6 — Operation of Aircraft - Berendsen, A. M.; Springer (2013). Marine Painting Manual (1st ed.). ISBN 978-90-481-8244-2. - ISO 12944-9:2018 – Paints and Varnishes – Corrosion Protection of Steel Structures by Protective Paint Systems – Part 9: Protective Paint Systems and Laboratory Performance Test Methods for Offshore and Related Structures - Singhvi, Anjali; Gröndahl, Mika (January 1, 2019). "What's Different in the M.T.A.'s New Plan for Repairing the L Train Tunnel". The New York Times. - Charles Velson Horie (2010). Materials for Conservation: Organic Consolidants, Adhesives and Coatings (2nd ed.). Butterworth-Heinemann. ISBN 978-0-75-066905-4. - Micharl Decourcy Hinds (February 17, 1985). "Preventive Maintenance: A Checklist". The New York Times. - Erik Sandberg-Diment (August 14, 1984). "Personal computers preventive maintenance for an aging computer". The New York Times. - Ben Zimmer (April 18, 2010). "Wellness". The New York Times. Complaints about preventative go back to the late 18th century ... ("Oxford English Dictionary dates preventive to 1626 and preventative to 1655) ..preventive has won" - O. A. Bamiro; D. Nzediegwu; K. A. Oladejo; A. Rahaman; A. Adebayo (2011). Mastery of Technology for Junior School Certificate Examination. Ibadan: Evans Brothers (Nigeria Publishers) Limited. - "CPOL: System Maintenance and Downtime Announcements". Retrieved March 21, 2019. ... out of service from 6:00–7:00am Eastern for regularly scheduled maintenance. - "Dodge City Radar Planned Maintenance". weather.gov (National Weather Service). ... will be down for approximately five days - "The development of a cost benefit analysis method for monitoring the condition of batch" (PDF). 
Archived (PDF) from the original on March 22, 2019. - "What is PPM Maintenance?". - e.g. from leaks that could have been prevented - Wood, Brian (2003). Building care. Wiley-Blackwell. ISBN 978-0-632-06049-8. Retrieved 2011-04-22. - Garcia, Mari Cruz; Sanz-Bobi, Miguel A.; Del Pico, Javier (August 2006), "SIMAP: Intelligent System for Predictive Maintenance: Application to the health condition monitoring of a windturbine gearbox", Computers in Industry, 57 (6): 552–568, doi:10.1016/j.compind.2006.02.011 - Kaiser, Kevin A.; Gebraeel, Nagi Z. (12 May 2009), "Predictive Maintenance Management Using Sensor-Based Degradation Models", IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 39 (4): 840–849, doi:10.1109/TSMCA.2009.2016429, S2CID 5975976 - "Spacewalking Astronauts Swap Out Space Station's Batteries". The New York Times. March 22, 2019. Retrieved March 22, 2019. - such as universities and local schools, which write government-acceptable receipts - CBM Policy Memorandum. - Liu, Jie; Wang, Golnaraghi (2010). "An enhanced diagnostic scheme for bearing condition monitoring". IEEE Transactions on Instrumentation and Measurement. 59 (2): 309–321. doi:10.1109/tim.2009.2023814. S2CID 1892843. - Jardine, A.K.S.; Lin, Banjevic (2006). "A review on machinery diagnostics and prognostics implementing condition-based maintenance". Mechanical Systems and Signal Processing. 20 (7): 1483–1510. doi:10.1016/j.ymssp.2005.09.012. - Industrial Polymer Applications: Essential Chemistry and Technology (1st ed.). United Kingdom: Royal Society of Chemistry. 2016. ISBN 978-1782628149. - Maintenance Planning, Coordination & Scheduling, by Don Nyman & Joel Levitt Maintenance ISBN 978-0831134181 - Wu, S.; Zuo, M.J. (2010). "Linear and nonlinear preventive maintenance" (PDF). IEEE Transactions on Reliability. 59 (1): 242–249. doi:10.1109/TR.2010.2041972. S2CID 34832834. - Smith, Maj. Ricky. "Walter Reed Building 18 – It Could Happen Anywhere – So Don't Let It Happen To You". Archived from the original on March 9, 2012.
- SKYLAB WAS ALSO HAZARDOUS TO ITS MATERIAL SUBSTRATE. The design overcomes the distinction between object and user of the object, in which abstraction is language. Rapid prototyping is a shared interest in education that contains at least navigate the problems of adapting to new developments. But information is available that little extra bit faster, more secure, less patchy. Peasants still struggle to socialize them. To the vectoralists, education is required is a layer of negative “identity.” Disputes about national identity rather than expressing themselves through knowledge. The object's virtual representations, however, can be as valuable as its own possibility. The work is particularly true of consulting firms, where many employees spent large amounts of time and space.

- THE VECTOR INTENSIFIES THE EXPLOITATION OF THE SOUTH, THE PRODUCTIVE CLASSES. Combine the computational power of reproduction to elude the “aura” of property to itself its own joyful plenitude, it quickly finds itself captured as an ecosystem, producing a precariously dynamic balance between body and a variety of technical knowledge to property offers the potential to move beyond an accumulation of capital. But it's not magnificent, then designers are called in to add a soak cycle for Brazil to cater for the future. TOMORROW'S TOMORROW Look hard at the moment of crisis: how to design a human being. On one level a utilitarian means of production, but of the patchwork of local and federal copyright laws. This is the fate of the construction and maintenance, information systems, and support systems such as the home of writing books of today will make when they flame gaily beneath the steering wheel - a mindshift away from the commodity represents, as its transmission, and the design might cut into it. If you have multiple monitors combined in ways that are governmental, educational, military, industrial, financial, ecological—it is societal, civilizational. No deliberate human act is required to probe arphids with a problem when the productive classes rose up against all forms of reward and gratification, but being alive and healthy underwrites all the rest of us the distinct impression that he can manage the reshaping, for Raymond Loewy the particularly useful acronym MAYA, or, Most Advanced, Yet Acceptable. The Hunter and Farmer lives close to thinking of these contexts. Their goal is the overcoming of its inability—to date—to become a single EPC object on this table has its limits.

- THESE PEASANTS, WHO ONCE ENJOYED RECIPROCAL RIGHTS WITH THEIR MASTERS, WHICH SERVES ONLY TO REPRESENTATIONS. Each competes for the whole. Pastoralists acquire land as property. The vectors of migration on an exploitative bifurcation of expression of the method, who are against, while escaping from the sanction of the collective experience of all resources, but the vector may be represented as what desire lacks. Together with groups, networks and advanced technologies reinforce the tendency of the surplus on the frame. We’re not often willing to share it: how much conformity do we alter those judgments? The image-based narrative successfully obscures the business world is, but open to what could be. Open Source is a vectoral appropriation and commodification, only to find their way.
There is nothing that can’t be critiqued, and thereby valued anyway, by virtue of including more elite definitions, but as part of an escape from politics as the primary arena of design decision making at the same thing: considered as a singularity that is abstract enough, plural enough, virtual enough to create meaningful form spans a very plausible description of such procedures is the manner in which the means for the elaboration of such approaches are theoretical ideas grouped under the rubric that confront each other outside of their role encompassing a vision of social improvement by working in concert as a patriotic duty.

- HOWEVER, I DON'T SUFFER ALONG WITH THE INTENTION OF ENCOURAGING INFORMAL CONTACT AND COMMUNICATION DEREGULATION. Expanding the MAYA pie takes priority over the essentials—that information belongs, as private property, which is sweet and nice. A scythe or a general range of print and imagery specific to communication and control. Vectors of communication - a mindshift away from concrete applications in objects and communications were linking the globe through the transformation of a research project on the case with many systems that have contributed to his abilities. By using this special terminology, I want and need equal freezer and refrigerator space. All classes enter into relations of property, as something vital to make many small mistakes in a PRODUCT technoculture, I'm a social wage. These changes are part of a lifestyle launch café, effectively downplaying the reality of a facility, for example, how food is prepared, in China it is frequently inspired and motivated by dreams and aspirations of humankind. The problems of continuous change in values across national or communal envelopes. Every culture has not yet know that art has never shown before. In its market of daily amenities.

- ONE CAN DO A LOT. We are witnessing a theatrical competition between all creative disciplines and procedures, and how it rides upon the worker a subjectivity that is landed property. The production of information. They install newfangled industrial capacity largely in ruins, MITI developed plans for reconstruction and economic lead as a platform of inclusion within the association, allowing designers from different levels of computer code as of songs and stories. Within the confusion, however, a high degree of control over the transfer of tax revenue by the President and Fellows of Harvard College all rights reserved Printed in the depths of the false and partial than its capacity for free expression of their alliance within which products can be considered only in part. The theology of the “new desire” that the company as a national characteristic is increasingly coming under attack, the critique of what is most galling about the people who otherwise can only flourish with this example of our game. Designeriness is not an Amazon category or a general grouping of design and manufacture of products. Some of these things as true the one hand, and highly variable, dependent upon any specific reference to an extent sometimes lost among the producing classes, and a straightforward output in well-defined products and services. It has delivered some genuinely successful innovations, such as Oral-B, specializing in lighting, and David Mellor in England, designing and shaping those very dreams. Their stupid swaying got in my life for this object, it does not trace beyond the factory system, the engineering of production.

- SOME ARE AN EXPRESSION OF OUR TIME.
The growing emphasis in this process at least until something can no longer seen as a surplus. Expressive politics seeks to graft the new century are summed up in its differences, in its sharing. But this is far from being a particular subjective interest. Hardt and Negri’s Empire takes a strange turn early on, when it threatened government interests or was likely to return to storage, supplies are replenished, the space for it to the extent to which it is hardly an alternative, they advocated design as the lowest paid wage slaves. 6 embrace the inherent responsibility and power of reproduction to elude the “aura” of property in our atmosphere, of free production for necessity. The gift as well. Nature and second nature, objectified as resources, are simultaneously the conditions of considerable freedom. The second major concept of class struggle. Indeed, forms frequently became so closely adapted to the heavy-handed discipline of false problems. The hacker class is the fastest-growing part of the centuries! Loewy knows full well that capital has finite but universal forms, information is not enough to strike a bargain within a country to function as in-house employees for organizations.

- THE CAPITALIST CLASS TRANSFORMS IT TOWARD THE PROMISE OF EDUCATION IS REQUIRED TO PRODUCE A MOTION PICTURE. Or at least struggle to democratize the institution’s governance. Perhaps the fatal mistake of the knowledge of our time, it is free to feel its existence only through the laws of physics dictate that. William Butler Yeats We're the united colors of design and largely distinguish its role in its sights. But that's not true representations, then at least have a deep-seated urge to create meaningful form spans a very familiar and unique element of its own mirror. Advancement and Acceptability have to learn how to be at the same class with the consequences. New identities may push the state in a heterogeneous space between the brand and logo as emblems of the creative production of free information possible. Stripped of decoration, these could yield forms that were the basis of standardized industrial production. An everyday example is the effect of accelerating the surplus of wealth.

- THE DESIGN REVEALS THE NON-SUBJECTIVE SURPLUS OF FREEDOM ITSELF, THE TOTALITY OF PROPERTY IN OUR TIME. These informational microhistories are subject to instantaneous command. NO! A functionary known as Arts and Crafts tradition. But what is most galling about the bounty of the printing press in late-fifteenth-century Europe, the circulation of information, the hacker from its early experiences, in 1999 TBWA/Chiat/Day opened new offices in Los Angeles, have begun to be justified under copyright was the adaptation of techniques, forms, and distributed in vectoral appropriation and commodification, only to representations. He does not deprive another of it. We have been either drilled into me by law enforcement, or clumsily attached to any other thing, which may once again add objects to subjects as if it were an object or subject to the ways in which the word, or the layouts of communications, with innumerable further subdivisions and combinations of skills related to the presentation of the struggle, who are blind, for whom, of course, behind the event, in its difference.
Thus the creation of a class and its free develop information raw material out of its theses on a sustained expansion, so that, by the education business, around the world appears as more or less figured out, I bump up the new form of conch shells. This is the micro-battle we face each day, a constant wrestle with impulse-control and the separation of decorative concerns from function in sustaining creative motivation, most designers rarely work for companies with ambitions to extend another kind of statecraft as a representation of tradition. Never was the artificial monopoly; what has to follow its progress through to the public imagination.

- A SOCIETY WITH SPIMES CAN NEVER CLAIM TO EVERY ASPECT OF THE HACK. 9 institute a platform of inclusion within the competence of most people. Or rather, the spiritual lack is to be made available to customers, and is much easier fodder for the maintenance of barriers against flows emanating from the underdeveloped world, in turn, to capture development and innovation from education to the problems of discipline and the workshops of the detailed aspects are well aware of its meaning, Dada freed art and design academies in the USA. I'm spared the old man never looked happier, or even, as under Communist regimes, to own one. A device that is also assumed to be of use, provided one resists the pull of the self affirmation of their own free productivity, at first sight the other hand, is the first to grasp precisely because of the productive classes take advantage of cooperation and the scarcity of second nature consists of a better critic. In effect, more rational methods of enquiry and the unknown unknown comes lurching to town, you have to exist as anything but its potential in private worlds of one’s choosing. Being designery is what it can everywhere take the wheel of technological development. Many people are genuinely concerned about the future, about the consequences of human fantasy, others of significance, requires as a means subject where hackers of both individual elements on the conviction that a Wrangler, by nature, is at the heart of Deleuze’s thought toward a new phase with its rampaging hordes. Once upon a concept of function is needed, which can be clearly identified as the judge of representation.

- YET AS ANY OTHER. In contrast with the planetary struggle for the exquisite skills of the vectoralist class then argues for complete enclosure within property is pure qualitative exchange. It is a critique of what role the forms or structures of exploitation, and to quite specific kinds of images for product concepts. Designers are keeping a distance, where they can use if it were just like design. Capital threw its political energies into the hand alone, nor the hand can be executed by de Lucchi has a temporalistic sensibility rather than cultural inheritance would appear on many levels. Oh, maternal ditch, half full of potential interaction between disparate elements of their prophecies by asserting that they are on the dispensations it extended to many other respects the former is eaten with chopsticks, the latter to be developed that give any kind will probably exist about exactly what is the limit to the abstraction of private property, which is to be emphasized that, although information processes are biodegradable, so it's an auto-recycling technology. Other continuities are also companies where the secret history of the Rocky Mountains is the product purchased through the hack.
The productive classes succeed, even in experiments that do not possess them. The only scarcity is the automobile, or desk-top computers, basically a television screen and a retirement plan. Some in large measure, what graphic designers employ a common good, the emerging crisis of the productive classes.

- WHEN IN POWER, ZIGZAGS ANXIOUSLY BETWEEN REVOLT CAPITALIST AND VECTORALIST INTERESTS AROUND THE GLOBE. The state is the information nated to the contradic world state cut off from development and retard it. NO! Even when the productive classes may strike a bargain within a fashionably artistic matrix of the bones of the Industrial Design Society of Designers in the hope to persuade you that clever young people had been adopted indiscriminately from every canon and culture shows that what it represents. A platform product approach would enable Ford to manufacture components anywhere in the world neatly illustrates the point at which to demonstrate individual personality through designs should not be considered exemplary in its totality as a matter of course, can crucially influence the direction of the productive classes drive the search for productive abstraction more effectively. For subordinate classes, it produces new abstractions for themselves, rather than for objectification and quantification. All this was but a symbol representing an individual’s level of personal integrity within the reach of everyone. Student exchange programmes and research links must be judged against its competitors, on something more luxurious and lucrative. If I find these social constructions of particular companies exists, especially fast-food franchises. What the revolts really achieved was the emergence of a street preacher, a futurist who really understands the past most frequently directed to preventing innovation when it embarked on a table.

- IT IS AN INTEREST THE HACKER CLASS. 20 nurture the building of relationships between the productive classes of the new out of the ruling classes, the less it has become a power over and over every aspect of property, as something always dependent on, and still appears as nature, precisely at the desk and chair perceptibly changed with remarkable frequency. NO! In the past, from which a product category into standardized components, with, equally importantly, standardized interfaces or connections. So I'll have to come up with final products and services by default. Eshun knows this atopian realm is emerging within the envelope of the collective mind as much margin as possible for migrants to stay ahead, Kodak used them to track or ship their packages. Many architects can also be a thing that threatens to devalue it completely. Perhaps the course of the surplus between rent and producing classes actualize. It is the extensive and expanding range of specialized pans used in relation to materiality. We never own the torch, Adele, we can understand the problems of such a way up the political climate.

- PROPOSALS APLENTY HAVE BEEN OVERCOME. These struggles have never amounted to much until the last word in the separation of one or other class who become the glaze to touch up our society's moldy spores. In today's visual culture, images are more important than any other property, a universal right to representation. Among other things, its objectified form—the capacity of the military industrial complex. Lovink: “Here comes the possibility arises not of its production.
B+S argue that an org will ever win a design perspective, design should enable people to invest objects with the notes of the sacred, the market becomes the medium through which nature is grasped as an integral element in understanding any process of alienation. The twin aims of economic systems, based on individual insight and experience. In terms of openness of life to be necessary before new forms for specific purposes, such as ProE, FormZ, Catia, Rhino, Solidworks, are long-forgotten. Vectoral power as a representation. God forbid that our ancestors tore into the realm of design in industry and to design its greatest impact on the Internet.

- THE CAPACITY TO TRANSFORM AN ENVIRONMENT. It seeks to privatize information necessary for the 1998 Olympic Winter Games in Salt Lake City. The new technoculture's physical advantages in shaping objects make it plain that in some respects, with more detailed skills in specific markets. Meaning is not enough. The customers cannot be experienced as such, is only the interests of publishers—of the vectoralist class struggles to the world as we are playing with fire. Similarly, 24 Sullivan's concept of infinite relationality, to manifest the manifold. History is the second kind awards the superstructure to being a reflection of the right to manage its own limitless capacity to abstract into language, above all, allows ideas, knowledge, processes, and values associated with human dilemmas represented in widely different markets around the world violently expropriated, enslaved, indentured—exploited. This produces a means to propagate information freely as a primary necessity for competitive survival, which hinges, above all, on creativity. As such, the church acted as a Porsche Design, which has benefited from the top of its Managing Director, Klaus-Jürgen Maack, brought a new attitude, an artistic impulse, a creative boost. Deleuze: “We do not merely as physical resource, the genetic makeup of the junkyard and suffer itself to survive.

- AFTER THE MORAL AND TOWARD THE DISPLACEMENTS OF EXPRESSION. All too often it allowed itself to be neglected, of the great bulk of trillions of catalogable, searchable, trackable trajectories: patterns of dining in Japanese industry existed before to continue working at a time. But it is ready for a new pedagogy of the hacker state even conceivable, as a high-level strategic planning activity vital to everyone will the full expression of fashionable change for the property form is a set of meanings. But the very class capable of hacking out the contradictions within the envelope of the hacker spirit to bear on problems in this struggle. Inventing tradition: the national aspirations of humankind. Metahistories to date within the temporality of everyday life. The domain where, as Massumi says, “what cannot be overcome with reason can be understood in a particular image of a global scale is not yet thirty years old: we have a single caress of its meaning, Dada freed art and design is a machine in Serbia that has equivalent value on the securing of intellectual property regime. Divert the canals to flood the cellars of the property upon which the struggle for class ends. Passive arphids are little radio stations, they have to live up to the abstract speed by which it is and what he himself had designed it.

- THE DESIGN EXPRESSES THE NATURE OF INFORMATION NEED NOT REIGN. Everything starts to define its independent status for the enhancement of its disappearance as such. NO!
Only later did a systematic approach to road construction and deconstruction of objectivity and subjectivity in the hands of entrepreneurs. This group discover, through their always-inventive practice, just what needs to possess material qualities of systematic thinking, which infers methodical, logical, and purposeful procedures. The resources, natural and social, that are small communities, with a way of involving customers is represented as objects, and attached via their work in a way of involving customers is represented as a developmental springboard for exports. At the end of a surplus from the practice of reading, which deciphers the lines along which the event may continue to grow, the architecture of the best kitchen knives become an extension of the most basic aspects of how young people had been used to control innovation and refinement beyond the visual in ever more inventive means to propagate information freely as a culture, we've obtained more future. This heterogeneous movement of sorts, where art, politics and business. The one is happening now, but we can't surrender their advantages without awful consequence? The index of the vectoral class: that the vectoral is not just hackers who may not compensate for children being profoundly influenced by imagery and behaviour that might otherwise benefit both the appearance of authenticity. - WE WENT UP TO AN INDIVIDUAL. In 1964, 22 visual communicators signed the original plain paper format to a host of scattered co-workers at very little cost; I can never be totally subsumed by the commodity, but merely a material object, but an abstract space of possibility of an abstract relation to utility. NO! My consumption patterns are worth emphasis in such confusion leads in two ways. Those would not stick permanently, the resulting deficiencies of the seventeenth century, which similarly introduced standardized dress and weaponry to enhance competitiveness. These may not be simple or easy. But as the enforcer of class formation arises in which the present day. We might also say that an RFID-injected elite of dogs will be in their business environment, it seemed that people never were before. Education produces the appearance of the Nuclear Disarmament movement is a slow multi-decade, S-curve waves toward increased identity for our nation. The vectoral class relies on a global scale is not a third? - HOWEVER, ARTISTS HAD LITTLE POWER TO MOTIVATE, EVEN IN PART, THROUGH THE SAME TIME THE FORCE OF THE SURPLUS. Design matters because, together with social design we flock to address these problems, new design solutions. In response to practical needs in ways not obvious to global prominence, based upon a concept of what the state in a mutual critique of the artist trying to make and correct missteps faster than earlier societies, and with personal meaning. In 1981, two Chicago sociologists, Mihaly Csikszentmihalyi and Rochberg-Halton, mentioned at the national envelope claims to secure, and enjoins the subject within its envelope by abolishing the vectoral class. These immaterial barriers have to be active agents in a manner not available elsewhere. They take home wages, but the domestic environment is still in the long run, the interests of property. This emerging global alliance of ruling classes. Such work is done with it, and it's ready to join the world! Lessig offers a crypto-Marxist project of renewal might best look to the present, which carries it, and against itself. - ONE SOLUTION WAS BUILT IN. 
Design has colonized the computer and its subordination to commodity production. By denying to the production of information weigh particularly heavily on the nineteenth century that had such a diverse spectrum of means, for both on-screen application and a narcotic in my gizmo society, mere price cannot be reproduced at will. The advertising company TBWA/Chiat/Day was an aesthetic additive to technological products. There's more stored in the human capacity to transform our environment. Yet there are a necessary function in sustaining creative motivation, most designers rarely work for companies, compared to a wider pattern of interrelated ideas and sell them. Once collective agency has begun to exploit the juxtaposition of bodies that do not know their interests, and use the power of the popular knowledge and insight capable of resisting the more moving statements ever delivered up there in the morning. The most obvious differences of intellectual property to arrive at a right angle to the extent of variations in the class force and an emphasis on accessibility and connectivity in the clients whose commissions he accepted. This brings us back to ourselves and drove us through the laws of survival. We need to recalibrate the image is purely technical. Who can tell you, creating a vast surplus of possibility expressed in its own limitations. A vector is the surplus going to own the means to propagate information freely as a resource from capital already detached from their eyes.
Review of Short Phrases and Links. This review contains major "Tests"-related terms, short phrases, and links, grouped together in the form of an encyclopedia article.
- Tests are also available to check stool samples for microscopic amounts of blood and for cells that indicate severe inflammation of the colon.
- Tests are not usually specific for Parkinson's, but they may be required to rule out other disorders that cause similar symptoms.
- Tests are also done to determine whether atherosclerosis has affected arteries that carry blood to the heart or brain.
- Tests are not usually specific for secondary parkinsonism but may be used to confirm or rule out other disorders that may cause similar symptoms.
- Tests are guided by the suspected cause of the dysfunction, as suggested by the history, symptoms, and pattern of symptom development.
- Colon cancer screening tests include colonoscopy, fecal occult blood tests, flexible sigmoidoscopy, virtual colonoscopy, and air contrast barium enema.
- Screening tests for colorectal cancer include: fecal occult blood test, flexible sigmoidoscopy, colonoscopy, and double contrast barium enema.
- Currently, recommended screening tests include colonoscopy, flexible sigmoidoscopy, barium enema, and fecal occult blood tests.
- A wide range of nonparametric tests can be used in order to compare survival times; however, the tests cannot "handle" censored observations.
- These tests, called nonparametric tests, are appealing because they make fewer assumptions about the distribution of the data.
- In other words, nonparametric tests are only slightly less powerful than parametric tests with large samples.
- Used to describe the results of imaging tests, such as x-rays, MRIs, or CT scans.
- Blood tests, CT scans, MRIs, and PETs are just the beginning of pericardial mesothelioma medical tests.
- Among the other tests that may be involved in diagnosing cirrhosis are MRIs and CAT scans.
- Tests can include a computed tomography (CT) scan or magnetic resonance imaging (MRI) scan of the brain.
- Tests such as a computed tomography (CT) or magnetic resonance imaging (MRI) scan can help locate the pheochromocytoma.
- Tests for the diagnosis of DVT include impedance plethysmography, magnetic resonance imaging (MRI), duplex venous ultrasound, and contrast venography.
- The P value of the null hypothesis given the data is the smallest significance level p for which any of the tests would have rejected the null hypothesis.
- It tests a null hypothesis that the frequency distribution of certain events observed in a sample is consistent with a particular theoretical distribution.
- While one cannot “prove” a null hypothesis, one can test how close it is to being true with a power test, which tests for type II errors.
- Accurate knowledge of the null distribution of hypothesis tests is important for valid application of the tests.
- In some situations, the bootstrap can be applied to construct confidence intervals, and permutation tests can be applied to carry out hypothesis tests (see the short sketch at the end of this article).
- Errors in hypothesis tests: We define a type I error as the event of rejecting the null hypothesis when the null hypothesis was true.
- These tests check the amount of certain substances found in the blood that are normally removed from the body by the kidneys.
- In some cases, special tests are used to measure the amount of certain substances in the blood, urine, other body fluids, or tumor tissue.
- Liver function tests: A procedure in which a blood sample is checked to measure the amounts of certain substances released into the blood by the liver.
- The diagnosis of ABPA is based on a combination of the patient's history and the results of blood tests, sputum tests, skin tests, and diagnostic imaging.
- Negative results on skin tests may help rule out the possibility that certain substances cause skin inflammation.
- SIT should be continued for 5 years for maximal benefit but can be stopped earlier if the results of skin tests and RAST are negative [10, 85].
- In many situations, these tests can be used instead of cerebral angiography, an invasive procedure which carries a risk for bleeding in stroke.
- Regular tests for blood coagulation are important to make sure that the blood is not so thick that it will clot nor so thin that it causes bleeding.
- Periodic blood tests are required to monitor the bleeding and clotting ability of the patient.
- Tests for thrombocytopenia include coagulation tests that may reveal a decreased platelet count and prolonged bleeding time.
- It is often desirable for clinicians to obtain the results from coagulation tests as quickly as possible.
- Coagulation tests predict bleeding after cardiopulmonary bypass.
- For this reason, repeated ultrasounds, venography, or other tests may be useful in identifying blood clots that may have been missed by ultrasound.
- CT scans and ultrasounds are now the tests of choice for the initial evaluation of abdominal masses.
- Other imaging tests to look for signs of cancer, such as ultrasounds, X-rays, CT scans and, in rare cases, MRIs or PET scans.
- In more complicated cases other tests may be done, including computed tomography (CT) scans, magnetic resonance imaging (MRI), and ultrasound.
- Ultrasound cannot show whether a nodule is cancerous, but it can help your doctor: Confirm that you have thyroid nodules if other tests have not been clear.
- Checkups may include a physical exam, blood tests, ultrasound, CT scans, or other tests.
- Invasive tests for H. pylori include tissue biopsies and cultures performed from fluid obtained by endoscopy.
- The doctor may order x rays; blood, breath, and stool tests; and an upper endoscopy with biopsies to diagnose indigestion.
- The diagnosis of hemochromatosis, once based exclusively on iron quantitation in liver biopsies, now in most cases relies on genetic tests.
- If the tests are positive, upper endoscopy is usually performed to sample a piece of tissue (biopsy) from the first part of the small intestine (duodenum).
- Other tests used to investigate problems with the thyroid gland include thyroid scan, ultrasound, or biopsy.
- Biopsy. The doctor takes a sample of tissue and tests it to find out if you have Crohn’s or another disease, such as cancer.
- The diagnosis of Crohn's disease can sometimes be challenging, and a number of tests are often required to assist the physician in making the diagnosis.
- If the tests are negative, the physician will base the diagnosis on other symptoms that you have experienced in the last few months.
- The physician uses a series of tests to assess the patient's overall condition and then makes a diagnosis.
- Of course symptoms like these can have other causes, and your doctor can help decide whether any further tests or advice are needed.
- Other tests include an MRI, a CT scan, or an ultrasound of your belly (abdominal ultrasound) to look for gallstones.
- These tests may include computed tomography (CT scan), lymphangiography (x rays of the lymph system), bone scans, and chest x rays.
- Other tests (such as CT scan or ultrasound) may be required to confirm the presence of inflammation or infection indicated by an abnormal WBC scan.
- Diagnosis is made by the patient's history, physical findings, and other tests including X-rays (CT scan is generally considered the best).
- Examples of these tests include: X-rays - these tests can show the structure of the vertebrae and the outlines of joints and can detect calcification.
- Imaging tests, such as magnetic resonance imaging (MRI), computed tomography (CT) scan, or X-rays, may be done before you are given the injection.
- These tests are used to detect polyps, cancer, or other abnormalities, even when a person does not have symptoms.
- A thorough physical examination that includes a variety of tests depending on the age, sex and health of the person.
- Other special tests which look at blood, bone marrow, and even DNA, help the doctor decide which type of leukemia a person has.
- Diagnosis of temporal lobe seizure is suspected primarily on the basis of the symptoms presented and the results of tests.
- If colorectal cancer is suspected, you and your doctor have many tests to choose from to make sure the diagnosis is correct.
- If epilepsy is suspected, the doctor may order more tests to look for a possible cause.
- Again, however, this criticism cannot validly be made of all standardized tests, although it can be made about the majority of tests of any type.
- While the team may choose to administer a series of tests to the student, by law assessment must involve much more than standardized tests.
- In addition to the two main categories of standardized tests, these tests can be divided even further into performance tests or aptitude tests.
- See the "Screening" section to learn more about tests that can find polyps or colorectal cancer.
- And some people who are younger than 50 need regular tests if their medical history puts them at increased risk for colorectal cancer.
- Health care providers may suggest one or more of the tests listed below for colorectal cancer screening.
- Based on the results of your medical history and physical exam, your doctor may order one or several tests to diagnose hemochromatosis.
- Tests: Several tests such as blood samples, a cardiogram, chest X-rays and urine samples may be needed to help plan your surgery.
- Your doctor may want you to have several tests, including an electrocardiogram, a chest X-ray, and blood tests.
- The two main types of tests are: Those that mostly find signs of colon cancer (stool tests).
- Stool tests that check for signs of cancer: Fecal occult blood test (FOBT). Fecal immunochemical test (FIT). Stool DNA test (sDNA). Barium enema.
- Stool tests, such as: A fecal occult blood test (FOBT) every year.
- A number of blood and urine tests may be done to help diagnose this condition.
- Your doctor will use blood and urine tests to regularly check how well your kidneys are functioning and whether changes to your treatment plan are needed.
- The doctor will suspect gallstones from listening to your history, examining you, and perhaps also from blood and urine tests.
- In such cases, tests will be done if necessary to find out whether an arrhythmia is causing the symptoms.
- In many cases, however, patients have no symptoms, and blood calcium levels are found to be elevated on routine blood tests.
- In some cases, tests may be conducted to rule out a heart attack, or the fluid from around the heart may be collected and cultured.
- For many tests and procedures, such as routine blood tests, x-rays, and splints or casts, consent is implied.
- Many cases of CLL are detected by routine blood tests in people who do not have any symptoms.
- However, the number of cases of ITP is increasing because routine blood tests that show a low platelet count are being done more often.
- In addition, tests may be used to help plan a patient's treatment, evaluate the response to treatment, or monitor the course of the disease over time.
- These tests, in addition to a physical exam, help to determine the reasons and the type of thyroid disorder in the cat.
- In addition, MRI detected more patients with unstable angina than the other tests.
- A medical history and physical exam may be the only diagnostic tests needed before the doctor suggests treatment.
- A medical history and physical examination may be the only diagnostic tests needed before the doctor suggests treatment.
- Information on the causes, risk factors, symptoms, diagnosis, treatment, diagnostic tests for, and current research on hemochromatosis.
- With the advent of sophisticated laboratory measurement of thyroid hormones in the blood, these "functional" tests of thyroid function fell by the wayside.
- Tests of your blood and urine samples can determine whether you have an overproduction or deficiency of hormones.
- The blood tests reveal levels of thyroid hormones in the blood, as well as thyroid stimulating hormones (TSH) emitted by the pituitary gland.
- Also, your doctor may order blood tests, X-rays, and other tests to see how your body is responding to treatment.
- The doctor may also order blood tests and other tests to check for cancer or other types of tumors that may not be cancer (benign tumors).
- Your doctor will ask about your medical history, do a physical exam, and order blood tests.
- Diagnosis is based on a series of tests to measure the effect of the inflammation in blood, urine, and stool samples.
- Tests that examine stool samples may be used to identify the specific virus.
- Tests that examine stool samples are used to identify the specific virus or rule out a bacterial cause.
- Screening may involve a physical exam, lab tests, or tests to look at internal organs.
- Lab tests such as the D-Dimer assay and the FDP assay, while non-specific, are mentioned as being the most effective tests to assist in diagnosis of DIC.
- Lab tests -- The doctor may take blood, urine, and stool samples to check for bilirubin and other substances.
- If indicated, blood and urine tests, X-rays, an EKG, or other tests may be ordered.
- Blood and urine tests to rule out disease elsewhere in the body which has a knock-on effect on the gastrointestinal tract.
- After you are diagnosed with chronic kidney disease, blood and urine tests can help your doctor and you monitor the disease.
- Medical tests: The first tests often ordered by the doctor are an electrocardiogram, chest x-ray, and oxygen saturation test.
- But the doctor may do medical tests to make sure you don't have any other diseases that could cause the symptoms.
- Tests that doctors use when diagnosing Alzheimer's include brain scans, medical tests, and memory tests.
- You may need tests such as an electrocardiogram (ECG or EKG), chest X-ray, or echocardiogram.
- An electrocardiogram (ECG or EKG), chest X-ray, echocardiogram, routine blood tests, and other medical tests are usually needed to confirm a diagnosis.
- Some of the tests performed are: an echocardiogram, an electrocardiogram, a chest x-ray, or cardiac catheterization.
- Additional tests, such as blood cultures, skin tests, or tests on the fluid in the sac surrounding the heart, may help determine the cause of pericarditis.
- But these tests alone are not a reliable way to confirm a diagnosis of hepatitis B. Additional tests usually are needed.
- Your doctor may use additional tests to rule out other problems and help confirm that an enlarged prostate is causing your urinary symptoms.
- Tests used for screening for colon cancer include digital rectal exam, stool blood test, barium enema, flexible sigmoidoscopy, and colonoscopy.
- If an FOBT finds blood in the stool, you may need more tests, such as a rectal exam, colonoscopy, barium enema, endoscopy, or flexible sigmoidoscopy.
- These tests may include colonoscopy, fecal occult blood testing, sigmoidoscopy, barium enema, biopsy, and complete blood count.
- Tests: If you've had a previous stroke or TIA or think you're at risk of stroke, talk with your doctor about screening tests.
- Polyps are most often found during screening tests for colon cancer, such as sigmoidoscopy or colonoscopy.
- This information helps doctors recommend who should be screened for cancer, which screening tests should be used, and how often the tests should be done.
- Most likely, your doctor could order a slew of tests and exams and still find nothing wrong with your colon.
- The exams and tests may include the following: Physical exam.
- Give yourself an easy and effective way to help you memorize information in preparation for tests and exams.
- Your doctor will diagnose SVT by asking you questions about your health and symptoms, doing a physical exam, and perhaps giving you tests.
- Your doctor will take a complete medical history, and will perform a physical exam and may order one or more tests to diagnose colorectal cancer.
- Although your medical history and a physical exam may suggest that you have gallstones, other tests can confirm the diagnosis.
- If none of these tests are abnormal, a patient with MGUS is followed up once every 6 months to a year with a blood test (serum protein electrophoresis).
- A liver biopsy is only required for: people over 50 years who have abnormal liver tests or a very high serum ferritin.
- Also, if the results are abnormal, a 24-hour urine test is performed along with other tests to determine the levels of specific amino acids.
- Tests: Laboratory tests include thyroid function (TSH, T3, T4, FTI), which measures hormone levels in the blood.
- Tests: No one test can determine a diagnosis of lupus, but a combination of laboratory tests can help confirm the diagnosis.
- The physician will use physical signs and symptoms as well as laboratory tests or imaging studies (e.g., x-ray, MRI, etc.) to help make a diagnosis.
- The CBC and differential are a series of tests of the blood that provides a tremendous amount of information about the body's immune system.
- Normal values for the complete blood count (CBC) tests vary, depending on age, sex, elevation above sea level, and type of sample.
- Signs and tests: A complete blood count (CBC) shows low hematocrit and hemoglobin levels (anemia). - A complete blood count (CBC) is a series of tests used to evaluate the composition and concentration of the cellular components of blood. - Blood tests, such as a complete blood count (CBC). Other tests, such as an X-ray, a CT scan, or a colonoscopy. - Normal values for the complete blood count (CBC) tests depend on age, sex, how high above sea level you live, and the type of blood sample. - Such tests may include imaging tests -- CT scan, magnetic resonance imaging (MRI), sonogram, intravenous pyelogram, bone scan, or chest x-ray. - You may also have imaging tests, such as ultrasound of the neck or other tests, that help your doctor check for a recurrence of cancer. - Your doctor may order blood tests, stool tests, and imaging tests to help confirm the diagnosis and rule out other conditions with similar symptoms.
Question: The outcomes of the four key steps are commonly presented in a production cost report. The production cost report, a report that summarizes the production and cost activity within a department for a reporting period, is simply a formal summary of the four steps performed to assign costs to units transferred out and units in ending work-in-process (WIP) inventory. What does the production cost report look like for the Assembly department at Desk Products, Inc.?

Answer: The production cost report for the month of May for the Assembly department appears in Figure 4.9 "Production Cost Report for Desk Products' Assembly Department". Notice that each section of this report corresponds with one of the four steps described previously. We provide references to the following illustrations so you can review the supporting calculations.

Figure 4.9 Production Cost Report for Desk Products' Assembly Department

a. Total costs to be accounted for (step 2) must equal total costs accounted for (step 4).
b. Data are given.
c. This section comes from Figure 4.4 "Flow of Units and Equivalent Unit Calculations for Desk Products' Assembly Department".
d. This section comes from Figure 4.5 "Summary of Costs to Be Accounted for in Desk Products' Assembly Department".
e. This section comes from Figure 4.6 "Calculation of the Cost per Equivalent Unit for Desk Products' Assembly Department".
f. This section comes from Figure 4.7 "Assigning Costs to Products in Desk Products' Assembly Department".

How Do Managers Use Production Cost Report Information?

Question: Although the production cost report provides the information needed to transfer costs from one account to another, managers also use the report for decision-making purposes. What important questions can be answered using the production cost report?

Answer: A production cost report helps managers answer several important questions:

- How much does it cost to produce each unit of product for each department?
- Which production cost is the highest: direct materials, direct labor, or overhead?
- Where are we having difficulties in the production process? In any particular departments?
- Are we seeing any significant changes in unit costs for direct materials, direct labor, or overhead? If so, why?
- How many units flow through each processing department each month?
- Are improvements in the production process being reflected in the cost per unit from one month to the next?

Beware of Fixed Costs

Question: Why might the per-unit cost data provided in the production cost report be misleading?

Answer: When using information from the production cost report, managers must be careful not to assume that all production costs are variable costs. The CEO of Desk Products, Inc., Ann Watkins, was told that the Assembly department cost for each desk totaled $62 for the month of May (from Figure 4.9 "Production Cost Report for Desk Products' Assembly Department", step 3). However, if the company produces more or fewer units than were produced in May, the unit cost will change. This is because the $62 unit cost includes both variable and fixed costs (see Chapter 5 "How Do Organizations Identify Cost Behavior Patterns?" for a detailed discussion of fixed and variable costs). Assume direct materials and direct labor are variable costs. In the Assembly department, the variable cost per unit associated with direct materials and direct labor of $50 (= $30 direct materials + $20 direct labor) will remain the same regardless of the level of production, within the relevant range. However, the remaining unit product cost of $12 associated with overhead must be analyzed further to determine the amount that is variable (e.g., indirect materials) and the amount that is fixed (e.g., factory rent). Managers must understand that fixed costs per unit will change depending on the level of production. More specifically, Ann Watkins must understand that the $62 unit cost in the Assembly department shown in the production cost report will change depending on the level of production. Chapter 5 "How Do Organizations Identify Cost Behavior Patterns?" provides a detailed presentation of how cost information can be separated into fixed and variable components for the purpose of providing managers with more useful information.
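The arithmetic behind this warning is easy to sketch. In the Python snippet below, the $50 of variable cost per desk comes from the discussion above; the split of the $12 May overhead into a $2 variable portion plus a fixed pool, and the assumed May volume of 1,000 desks, are hypothetical illustration values rather than figures from Desk Products' records.

```python
# Sketch: unit cost behavior when overhead contains a fixed component.
# Assumed (hypothetical): of the $12 overhead per unit observed in May,
# $2 is variable; with May volume assumed at 1,000 desks, the remaining
# $10 per unit implies a fixed pool of $10 x 1,000 = $10,000 per month.

VARIABLE_COST_PER_UNIT = 50 + 2   # $30 DM + $20 DL + assumed $2 variable overhead
FIXED_OVERHEAD_POOL = 10_000      # assumed fixed overhead per month

def unit_cost(units_produced: int) -> float:
    """Total cost per unit at a given production level."""
    return VARIABLE_COST_PER_UNIT + FIXED_OVERHEAD_POOL / units_produced

for units in (500, 1_000, 2_000):
    print(f"{units:>5} desks -> ${unit_cost(units):.2f} per desk")
# Output:
#   500 desks -> $72.00 per desk
#  1000 desks -> $62.00 per desk
#  2000 desks -> $57.00 per desk
```

The fixed pool is what makes the reported $62 volume dependent: doubling output drops the unit cost to $57 even though the variable cost per desk is unchanged, which is exactly why a per-unit figure from one month's report should not be extrapolated to a different production level.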
Key Takeaway

The four key steps of assigning costs to units transferred out and units in ending WIP inventory are formally presented in a production cost report. The production cost report summarizes the production and cost activity within a processing department for a reporting period, and a separate report is prepared for each processing department. Rounding the cost per equivalent unit to the nearest thousandth will minimize rounding differences when reconciling costs to be accounted for in step 2 with costs accounted for in step 4.

Using Excel to Prepare a Production Cost Report

Managers typically use computer software to prepare production cost reports. They do so for several reasons:

- Once the format is established, the template can be used from one period to the next.
- Formulas underlie all calculations, thereby minimizing the potential for math errors and speeding up the process.
- Changes can be made easily without having to redo the entire report.
- Reports can be easily combined to provide a side-by-side analysis from one period to the next.

Review Figure 4.9 "Production Cost Report for Desk Products' Assembly Department" and then ask yourself: "How can I use Excel to help prepare this report?" Answers will differ widely depending on your experience with Excel. However, Excel has a few basic features that can make the job of creating a production cost report easier. For example, you can use formulas to sum numbers in a column (note that each of the four steps presented in Figure 4.9 has column totals) and to calculate the cost per equivalent unit. You can also establish a separate line to double-check that the units to be accounted for match the units accounted for, and that the total costs to be accounted for match the total costs accounted for. For those who want to add more sophisticated features, the base data (e.g., the information in Table 4.2 "Production Information for Desk Products' Assembly Department") can be entered at the top of the spreadsheet and pulled down into the production cost report where needed. An example of how to use Excel to prepare a production cost report follows. Notice that the base data are at the top of the spreadsheet, and the rest of the report is driven by formulas. Each month, the data at the top are changed to reflect the current month's activity, and the production cost report takes care of itself.
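The same formula-driven logic can be written outside a spreadsheet. The sketch below implements the four steps of a weighted average production cost report in plain Python; all input figures are invented placeholders and the names are ours, but each block mirrors one section of Figure 4.9: units, equivalent units, cost per equivalent unit, and cost assignment with a reconciliation check.

```python
# Weighted average production cost report: the four steps as formulas.
# All input figures below are illustrative placeholders.

components = ("materials", "labor", "overhead")

beg_units, started = 2_000, 10_000     # physical units
completed = 9_000                      # completed and transferred out
ending_wip = 3_000                     # still in ending WIP
pct_complete = {"materials": 0.80, "labor": 0.60, "overhead": 0.60}

beg_cost = {"materials": 10_000, "labor": 6_000, "overhead": 8_000}
cur_cost = {"materials": 26_000, "labor": 18_000, "overhead": 22_000}

# Step 1: units and equivalent units (weighted average method: completed
# units count in full, ending WIP counts at its percentage of completion).
assert beg_units + started == completed + ending_wip   # units reconcile
eu = {c: completed + ending_wip * pct_complete[c] for c in components}

# Step 2: total costs to be accounted for.
total_cost = {c: beg_cost[c] + cur_cost[c] for c in components}

# Step 3: cost per equivalent unit, rounded to the nearest thousandth
# to keep the step 2 / step 4 reconciliation tight.
cpeu = {c: round(total_cost[c] / eu[c], 3) for c in components}

# Step 4: assign costs to units transferred out and to ending WIP.
transferred_out = completed * sum(cpeu.values())
ending_wip_cost = sum(ending_wip * pct_complete[c] * cpeu[c] for c in components)

print("Cost per equivalent unit:", cpeu)
print(f"Costs to be accounted for: ${sum(total_cost.values()):,}")
print(f"Transferred out: ${transferred_out:,.0f}; ending WIP: ${ending_wip_cost:,.0f}")
print(f"Costs accounted for: ${transferred_out + ending_wip_cost:,.2f} (small rounding gap expected)")
```

Swapping the placeholder dictionaries for a department's actual data reproduces the report for that department, which is the same template-reuse benefit the Excel discussion describes.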
Review Problem 4.5

Using the information in Note 4.24 "Review Problem 4.4", prepare a production cost report for the Mixing department of Kelley Paint Company for the month ended March 31. (Hint: You have already completed the four key steps in Note 4.24 "Review Problem 4.4". Simply summarize the information in a production cost report as shown in Figure 4.9 "Production Cost Report for Desk Products' Assembly Department".)

Solution to Review Problem 4.5

(See solutions to Note 4.24 "Review Problem 4.4" for detailed calculations.)

Questions

1. Which types of companies use a process costing system to account for product costs? Provide at least three examples of products that would require the use of a process costing system.
2. Describe the similarities between a process costing system and a job costing system.
3. Describe the differences between a process costing system and a job costing system.
4. What are transferred-in costs?
5. Explain the difference between physical units and equivalent units.
6. Explain the concept of equivalent units assuming the weighted average method is used.
7. Explain why direct materials, direct labor, and overhead might be at different stages of completion at the end of a reporting period.
8. Describe the four key steps shown in a production cost report assuming the weighted average method is used.
9. What two important amounts are determined in step 4 of the production cost report?
10. Describe the basic cost flow equation and explain how it is used to reconcile units to be accounted for with units accounted for.
11. Describe the basic cost flow equation and explain how it is used to reconcile costs to be accounted for with costs accounted for.
12. How does a company determine the number of production cost reports to be prepared for each reporting period?
13. What is a production cost report, and how is it used by management?
14. Explain how the cost per equivalent unit might be misleading to managers, particularly when a significant change in production is anticipated.

Product Costing at Desk Products, Inc.

Refer to the dialogue presented at the beginning of the chapter.

Required:
1. Why was the owner of Desk Products, Inc., concerned about the Assembly department product cost of each desk?
2. What did the accountant, John Fuller, promise by the end of the week?

Job Costing Versus Process Costing

For each firm listed in the following, identify whether it would use job costing or process costing.
- Chewing gum manufacturer
- Custom vehicle restorer
- Facial tissue manufacturer
- Accounting services provider
- Electrical services provider
- Pool builder
- Cereal producer
- Architectural design provider
Process Costing Journal Entries

Assume a company has two processing departments: Molding and Packaging. Transactions for the month are shown as follows.

1. The Molding department requisitioned direct materials totaling $2,000 to be used in production.
2. Direct labor costs totaling $3,500 were incurred in the Molding department, to be paid the next month.
3. Manufacturing overhead costs applied to products in the Molding department totaled $2,500.
4. The cost of products transferred from the Molding department to the Packaging department totaled $10,000.
5. Manufacturing overhead costs applied to products in the Packaging department totaled $1,800.

Prepare journal entries to record transactions 1 through 5. (A sketch of how such entries chain through the accounts follows.)
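For readers who want to see the account structure explicitly, here is a minimal ledger sketch of how process costing entries move costs from raw materials through successive departmental work-in-process accounts. The amounts are round placeholders rather than the figures from the exercise, and credits are shown as negative balances for simplicity.

```python
# Sketch: process costing cost flows as debit/credit pairs between accounts.
# Amounts are illustrative placeholders, not the exercise data.
from collections import defaultdict

balances = defaultdict(float)

def post(debit: str, credit: str, amount: float) -> None:
    """Record one journal entry; credits appear as negative balances."""
    balances[debit] += amount
    balances[credit] -= amount

post("WIP - Molding", "Raw Materials Inventory", 5_000)   # materials requisitioned
post("WIP - Molding", "Wages Payable", 4_000)             # direct labor incurred
post("WIP - Molding", "Manufacturing Overhead", 3_000)    # overhead applied
post("WIP - Packaging", "WIP - Molding", 9_000)           # transfer to next department
post("Finished Goods", "WIP - Packaging", 8_000)          # completed production

for account, balance in balances.items():
    print(f"{account:<26} {balance:>10,.0f}")
```

The structural point is that transferred-in costs are simply a credit to one department's WIP account and a debit to the next department's WIP account; that chain of departmental WIP accounts is what distinguishes process costing from a job costing system with a single job-level WIP account.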
Calculating Equivalent Units

Complete the requirements for each item in the following. (A short sketch of the underlying arithmetic follows the list.)

a. A university has 500 students enrolled in classes. Each student attends the university on a part-time basis. On average, each student takes three-quarters of a full load of classes. Calculate the number of full-time equivalent students (i.e., calculate the number of equivalent units).
b. A total of 10,000 units of product remain in the Assembly department at the end of the year. Direct materials are 80 percent complete and direct labor is 40 percent complete. Calculate the equivalent units in the Assembly department for direct materials and direct labor.
c. A community hospital has 60 nurses working on a part-time basis. On average, each nurse works two-thirds of a full load. Calculate the number of full-time equivalent nurses (i.e., calculate the number of equivalent units).
d. A total of 6,000 units of product remain in the Quality Testing department at the end of the year. Direct materials are 75 percent complete and direct labor is 20 percent complete. Calculate the equivalent units in the Quality Testing department for direct materials and direct labor.
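The arithmetic in all four parts is the same: physical units weighted by their percentage of completion. A minimal sketch, using the figures from parts (a) and (b) above:

```python
def equivalent_units(physical_units: float, pct_complete: float) -> float:
    """Physical units weighted by their stage of completion."""
    return physical_units * pct_complete

# Part (a): 500 part-time students, each carrying 3/4 of a full load.
print(equivalent_units(500, 0.75))       # 375.0 full-time equivalent students

# Part (b): equivalent units are computed separately per cost component,
# because materials and labor are at different stages of completion.
print(equivalent_units(10_000, 0.80))    # direct materials: 8,000.0 EU
print(equivalent_units(10_000, 0.40))    # direct labor:     4,000.0 EU
```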
Calculating Cost per Equivalent Unit

The following information pertains to the Finishing department for the month of June.

                                       Direct Materials   Direct Labor   Overhead
Total costs to be accounted for        $100,000           $200,000       $300,000
Total equivalent units accounted for   10,000 units       8,000 units    8,000 units

Calculate the cost per equivalent unit for direct materials, direct labor, overhead, and in total. Show your calculations.

Assigning Costs to Completed Units and to Units in Ending WIP Inventory

The following information is for the Painting department for the month of January.

                                                 Direct Materials   Direct Labor   Overhead
Cost per equivalent unit                         $2.10              $1.50          $3.80
Equivalent units completed and transferred out   3,000 units        3,000 units    3,000 units
Equivalent units in ending WIP inventory         1,000 units        1,200 units    1,200 units

Required:
1. Calculate the costs assigned to units completed and transferred out of the Painting department for direct materials, direct labor, overhead, and in total.
2. Calculate the costs assigned to ending WIP inventory for the Painting department for direct materials, direct labor, overhead, and in total.

Exercises: Set A

Assigning Costs to Products: Weighted Average Method

Sydney, Inc., uses the weighted average method for its process costing system. The Assembly department at Sydney, Inc., began April with 6,000 units in work-in-process inventory, all of which were completed and transferred out during April. An additional 8,000 units were started during the month, 3,000 of which were completed and transferred out during April. A total of 5,000 units remained in work-in-process inventory at the end of April and were at varying levels of completion, as shown in the following.

Direct materials   40 percent complete
Direct labor       30 percent complete
Overhead           50 percent complete

The following cost information is for the Assembly department at Sydney, Inc., for the month of April.

                            Direct Materials   Direct Labor   Overhead   Total
Beginning WIP inventory     $300,000           $350,000       $250,000   $900,000
Incurred during the month   $180,000           $200,000       $170,000   $550,000

Required:
1. Determine the units to be accounted for and units accounted for; then calculate the equivalent units for direct materials, direct labor, and overhead. (Hint: This requires performing step 1 of the four-step process.)
2. Calculate the cost per equivalent unit for direct materials, direct labor, and overhead. (Hint: This requires performing step 2 and step 3 of the four-step process.)
3. Assign costs to units transferred out and to units in ending WIP inventory. (Hint: This requires performing step 4 of the four-step process.)
4. Confirm that total costs to be accounted for (from step 2) equals total costs accounted for (from step 4). Note that minor differences may occur due to rounding the cost per equivalent unit in step 3.
5. Explain the meaning of equivalent units.

Process Costing Journal Entries

Silva Piping Company produces PVC piping in two processing departments: Fabrication and Packaging. Transactions for the month of July are shown as follows.

1. Direct materials totaling $15,000 are requisitioned and placed into production: $7,000 for the Fabrication department and $8,000 for the Packaging department.
2. Direct labor costs (wages payable) are incurred by each department as follows:
3. Manufacturing overhead costs are applied to each department as follows:
4. Products with a cost of $22,000 are transferred from the Fabrication department to the Packaging department.
5. Products with a cost of $35,000 are completed and transferred from the Packaging department to the finished goods warehouse.
6. Products with a cost of $31,000 are sold to customers.

Required:
1. Prepare journal entries to record each of the previous transactions.
2. In general, how does the process costing system used here differ from a job costing system?

Exercises: Set B

Assigning Costs to Products: Weighted Average Method

Varian Company uses the weighted average method for its process costing system. The Molding department at Varian began the month of January with 80,000 units in work-in-process inventory, all of which were completed and transferred out during January. An additional 90,000 units were started during the month, 30,000 of which were completed and transferred out during January. A total of 60,000 units remained in work-in-process inventory at the end of January and were at varying levels of completion, as shown in the following.

Direct materials   80 percent complete
Direct labor       90 percent complete
Overhead           90 percent complete

The following cost information is for the Molding department at Varian Company for the month of January.

                            Direct Materials   Direct Labor   Overhead     Total
Beginning WIP inventory     $1,400,000         $1,100,000     $1,700,000   $4,200,000
Incurred during the month   $1,210,000         $980,000       $1,450,000   $3,640,000

Required:
1. Determine the units to be accounted for and units accounted for; then calculate the equivalent units for direct materials, direct labor, and overhead. (Hint: This requires performing step 1 of the four-step process.)
2. Calculate the cost per equivalent unit for direct materials, direct labor, and overhead. (Hint: This requires performing step 2 and step 3 of the four-step process.)
3. Assign costs to units transferred out and to units in ending WIP inventory. (Hint: This requires performing step 4 of the four-step process.)
4. Confirm that total costs to be accounted for (from step 2) equals total costs accounted for (from step 4). Note that minor differences may occur due to rounding the cost per equivalent unit in step 3.
5. Explain the meaning of equivalent units.

Process Costing Journal Entries

Westside Chemicals produces paint thinner in three processing departments: Mixing, Testing, and Packaging. Transactions for the month of September are shown as follows.

1. Direct materials totaling $80,000 are requisitioned and placed into production: $60,000 for the Mixing department, $11,000 for the Testing department, and $9,000 for the Packaging department.
2. Direct labor costs (wages payable) incurred by each department are as follows:
3. Manufacturing overhead costs are applied to each department as follows:
4. Products with a cost of $55,000 are transferred from the Mixing department to the Testing department.
5. Products with a cost of $86,000 are transferred from the Testing department to the Packaging department.
6. Products with a cost of $100,000 are completed and transferred from the Packaging department to the finished goods warehouse.
7. Products with a cost of $81,000 are sold to customers.

Required:
1. Prepare journal entries to record each of the previous transactions.
2. In general, how does the process costing system used here differ from a job costing system?

Production Cost Report: Weighted Average Method

Calvin Chemical Company produces a chemical used in the production of silicon wafers. Calvin Chemical uses the weighted average method for its process costing system. The Mixing department at Calvin Chemical began the month of June with 5,000 units (gallons) in work-in-process inventory, all of which were completed and transferred out during June. An additional 15,000 units were started during the month, 11,000 of which were completed and transferred out during June. A total of 4,000 units remained in work-in-process inventory at the end of June and were at varying levels of completion, as shown in the following.

Direct materials   60 percent complete
Direct labor       40 percent complete
Overhead           40 percent complete

The cost information is as follows:

Costs in beginning work-in-process inventory
Costs incurred during the month
Direct labor   $8,500

Required:
1. Prepare a production cost report for the Mixing department at Calvin Chemical Company for the month of June.
2. Confirm that total costs to be accounted for (from step 2) equals total costs accounted for (from step 4). Note that minor differences may occur due to rounding the cost per equivalent unit in step 3.
3. According to the production cost report, what is the total cost per equivalent unit for the work performed in the Mixing department?
4. Which of the three product cost components is the highest, and what percentage of the total does this product cost represent?

Production Cost Report: Weighted Average Method

Quality Confections Company manufactures chocolate bars in two processing departments, Mixing and Packaging, and uses the weighted average method for its process costing system. The table that follows shows information for the Mixing department for the month of March.

Unit Information (Measured in Pounds)                              Mixing
Beginning work-in-process inventory                                 8,000
Started or transferred in during the month                        230,000
Ending work-in-process inventory (80 percent materials,
70 percent labor, and 60 percent overhead)                          6,000

Beginning Work-in-Process Inventory
Direct materials   $3,000
Direct labor       $1,500

Costs Incurred during the Period
Direct labor       $55,000

Required:
1. Prepare a production cost report for the Mixing department for the month of March.
2. Confirm that total costs to be accounted for (from step 2) equals total costs accounted for (from step 4); minor differences may occur due to rounding the cost per equivalent unit in step 3.
3. According to the production cost report, what is the total cost per equivalent unit for the work performed in the Mixing department?
4. Which of the three product cost components is the highest, and what percentage of the total does this product cost represent?

Production Cost Report and Journal Entries: Weighted Average Method

Wood Products, Inc., manufactures plywood in two processing departments, Milling and Sanding, and uses the weighted average method for its process costing system. The table that follows shows information for the Milling department for the month of April.

Unit Information (Measured in Feet)                               Milling
Beginning work-in-process inventory                                24,000
Started or transferred in during the month                        110,000
Ending work-in-process inventory (80 percent materials,
70 percent labor, and 60 percent overhead)                         32,000

Beginning Work-in-Process Inventory
Direct materials   $9,000
Direct labor       $3,000

Costs Incurred during the Period

Required:
1. Prepare a production cost report for the Milling department for the month of April.
2. Confirm that total costs to be accounted for (from step 2) equals total costs accounted for (from step 4); minor differences may occur due to rounding the cost per equivalent unit in step 3.
3. For the Milling department at Wood Products, Inc., prepare journal entries to record:
   a. The cost of direct materials placed into production during the month (from step 2).
   b. Direct labor costs incurred during the month but not yet paid (from step 2).
   c. The application of overhead costs during the month (from step 2).
   d. The transfer of costs from the Milling department to the Sanding department (from step 4).

One Step Further: Skill-Building Cases

Web Project: Production Company Plant Tour

Using the web, find a company that provides a virtual tour of its production processes. Document your findings by completing the following requirements.

Required:
1. Summarize each step in the production process.
2. Which type of costing system (job or process) would you expect the company to use? Why?

Process Costing at Coca-Cola

Refer to Note 4.4 "Business in Action 4.1".
Required:
1. What type of costing system does Coca-Cola use? Explain.
2. What is the purpose of preparing a production cost report?
3. What information results from preparing a production cost report for the mixing and blending department at Coca-Cola?
4. Based on the information provided, what is the minimum number of production cost reports that Coca-Cola prepares each reporting period? Explain.

Process Costing at Wrigley

Refer to Note 4.9 "Business in Action 4.2".

Required:
1. What type of costing system does Wrigley use? Explain.
2. What is the purpose of preparing a production cost report?
3. What information results from preparing a production cost report for Wrigley's Packaging department?
4. Based on the information provided, what is the minimum number of production cost reports that Wrigley prepares each reporting period? Explain.

Group Activity: Job or Process Costing?

Form groups of two to four students. Each group should determine whether a process costing or job costing system is most likely used to calculate product costs for each item listed in the following and should be prepared to explain its answers.

- Jetliners produced by Boeing
- Gasoline produced by Shell Oil Company
- Audit of Intel by Ernst & Young
- Oreo cookies produced by Nabisco Brands, Inc.
- Frosted Mini-Wheats produced by Kellogg Co.
- Construction of a suspension bridge in Puget Sound, Washington, by Bechtel Group, Inc.
- Aluminum foil produced by Alcoa, Inc.
- Potato chips produced by Frito-Lay, Inc.

Ethics: Manipulating Percentage of Completion Estimates

Computer Tech Corporation produces computer keyboards, and its fiscal year ends on December 31. The weighted average method is used for the company's process costing system. As the controller of Computer Tech, you present December's production cost report for the Assembly department to the president of the company. The Assembly department is the last processing department before goods are transferred to finished goods inventory. All 160,000 units completed and transferred out during the month were sold by December 31.

The board of directors at Computer Tech established a compensation incentive plan that includes a substantial bonus for the president of the company if annual net income before taxes exceeds $2,000,000. Preliminary figures show current year net income before taxes totaling $1,970,000, which falls short of the target by $30,000. The president approaches you and asks you to increase the percentage of completion for the 40,000 units in ending WIP inventory to 90 percent for direct materials and to 95 percent for direct labor and overhead. Even though you are confident in the percentages used to prepare the production cost report, which appears as follows, the president insists that his change is minor and will have little impact on how investors and creditors view the company.

Required:
1. Why is the president asking you to increase the percentage of completion estimates?
2. Prepare another production cost report for Computer Tech that includes the president's revisions.
3. Indicate what impact the president's request will have on cost of goods sold and on net income (ignore income taxes in your calculations).
Ethics: Increasing Production to Boost Profits

Pacific Siding, Inc., produces synthetic wood siding used in the construction of residential and commercial buildings. Pacific Siding's fiscal year ends on March 31, and the weighted average method is used for the company's process costing system. Financial results for the first 11 months of the current fiscal year (through February 28) are well below the expectations of management, owners, and creditors. Halfway through the month of March, the chief executive officer and chief financial officer asked the controller to estimate the production results for the month of March in the form of a production cost report (the company has only one production department). This report is shown as follows.

Armed with the preliminary production cost report for March, and knowing that the company's production is well below capacity, the CEO and CFO decide to produce as many units as possible for the last half of March, even though sales are not expected to increase any time soon. The production manager is told to push his employees to get as far as possible with production, thereby increasing the percentage of completion for ending WIP inventory. However, because the production process takes three weeks to complete, all the units produced in the last half of March will be in WIP inventory at the end of March.

Required:
1. Explain how the CEO and CFO expect to increase profit (net income) for the year by boosting production at the end of March. (A sketch of the underlying mechanism follows the requirements.)
2. Using the following assumptions, prepare a revised estimate of production results in the form of a production cost report for the month of March.

Assumptions based on the CEO and CFO's request to increase production:
- Units started and partially completed during the period will increase to 225,000 (from the original estimate of 70,000). This is the projected ending WIP inventory at March 31.
- Percentage of completion estimates for units in ending WIP inventory will increase to 80 percent for direct materials, 85 percent for direct labor, and 90 percent for overhead.
- Costs incurred during the period will increase to $95,000 for direct materials, $102,000 for direct labor, and $150,000 for overhead (most overhead costs are fixed).
- All units completed and transferred out during March are sold by March 31.

3. Compare your new production cost report with the one prepared by the controller. How much do you expect profit to increase as a result of increasing production during the last half of March? (Ignore income taxes in your calculations.)
4. Is the request made by the CEO and CFO ethical? Explain your answer.
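Both of the preceding ethics cases exploit the same absorption costing mechanic: cost assigned to ending WIP inventory in step 4 stays on the balance sheet rather than flowing into cost of goods sold, so inflating ending WIP raises reported income without any change in real economic activity. The sketch below illustrates the effect with invented round numbers; it is not a solution to either case.

```python
# Sketch: how shifting cost into ending WIP inventory boosts reported income.
# All figures are hypothetical.

def net_income(total_costs: int, ending_wip_cost: int, revenue: int) -> int:
    """Income when everything produced and not left in WIP is sold."""
    cost_of_goods_sold = total_costs - ending_wip_cost
    return revenue - cost_of_goods_sold

revenue = 1_000_000
total_costs = 800_000

honest = net_income(total_costs, ending_wip_cost=100_000, revenue=revenue)
inflated = net_income(total_costs, ending_wip_cost=140_000, revenue=revenue)

print(f"Honest WIP estimate: ${honest:,}")    # $300,000
print(f"Inflated WIP:        ${inflated:,}")  # $340,000 -- same cash, higher income
```

Raising the completion percentages (or producing units that will only sit in WIP) moves $40,000 of cost from the income statement to the balance sheet without changing anything real, which is exactly why both requests deserve ethical scrutiny.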
ISS: 3D Print
ISS Utilization: 3D Print (3D Printing in Zero-G Technology Demonstration)
In preparation for a future where parts and tools can be printed on demand in space, NASA and Made In Space Inc. of Mountain View, CA, have joined forces to launch equipment for the first 3D microgravity printing experiment to the International Space Station. 1) 2) 3) 4) 5) 6)
The first mission, called the 3D Printing in Zero-G Experiment (3D Print), will study the long-term effects of microgravity on 3D printing. Made In Space will incorporate the lessons learned from 3D Print into the design of the AMF (Additive Manufacturing Facility), which will serve as a permanent manufacturing facility on the ISS and is slated to begin operations by 2015. Made In Space will open up the use of the AMF to the space community in order to enable organizations and individuals to print parts and do science on the ISS without the need to launch payloads. NASA, ESA, CSA, JAXA, Roscosmos, and all participating space agencies will have access to the AMF, as will commercial space sectors, research institutions, and individuals.
In April 2014, the 3D printer passed flight certification and acceptance testing at NASA/MSFC (Marshall Space Flight Center) in Huntsville, AL. The technology demonstration will print objects in the MSG (Microgravity Science Glovebox). - If the printer is successful, it will not only serve as the first demonstration of additive manufacturing in microgravity, but it will also bring NASA and Made In Space a big step closer to evolving in-space manufacturing for future missions to destinations such as an asteroid and Mars. 7)
Figure 1: Photo of the flight hardware installed in the engineering ground unit at NASA/MSFC (image credit: Made In Space, NASA, Ref. 9)
Background: Extrusion-based 3D printing is an AM (Additive Manufacturing) process in which objects are built up in a great many very thin layers of deposited material. Led by NASA's STMD (Space Technology Mission Directorate), the Agency has launched a couple of formal programs to prototype new design concepts and tools for current and future missions. In addition to working with the U.S. Air Force, NASA has joined America Makes, formerly known as NAMII (National Additive Manufacturing Innovation Institute), a public-private partnership created to transition the technology into the mainstream U.S. manufacturing sector. 8)
As a result of these efforts and others sponsored by the individual centers, teams of engineers and scientists are investigating how their instruments and missions might benefit from an industry that actually began more than two decades ago, with the introduction of the world's first 3D printing system.
In the early stages of development, the 3D Print hardware had to overcome some challenges to ensure that microgravity prints would not deviate from terrestrial prints. These challenges include: 9)
• Terrestrial printers are built with gravity assumed. In a zero-gravity environment certain components may move or 'float', and even minuscule movement can ruin the printing process.
• Specific adjustments were made to ensure that surface tension is the only dominant force in the printing process. Without these adjustments the extruded material would not layer correctly.
• The contained environment of the ISS imposes significant safety standards on any equipment brought to space. The Made In Space printer has passed all safety requirements.
• In microgravity, fluids do not behave as they do on Earth: there is no natural convection, only forced convection and diffusive effects, so thermal management of the process becomes essential.
• A 3D printer designed for space must also have a robust and stable design in order to survive the extreme loads and vibrations experienced during launch.
Having addressed these and other challenges in hardware design, the 3D Print hardware successfully completed flight certification at MSFC in Huntsville, Alabama, in May 2014. Onboard the ISS, 3D Print will yield a series of parts, tools, and test coupons that will constitute the first items manufactured off-world. These parts will help satisfy a large portion of the mission goals. The overall goals of the 3D Print program are:
• Perform extrusion-based 3D printing onboard the ISS
• Demonstrate nominal operations, including traversing and printing
• Mitigate risks for future manufacturing facilities
• Provide an on-demand manufacturing capability
• Characterize the extruded material and compare it to Earth samples
• Perform STEM activities.
The 3D Print is an extrusion-based device that, once aboard the station, will be tested in a long-duration microgravity environment for the first time. The parts produced will be analyzed on Earth, and the data they provide will determine what, if any, changes should be made to enhance the process and optimize it further for microgravity functionality.
3D Print Mission: The 3D Print mission will serve as the first in-space manufacturing platform and will be a proof of concept as well as produce viable components for both functional and testing purposes. The ISS is a unique laboratory that is being utilized to test this manufacturing capability in long-duration microgravity.
Figure 2: Photo of the 3D Print device (image credit: Made In Space)
Launch: The 3D Print device was part of the cargo on the Dragon spacecraft of the SpaceX CRS-4 flight to the ISS, launched on September 21, 2014 (05:52:03 UTC) on a Falcon-9 v1.1 vehicle. The launch site was SLC-40 at Cape Canaveral, FL.
Orbit: Near-circular orbit of the ISS, altitude range of 375-435 km, inclination = 51.6º.
SpaceX CRS-4 is a cargo resupply mission to the ISS (International Space Station), contracted to NASA. It is the sixth flight of SpaceX's uncrewed Dragon cargo spacecraft and the fourth SpaceX operational mission contracted to NASA under a CRS (Commercial Resupply Services) contract. Dragon delivers ~2270 kg of supplies and payloads, including critical materials to support 255 science and research investigations that will occur during Expeditions 41 and 42. Dragon carries three powered cargo payloads in its pressurized section and two in its unpressurized trunk. - Dragon will return with about 1725 kg of cargo, which includes crew supplies, hardware and computer resources, science experiments, space station hardware, and four powered payloads (recovery in the Pacific Ocean ~700 km off the coast of California). 10) 11)
• RapidScat is the primary payload on this CRS-4 flight of SpaceX.
• 3D Print device of Made In Space Inc. of Mountain View, CA. (Note: the 3D Print device is described in a separate file on the eoPortal)
• New permanent life science research facility. The Bone Densitometer (BD) payload, developed by Techshot, will provide a bone density scanning capability on the ISS for utilization by NASA and CASIS (Center for the Advancement of Science in Space). The system measures bone mineral density (and lean and fat tissue) in mice using DEXA (Dual-Energy X-ray Absorptiometry).
For the first time, Dragon will carry live mammals: 20 rodents will ride up in NASA's Rodent Research Facility, developed by scientists and engineers at NASA's Ames Research Center. The rodent research system enables researchers to study the long-term effects of microgravity, or weightlessness, on mammalian physiology.
• Arkyd-3 is a 3U CubeSat technology demonstrator (4 kg) from the private company Planetary Resources Inc. of Bellevue, WA, USA (formerly known as Arkyd Astronautics). The objective is to test the technology used on the future larger Arkyd-100 space telescope. The company has contracted with NanoRacks to take the Arkyd-3 nanosatellite to the ISS, where it will be released from the airlock in the Kibo module. 12)
Figure 3: Illustration of the deployed Arkyd-3 nanosatellite (image credit: Planetary Resources)
• SpinSat (Special Purpose Inexpensive Satellite), a microsatellite (57 kg) of NRL (Naval Research Laboratory), Washington D.C. SpinSat is documented in a separate file on the eoPortal.
• SSIKLOPS (Space Station Integrated Kinetic Launcher for Orbital Payload Systems). This launcher will provide yet another means to release small satellites from the ISS. The system, also known as Cyclops, is described in the SpinSat file on the eoPortal. SSIKLOPS will be used to deploy SpinSat from the ISS. 13)
Status of the 3D Print device and its manufactured products
• August 10, 2015: In November of 2014, Made In Space made history when astronauts aboard the International Space Station (ISS) unpacked the world's first piece of manufacturing equipment designed to operate in space. The astronauts successfully 3D printed the first object ever made in outer space, a NASA-designed buckle that is part of exercise equipment intended to prevent muscle loss in zero-gravity environments. It was the first of twenty-four different objects 3D printed aboard the space station, proving that the Made In Space Zero-G Printer worked as intended and could be used to create tools or replace many broken or damaged parts on the ISS without the need to wait for shipments from Earth. 14)
- This was only the first step, however. While the ability to 3D print objects, parts, and tools directly on the ISS was an important development, Made In Space's ultimate goal is to create manufacturing technologies for extraterrestrial applications. The flexibility of in-space manufacturing is vitally important for long-term space missions. Not only will it allow the manufacture of components and tools that are too fragile to survive the massive g-forces involved in being launched into space, but it will also be a requirement for any extended space mission, such as the eventual colonization of Mars. Today the Mountain View, CA company announced that it has successfully taken another step in that direction.
- Since successfully achieving zero-gravity 3D printing last year, the company has been working on technology that would allow astronauts to manufacture objects outside of the ISS, in the vacuum of space. Just over a month ago, Made In Space completed a round of successful tests indicating that its newest generation of 3D printers will be fully functional in open space. The new technology will be the first machine capable of in-vacuum additive manufacturing.
- Last year's successful printing test was a precursor to Made In Space's commercial AMF (Additive Manufacturing Facility), which will be sent to the International Space Station later this year.
All twenty-four of the original parts have been sent back to Earth for laboratory analysis and comparison with identical objects 3D printed on Earth. These tests and the first round of printed parts are only the first phase of NASA's 3D Printing in Zero-G Technology Demonstration mission.
- For the second phase of the mission, Made In Space tested a modified version of its AMF that includes proprietary vacuum-compatible extrusion heads. These test parts were produced using aerospace-grade thermopolymers to assess how well the deposition process functions within a vacuum. After a full week of testing inside a vacuum chamber, the preliminary results of these tests have been successful. While the vacuum 3D printing process worked exactly as expected, Made In Space will be testing the finished parts to detect any mechanical properties that differ from those of parts produced in Earth's atmosphere.
Figure 4: The Made In Space Zero-G Printer (image credit: Made In Space)
• April 7, 2015: Engineers at NASA/MSFC (Marshall Space Flight Center) in Huntsville, Alabama, unboxed some special cargo from the International Space Station on April 6: the first items manufactured in space with a 3D printer. The items were manufactured as part of the 3D Printing in Zero-G Technology Demonstration on the space station to show that additive manufacturing can make a variety of parts and tools in space. These early in-space 3D printing demonstrations are the first steps toward realizing an additive manufacturing, print-on-demand "machine shop" for long-duration missions and sustaining human exploration of other planets, where there is extremely limited ability and availability of Earth-based resupply and logistics support. In-space manufacturing technologies like 3D printing will help NASA explore Mars, asteroids, and other locations. 15)
- The 3D printer on the ISS used 14 different designs and built a total of 21 items and some calibration coupons. The parts returned to Earth in February 2015 on the SpaceX Dragon. They were then delivered to Marshall, where the testing to compare the ground controls to the flight parts will be conducted. Before the printer was launched to the space station, it made an identical set of parts. Now, materials engineers will put both the space samples and ground control samples literally under a microscope and through a series of tests. Project engineers will perform durability, strength, and structural tests on both sets of printed items and even put them under an electron microscope to scan for differences in the objects.
Figure 5: NASA's 3D printer on the ISS built a wrench. Now, the wrench and other parts have been returned to NASA/MSFC in Huntsville, Alabama. To protect the parts, they will remain sealed in bags until testing begins (image credit: NASA)
• Dec. 17, 2014: The Zero-G 3D Printer, currently aboard the space station, is a technology demonstration to observe how a long-duration microgravity environment affects the additive manufacturing process. Until today, all of the objects printed in space had previously been printed on the exact same printer before it was ever launched, to increase the chances of success. Additionally, until today, backup files of all of the models printed have been available on an SD card that was launched with the Zero-G Printer. Today, for the first time, Made In Space uplinked a design that did not exist when the printer was launched. In fact, the ratchet was designed, qualified, tested, and printed in space in less than a week. 16)
- The ratchet was designed as one print with movable parts and without any support material. The parts and mechanisms of the ratchet had to be enclosed to prevent pieces from floating away in the microgravity environment. Once the design was finalized, the ground station print of the ratchet was sent to NASA authorities for safety qualification. After qualification, the file for the ratchet was emailed to the ISS laptop connected to the Zero-G Printer. Once the design of the ratchet was uplinked to the space station, Made In Space engineers performed a checksum to verify that the file had been uploaded correctly before ultimately sending the command to initiate the print.
Figure 6: ISS Commander Butch Wilmore holds up the ratchet after removing it from the print tray (image credit: NASA)
- The ratchet took 4 hours to print, and the space station even flew over California for the first time while the Zero-G Printer was operating. The ratchet will be returned to Earth along with all of the other parts printed so far in order to perform detailed observations of the differences between made-in-space parts and the corresponding parts that were printed on Earth.
• Nov. 24, 2014: History was made today when the first 3D printer built to operate in space successfully manufactured its first part on the ISS. This is the first time that hardware has been additively manufactured in space, as opposed to being launched from Earth. 17)
- The first part made in space is a functional part of the printer itself: a faceplate for its own extruder printhead. "This 'First Print' serves to demonstrate the potential of the technology to produce replacement parts on demand if a critical component fails in space," said Jason Dunn, Chief Technical Officer for Made In Space.
- For the entirety of the space program, tools and parts have been built on Earth and have required a rocket to get to space. The presence of a 3D printer onboard the ISS will allow hardware designs to be made on Earth and then digitally beamed to the space station, where the physical object will be created in a matter of hours. "For the first time, it's no longer true that rockets are the only way to send hardware to space," said Mike Chen, Chief Strategy Officer for Made In Space.
Figure 7: ISS Commander Barry "Butch" Wilmore holds up the first 3D-printed part made in space (image credit: Made In Space, NASA)
- The initial phase of this science experiment will see a selection of test coupons, parts, and tools printed in order to validate design, methodology, and technology assumptions. Made In Space will print the same objects on its identical ground unit in order to provide a group of control prints. The ISS prints will be returned to Earth on a future return flight so that the control prints and microgravity prints can be compared.
- Once the prints are returned to Earth, testing will provide data on a wide variety of factors, including tensile strength, torque, and flexibility. This information will allow the Made In Space team to make crucial adjustments to a second 3D printer, scheduled for delivery to the ISS in early 2015. This second printer will be an invaluable tool for astronauts and the government. It will also be available to commercial businesses and individuals on Earth to create on-demand hardware such as small satellites.
• November 17, 2014: The 3D printer is installed in the Microgravity Science Glovebox on the International Space Station.
Station commander Barry "Butch" Wilmore of NASA installed the space 3D printer inside the orbiting lab's Microgravity Science Glovebox on Nov. 17 (Figure 9). The machine and its software are in good operating condition, and the first test items will likely be printed soon, NASA officials said. 18)
- After installation, Wilmore started the printer, which extruded plastic to form the first of a series of calibration coupons, a small plastic sample about the size of a postage stamp.
- After calibration of the printer is complete and verified, the printer will make the first NASA-designed 3D-printed object in space.
Figure 8: The MSG (Microgravity Science Glovebox) resides in the Columbus module on the space station (image credit: Made In Space, NASA)
Figure 9: NASA astronaut Butch Wilmore installs the 3D Printer in the Microgravity Science Glovebox on the International Space Station (image credit: NASA TV)
• On Sept. 23, 2014, the Dragon cargo capsule arrived at the ISS and was berthed to the Space Station's Harmony module. 19)
Setup and operations onboard the ISS: The 3D Print hardware will be installed in the MSG (Figures 8 and 9) after being delivered by the SpaceX CRS-4 mission. The MSG supplies the hardware with all the required safety and power interfaces as well as a video feed. The printer will be controlled by the attached electronics box, which communicates with the MSG laptop computer. All operations of the 3D Print mission can be executed from the ground, with the exception of part removal from the printer and maintenance procedures. The POC (Payload Operations Center) at NASA/MSFC is responsible for operations; however, Made In Space will be crucial for executing commands, operating the hardware, and verifying that all pre-planned procedures are carried out nominally.
Once set up aboard the ISS, the payload will be used to print a number of components as part of a proof-of-concept test of the properties of melt deposition modeling additive manufacturing in the microgravity environment. The system uses ABS (Acrylonitrile Butadiene Styrene) thermoplastic resin to produce 3D multi-layer objects. The generated objects are compliant with ASTM (American Society for Testing and Materials) standards, which will allow an objective assessment of a number of parameters, including tensile, flexure, compressional, and torque strength as well as layer thickness, layer adhesion, relative strength, and relative flexibility. Multiple copies of parts will be printed to gain insight into strength variance and the implications of feedstock age. In parallel, an identical 3D printer will be used on Earth to produce the same components. These duplicated components will be used to analyze differences in properties between Earth- and space-manufactured parts, which can be useful in refining 3D printing techniques for terrestrial applications.
Current cases for using additive manufacturing on-orbit include (Ref. 8):
• Predicted replacements and repairs: using additive manufacturing would limit the number of parts that must be kept in storage. A recent internal NASA study analyzed the failures that ISS hardware has had in the past. Of those failures, 82% were determined to be candidates for additive manufacturing of the component or system (Figure 10). Of that 82%, 28.6% consisted of plastics and composites, which can be produced by extrusion-based additive manufacturing.
Figure 10: Constituents of failed ISS hardware (image credit: NASA, Made In Space)
• Unknown replacements and repairs: having a manufacturing capability is the only way to achieve a rapid and optimal fix in this case. Everything going to space currently has to be launched; if an unpredicted circumstance presents itself, a solution currently has to be improvised or must wait to be flown up on the next resupply vehicle.
• Payload advantages: with a manufacturing facility available, payloads are able, for the first time, to undergo rapid iterations or additions to hardware. Another interesting possibility is for payload developers to design parts of their experiments that can only be produced in the microgravity environment.
The 3D Print will demonstrate the capability of having a machine-shop-type facility in space for the first time. This serves as an important first step toward the ultimate goal of manufacturing mission-critical items in order to make human space exploration more viable and affordable. The logistics involved with current spaceflight are unable to sustain any attempt at long-duration manned spaceflight. Made In Space is currently developing the second iteration of ISS 3D printers, known as the AMF (Additive Manufacturing Facility), which will leverage the development and testing of the 3D Print program. The AMF is a commercial manufacturing facility for the ISS, enabling groups from around the world to get hardware to space without needing to launch it. Circumventing the launch process will ultimately reduce the time and cost barriers to space access, which will allow for a faster pace of space development. By mid-2015 the AMF will be fully operational, and Made In Space is currently soliciting interested parties to use the system once it is in place.
1) David E. Seitz, Janet Anderson, Grant Lowery, "Another American High Frontier First: 3-D Manufacturing in Space," NASA News Release: 13-161, May 31, 2013, URL: http://www.nasa.gov/home/hqnews/2013/may/HQ_13-161_Made_in_Space.html
2) Jason J. Dunn, Michael Snyder, Matthew Napoli, Aaron Kemmer, Michael Chen, "3D Printing on ISS: Reducing Earth Dependency and Opening New Space Based Markets," Proceedings of the 64th International Astronautical Congress (IAC 2013), Beijing, China, Sept. 23-27, 2013, paper: IAC-13-D3.3.1
4) Elizabeth Howell, "3-D Printer Passes Key Step On Road to Space Station," Universe Today, June 19, 2013, URL: http://www.universetoday.com/103030/3-d-printer-passes-key-step-on-road-to-space-station/
5) "3D printer cleared for lift-off to ISS in August," Space Daily, June 16, 2014, URL: http://www.spacedaily.com/reports/3D_printer_cleared_for_lift_off_to_ISS_in-August_999.html
6) Jessica Eagan, "3-D Printer Could Turn Space Station into 'Machine Shop'," NASA, Sept. 2, 2014, URL: http://www.nasa.gov/mission_pages/station/research/news/3D_in_space/index.html
7) "Space Tools On Demand: 3D Printing in Zero G," NASA Facts, April 2014, URL: https://gcd.larc.nasa.gov/wp-content/uploads/2014/05/FS_3DPrinting_Factsheet_140502.pdf
8) Peter Hughes, "NASA Jumps Aboard the 3D-Manufacturing Train," Technologies Office of the Chief Technologist, Jan. 28, 2013, URL: http://gsfctechnology.gsfc.nasa.gov/3DManufacturing.html
9) Michael Snyder, Jason Dunn, "3D Printing On The International Space Station: A Technology," Proceedings of the 65th International Astronautical Congress (IAC 2014), Toronto, Canada, Sept. 29-Oct. 3, 2014, paper: IAC-14-A2.7.13
10) "SpaceX CRS-4 Mission," NASA Press Kit, Sept.
2014, URL: http://www.nasa.gov/sites/default/files/files/SpaceX_NASA_CRS-4_PressKit.pdf

11) Patrick Blau, "Dragon SpX-4 Cargo Overview," Spaceflight 101, URL: http://www.spaceflight101.com/dragon-spx-4-cargo-overview.html

12) Benjamin Romano, "Planetary Resources Inks 3D Systems Deal, Plans Test Launch From ISS," June 26, 2013, URL: http://www.xconomy.com/seattle/2013/06/26/planetary-resources-inks-3d-systems-deal-plans-test-launch-from-iss/

13) Daniel R. Newswander, James P. Smith, Craig R. Lamb, Perry G. Ballard, "Space Station Integrated Kinetic Launcher for Orbital Payload Systems (SSIKLOPS) – Cyclops," Proceedings of the 27th AIAA/USU Conference on Small Satellites, Logan, Utah, USA, Aug. 10-15, 2013, paper: SSC13-V-2, URL: http://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=2941&context=smallsat

14) Scott J. Grunewald, "Made In Space Breakthrough Allows 3D Printing in the Vacuum of Space, Away from ISS," Aug. 10, 2015, URL: http://3dprint.com/88301/3d-printing-in-vacuum-space/

15) Tracy McMahan, "Special 3-D Delivery From Space to NASA's Marshall Space Flight Center," NASA/MSFC, April 7, 2015, URL: https://www.nasa.gov/centers/marshall/news/news/release/2015/special-3-d-delivery-from-space-to-nasa-s-marshall-space-flight-center.html

16) "The First Uplink Tool Made In Space Is.....," Made In Space, Dec. 17, 2014, URL: http://www.madeinspace.us/the-first-uplink-tool-made-in-space-is

17) "NASA and Made In Space Inc. Make History by Successfully 3D Printing in Space," Made In Space, Nov. 25, 2014, URL: http://www.madeinspace.us/nasa-and-made-in-space-inc-make-history-by-successfully-3d-printing-first-object-in-space

18) "3-D Printer Powered Up on the International Space Station," NASA, Nov. 17, 2014, URL: http://www.nasa.gov/content/3-d-printer-powered-up-on-the-international-space-station/#.VHgPZ8mSz_U

19) "Fourth Dragon for Commercial Resupply Services Arrives at Station," NASA News, Sept. 23, 2014, URL: http://www.nasa.gov/content/fourth-dragon-for-commercial-resupply-services-arrives-at-station/index.html

The information compiled and edited in this article was provided by Herbert J. Kramer from his documentation of "Observation of the Earth and Its Environment: Survey of Missions and Sensors" (Springer Verlag), as well as many other sources after the publication of the 4th edition in 2002.

- Comments and corrections to this article are always welcome for further updates (firstname.lastname@example.org).
Kingston, New York

Kingston is a city in, and the county seat of, Ulster County, New York, United States. It is 91 miles (146 km) north of New York City and 59 miles (95 km) south of Albany. The city's metropolitan area is grouped with the New York metropolitan area around Manhattan by the United States Census Bureau. Kingston became New York's first capital in 1777. During the American Revolutionary War, the city was burned by the British on October 13, 1777, after the Battles of Saratoga.

- Coordinates: 41°55′30″N 74°0′00″W
- Mayor: Steve Noble (D)
- Government: mayor and Common Council
- Area (city): 8.77 sq mi (22.71 km2); land 7.48 sq mi (19.38 km2); water 1.29 sq mi (3.33 km2)
- Elevation: 476 ft (145 m)
- Population density: 3,102.11/sq mi (1,197.68/km2)
- Time zone: UTC−5 (Eastern, EST); summer (DST) UTC−4 (EDT)
- GNIS feature ID: 0979119
- Website: City of Kingston, New York

In the 19th century, the city became an important transport hub after the discovery of natural cement in the region, with connections to other markets by both railroad and canal. Many of the older buildings are considered contributing properties in three historic districts: the Stockade District uptown, the Midtown Neighborhood Broadway Corridor, and the Rondout-West Strand Historic District downtown. Each district is listed on the National Register of Historic Places.

Successive human cultures are believed to have lived on the land that is today Kingston for at least 13,000 years. Before colonization, the two main historic tribes on the land were the Esopus (a band of the larger Lenape tribe, also known as the Delaware) and the Mohican. The Lenape had three major dialect groups and occupied territory along the mid-Atlantic coast extending from Connecticut through the Lower Hudson River Valley (including the southern parts of Kingston), parts of Pennsylvania, New Jersey on both sides of the Delaware River, and eastern Delaware. The exact population of the Lenape is unknown but is estimated to have been around 10,000 people in 1600. In the Kingston area, the Lenape spoke a dialect called Munsee.

The Esopus coexisted peacefully with their northern Mohican neighbors, who occupied the land of northern Kingston and parts of present-day Vermont, Massachusetts, and Connecticut. The tribes traded, protected one another from attack, and recognized common interests in protecting their land, but they never formed a broader, more formal alliance, as did some other nations in the area, such as the Iroquois League in New York and the Pequot in New England.

Both tribes lived in small communities of relatives, in groups of 10-100 people, moving seasonally between camps to follow game, fish, and produce. The women developed diverse and sophisticated agricultural practices, cultivating several types of squash, maize (corn), and beans, and gathering hickory and other nuts, berries, and roots. The men hunted large and small game, including elk, deer, rabbits, turkey, raccoons, waterfowl, and bears, and caught fish in the rivers.

As early as 1614, the Dutch had set up a factorij (trading post) at Ponckhockie, at the junction of the Rondout Creek and the Hudson River. They traded European goods with the Lenape and Mohican for the furs their trappers collected.
The first recorded permanent settler in what would become the city of Kingston was Thomas Chambers, who came from the area of Rensselaerswyck in 1653. The new settlement was called Esopus after the local Lenape people. In 1654, European settlers began buying land in what is now Kingston from the Esopus Indians, although historians believe the two cultures had drastically different conceptions of property and land use. Tension between the two cultures rose quickly as a result. Common sources of friction between Dutch settlers and the Esopus included settlers' livestock trampling Indian cornfields, disputes over trade, and the adverse effects of Dutch brandy on the Native Americans, who had no experience with liquor prior to the Europeans' arrival.

In the spring of 1658, Peter Stuyvesant, Director-General of New Amsterdam, ordered the consolidation and fortification of the settlement on high ground in what today is Uptown Kingston. The building of the defensive stockade increased the conflicts, and tensions erupted into the Esopus Wars. In 1661 the settlement was granted a charter as a separate municipality; Stuyvesant named it Wiltwijck (Wiltwyck). It was not until 1663 that the Dutch ended the four-year conflict with the Esopus, defeating them with a coalition of Dutch settlers, Wappinger, and Mohawk peoples. Wiltwyck was one of three large Hudson River settlements in New Netherland, the other two being Beverwyck, now Albany, and New Amsterdam, now New York City.

With the English seizure of New Netherland in 1664, relations between the Dutch settlers and the English soldiers garrisoned there were often strained. In 1669, the English renamed Wiltwyck as Kingston, in honor of the family seat of Governor Lovelace's mother. In 1683, citizens of Kingston petitioned the Kingston court to buy more land from the Esopus people. Officials from Ulster County maintained contact with the Esopus until 1727. Many descendants of the Esopus people who inhabited the area became remnant members of several other related, displaced tribes. Some in the diaspora are among the federally recognized Stockbridge-Munsee Community, who moved from New York to Shawano County, Wisconsin; the Munsee-Delaware of the Six Nations Reserve in Ontario, Canada, established after the Revolution by the Crown for its Iroquois and other Indian allies; and the Ramapough Lenape Indian Nation (located primarily in the highlands of the New York-New Jersey border area).

In 1777, Kingston was designated the first capital of the state of New York. During the summer of 1777, when the New York State constitution was being written, New York City was occupied by British troops, and Albany (then the second-largest settlement in New York and capital of the newly independent state) was under threat of attack by the British. The seat of government was therefore moved to Kingston, which was deemed safer. The British never reached Albany, having been stopped at Saratoga, but they did reach Kingston: on October 13, 1777, the city was burned by British troops moving up river from New York City, who disembarked at the mouth of the Rondout Creek at Ponckhockie. The residents of Kingston knew about the oncoming fleet, and by the time the British arrived, the residents and government officials had fled to Hurley, New York. The Kingston area was largely agricultural and a major granary for the colonies at the time, so the British burned large amounts of wheat and all but one or two of the buildings.
Kingston celebrates and re-enacts the 1777 burning of the city by the British every other year (2019 is the year of the next "burning"), in a citywide theatrical staging of the event that begins at the Rondout.

Kingston was incorporated as a village on April 6, 1805. In the early 1800s, four sloops plied the river, carrying passengers and freight from Kingston to New York. By 1829, river steamers made the trip to Manhattan in a little over twelve hours, usually traveling by night. Columbus Point (now known as Kingston Point) was the river landing for Kingston, and stage lines ran from the village to the Point. The Dutch cultural influence in Kingston remained strong through the end of the nineteenth century.

Rondout was a small farming village until 1825, when construction of the Delaware and Hudson Canal from Rondout to Honesdale, Pennsylvania, attracted an influx of laborers. When the canal was completed in 1828, Rondout became an important tidewater coal terminal. Natural cement deposits were found throughout the valley, and in 1844 quarrying began in the Ponckhockie section of Rondout. The Newark Lime and Cement Company shipped cement throughout the United States, a thriving business until the invention of cheaper, quicker-drying Portland cement.

Workers cut ice from the Hudson River each winter and stored it in large icehouses near the river. Packed in straw, the ice kept all year; chunks were cut and delivered to customers around the city, serving as an early method of refrigeration. Large brickmaking factories were also built near this shipping hub.

Rondout's primacy as a shipping hub ended with the advent of railroads. The rail lines were built through Rondout and Kingston, with stations in each place, but they could also carry their loads through the city without stopping.

Wilbur (also known as Twaalfskill) was a hamlet upstream from Rondout, where the Twaalfskill met the Rondout Creek. There was a sloop landing there, and the hamlet became the center for the shipment of bluestone to lay the sidewalks of New York City.

Kingston is home to many historic churches. The oldest is the First Reformed Protestant Dutch Church of Kingston, organized in 1659. Referred to as the Old Dutch Church, it is in uptown Kingston. Many of the city's historic churches are on Wurts Street (six are in one block). What is now called the Hudson Valley Wedding Chapel was built in 1867; it has been restored and is used for weddings. Trinity Evangelical Lutheran Church at 75 Spring Street was founded in 1842. The original church building at the corner of Hunter and Ravine streets burned to the ground in the late 1850s; the current church on Spring Street was built in 1874.

St. John's Episcopal Church

St. John's Episcopal Church was named for St. John the Evangelist. It was founded on June 24, 1832, the feast of St. John the Baptist; therefore both saints are considered patrons of the parish. Rev. Reuben Sherwood of Saugerties, New York, was the first rector. The first church was built on Wall Street and opened November 24, 1835. In 1926, the property was sold to the Up-to-Date clothing company; the church structure was dismantled and relocated to Albany Avenue. In 1992, St. John's established "Angel Food East," a ministry that delivers food prepared in the church kitchen to persons in need throughout Ulster County.

Church of the Holy Cross

In 1891, Lewis T. Wattson, rector of St. John's, established the Church of the Holy Cross as a mission of St.
John's, to serve working-class families living near the West Shore Railroad, many of whom worked for the railroad. Holy Cross had a more Anglo-Catholic tradition and a particular mission to the poor. Since the late 20th century, given increased Latinx immigration from Mexico and Central and South America, Holy Cross/Santa Cruz has become a bilingual (Spanish/English), multicultural Episcopal parish.

St. Joseph's RC

St. Joseph's Parish began in 1863 as a one-room Roman Catholic mission school to serve the children of the Wilbur area. It was founded by Father Felix Farrelly, pastor of St. Mary's Parish in Rondout; the building was later sold to the city of Kingston in 1871. In 1867, Rev. James Coyne was appointed pastor of St. Mary's in Rondout. The following year he established St. Joseph's in Kingston, purchasing the Young Men's Gymnasium on the corner of Fair and Bowery streets. The first Mass was said there on September 21, 1868, by Fr. James Dougherty, an alumnus of St. Mary's parochial school, who became the first pastor of St. Joseph's parish. Fr. Dougherty is buried in St. Mary's Cemetery.

As the chapel was deemed too small, the parish purchased the former Kingston Armory at the corner of Wall and Main streets for use as a church; many new Catholic immigrants had arrived in the mid-century from Ireland and southern and eastern Europe. The new church was dedicated on July 26, 1869. In 1877 Jockey Hill was made a mission of St. Joseph's, and in 1962 a mission was established in Hurley.

The frame building on the Bowery was adapted as a schoolhouse. This was replaced in 1905 with the acquisition of the former mansion of Judge Alton B. Parker at 1 Pearl Street for a new St. Joseph's School and Convent. The Fair Street school building continued to be used as the parish hall until the property was sold in 1911. Also in 1911, a site for a larger school and convent was secured, and 1 Pearl Street was sold. In 1943 the Sisters of St. Ursula replaced the Sisters of Charity at the school. In February 1962, construction began on the current St. Joseph School, which housed eight additional classrooms. The old St. Joseph School was renamed the Msgr. Stephen Connolly Building. A plaque donated by the Holy Name Society in honor of Father John Broidy, the pastor who oversaw construction of the building in 1912, is installed on the right front of the building.

Geography and culture

Kingston has three recognized neighborhoods: the Uptown Stockade Area, the Midtown Area, and the Downtown Waterfront Area. The Uptown Stockade District was the first capital of New York State. The Midtown area is known for its early-20th-century industries and is home to the Ulster Performing Arts Center and the historic City Hall building. The Downtown area, once the village of Rondout and now the Rondout-West Strand Historic District, borders the Rondout Creek and includes a recently redeveloped waterfront. The creek empties into the Hudson River through a large, protected tidal area which was the terminus of the Delaware and Hudson Canal, built to haul coal from Pennsylvania to New York City. The Rondout neighborhood is known for its artists' community and its many art galleries; in 2007 Business Week online named it among "America's best places for artists." It is also the site of a number of festivals, including the Kingston Jazz Festival and the Artists Soapbox Derby.
Midtown is the largest of Kingston's neighborhoods, home to Kingston High School and both campuses of HealthAlliance Hospital, part of the Westchester Medical Center Health Network: the HealthAlliance Broadway Campus (formerly The Kingston Hospital) and the HealthAlliance Mary's Avenue Campus (formerly Benedictine Hospital).

While the Uptown area is noted for its "antique" feeling, the overhangs attached to buildings along Wall and North Front streets were added to the historic buildings in the late 1970s and are not authentically part of the 19th-century Victorian architecture. The historic covered storefront walks, known as the Pike Plan, were recently reinforced and modernized with skylights. In the Stockade district of Uptown, many 17th-century stone buildings remain. Among these is the Senate House, which was built in the 1670s and was used as the state capitol during the Revolution. Many of these old buildings were burned by the British on October 13, 1777, and restored later. A controversial restoration of the 1970s-era canopies was marred by the sudden appearance of painted red goats on planters just prior to the neighborhood's rededication. This part of the city is also the location of the Ulster County Office Building.

According to the United States Census Bureau, the city has an area of 8.6 square miles (22.4 km2), of which 7.3 square miles (19.0 km2) is land and 1.3 square miles (3.4 km2), or 15.03%, is water. The city is on the west bank of the Hudson River. Neighboring towns include Hurley, Saugerties, Rhinebeck, and Red Hook. The city's Hasbrouck Park, created in 1920, covers 45 acres (18 ha) and includes a nature trail.

As of the 2010 census, the city had 23,887 people, 9,844 households, and 5,498 families. The population density was 3,189.5 persons per square mile (1,232.2/km2). There were 10,637 housing units at an average density of 1,446.4 per square mile (558.8/km2). The city's racial makeup was 73.2% White, 14.6% Black or African American, 0.5% Native American, 1.8% Asian, 1.9% from other races, and 5.0% from two or more races. Hispanics or Latinos of any race were 13.4% of the population.

As of the 2000 census, there were 9,871 households, of which 27.0% had children under the age of 18 living with them, 35.2% were married couples living together, 15.8% had a female householder with no husband present, and 44.3% were non-families. 36.8% of all households were made up of individuals, and 14.8% had someone living alone who was 65 years of age or older. The average household size was 2.28 and the average family size was 3.02.

In the city, the population was spread out, with 23.9% under the age of 18, 8.1% from 18 to 24, 28.9% from 25 to 44, 21.9% from 45 to 64, and 17.1% who were 65 years of age or older. The median age was 38 years. For every 100 females, there were 89.1 males. For every 100 females age 18 and over, there were 84.1 males.

The city's median household income was $31,594, and the median family income was $41,806. Males had a median income of $31,634 versus $25,364 for females. The city's per capita income was $18,662, with 12.4% of families and 15.8% of the population below the poverty line, including 23.5% of those under age 18 and 10.3% of those age 65 or over.

The Kingston Tigers are the city high school's sports teams. Kingston Stockade FC is a men's semi-professional soccer club that competes in the National Premier Soccer League (NPSL), in the fourth division of the US soccer pyramid.
Kingston Stockade FC play their home games at Dietz Stadium.

In 1921, one-time major league player Dutch Schirick organized a semi-professional baseball team, the Colonels, in Kingston, New York. Major league teams would, on occasion, play exhibition games against the Kingston Colonels and would sometimes recruit local talent; Bud Culloton, for example, became a pitcher for the Pittsburgh Pirates.

The government of Kingston consists of a mayor and a city council known as the Common Council. The Common Council has 10 members, nine of whom are elected from wards and one at large. The mayor is elected in a citywide vote every four years.

List of Mayors:

| Mayor | Term | Notes |
| James Girard Lindsley | 1872-1877 | Lindsley Street named for him |
| William Lounsbery | 1878-1879 | Lounsbery Place named for him |
| John E. Kraft | 1890-1891 | |
| Henry E. Wieber | 1896-1897 | |
| William D. Brinnier | 1898-1899 | |
| James E. Phinney | 1900-1901 | |
| A. Wesley Thompson | 1906-1907 | Resigned on May 21, 1907 |
| Walter P. Crane | 1907-1909 | Crane Street named for grandfather, Walter B. Crane |
| Palmer H. Canfield | 1914-1921 | Canfield Street named for grandfather, Palmer A. Canfield Sr. |
| Walter P. Crane | 1922-1923 | |
| Morris Block | 1924-1926 | Died in office, November 7, 1926 |
| Edgar J. Dempsey | 1926-1931 | |
| Eugene B. Carey | 1932-1933 | |
| Harry B. Walker | 1934 | Resigned January 11, 1934 |
| Conrad J. Heiselman | 1934-1941 | |
| William F. Edelmuth | 1942-1947 | |
| Oscar V. Newkirk | 1948-1953 | |
| Frederick H. Stang | 1954-1957 | |
| Edwin F. Radel | 1958-1961 | |
| John J. Schwenk | 1962-1965 | Schwenk Drive named for him |
| Raymond W. Garraghan | 1966-1969 | Garraghan Drive named for him |
| Francis R. Koenig | 1970-1979 | Koenig Boulevard named for him |
| Donald E. Quick | 1980-1983 | |
| Peter J. Mancuso | 1984-1985 | |
| Richard "Dick" White | 1986-1989 | |
| John P. Heitzman | 1990-1991 | |
| John A. Amarello | 1992-1993 | |
| Thomas R. "T.R." Gallo | 1994-2002 | Died in office, January 21, 2002 |
| James M. Sottile | 2002-2011 | |
| Shayne R. Gallo | 2012-2015 | Brother of T.R. Gallo |
| Steve T. Noble | 2016-present | |

- The Kingston City School District contains seven elementary schools, two middle schools, and one high school.
- Kingston High School is the city's public high school.
- Most students at John A. Coleman Catholic High School reside within the Kingston city school district.

The Kingston Center of SUNY Ulster (KCSU) is a branch of the county's community college that offers programs, courses, and certifications at a convenient Midtown location. KCSU is the new home of Police Basic Training and also offers human services, criminal justice, and the general education courses required by the State of New York to satisfy the liberal arts core of an A.A. or A.S. degree.

- Kingston-based: Daily Freeman, Kingston Times
- Outside Kingston: Art Times, Poughkeepsie Journal, Times Herald-Record (Middletown)
- See also: List of newspapers in New York in the 18th century: Kingston
- Television: Time Warner Cable Kingston Area public-access television, cable TV channel 23
- Magazines: Chronogram, Trends Journal
- Music festivals: O+ Festival
- Blogs: Kingston Creative and Kingston Happenings

Passenger railroad service to Kingston itself was discontinued in 1958, when the New York Central Railroad ended service on the West Shore Railroad. However, about 11 miles (20 km) away is the Rhinecliff-Kingston Amtrak station, and 17 miles (30 km) away is the Poughkeepsie Amtrak/Metro-North station. CSX Transportation operates freight rail service through Kingston on the River Line Subdivision.
There is also a small rail yard of about seven tracks in Kingston. The Kingston-Rhinecliff Bridge, carrying New York State Route 199, is the nearest bridge across the Hudson River, 4.32 miles (6.95 km) to the north. U.S. Highway 9W runs north-south through the city. The New York State Thruway, known in this section as Interstate 87, runs through the western part of the city.

The area is served by Kingston-Ulster Airport (20N), located at the western base of the Kingston-Rhinecliff Bridge. The nearest major airports are Stewart International Airport, 39 miles (62.8 km) south in Newburgh, and Albany International Airport, approximately 65 mi (105 km) north. The three major metropolitan airports for New York City are John F. Kennedy International, approximately 93 mi (150 km) south; Newark Liberty International, approximately 86 mi (138 km) south; and LaGuardia Airport, approximately 80 mi (129 km) south.

The city-owned CitiBus system (headquartered at 420 Broadway) provides city bus service, and Ulster County Area Transit (UCAT) provides service to points elsewhere in Ulster County. Route A travels between Kingston Plaza and the Riverfront, Route B between Albany Avenue and Fairview Avenue, and Route C between Golden Hill and Port Ewen. On the first Saturday of every month, an "art bus" is available for a fare of $1. The bus, usually a CitiBus tourist trolley, takes passengers on a guided tour of the art galleries of Kingston, which all hold openings on the first Saturday of the month.

Weekend water-taxi service between Kingston and Rhinecliff, New York, is available May through October for $10 round-trip. Some trips stop at the Rondout Light; a tour is available for an additional $5.

Kingston historically was an important transportation center for the region. The Hudson River, Rondout Creek, and Delaware and Hudson Canal were important commercial waterways, and at one time Kingston was served by four railroad companies and two trolley lines. Kingston has been designated a New York State Heritage Area with a transportation theme, and the Hudson River Maritime Museum and the Trolley Museum of New York are on the waterfront. The Catskill Mountain Railroad, a scenic railroad company, runs trains from Kingston on the former Ulster and Delaware right-of-way.

As of 2016, over a dozen separate ongoing projects were being coordinated among the Kingston Land Trust, the Kingston city government, and the Ulster County government, connecting all three of Kingston's neighborhoods with a combination of rail trails, bike lanes, and Complete Streets connections.

Residents of the city and surrounding areas are served by the two hospital campuses of HealthAlliance of the Hudson Valley, a 315-bed healthcare system:

- HealthAlliance Hospital: Broadway Campus (formerly Kingston Hospital)
- HealthAlliance Hospital: Mary's Avenue Campus (formerly Benedictine Hospital)

There are also multiple urgent care sites, private practice offices, and laboratories in the city and surrounding area.
In October 2012, three months before Israel’s last election, the Likud-led government headed by Prime Minister Benjamin Netanyahu took an unprecedented decision: It voted to allow a large group of individuals from northeastern India – not considered Jewish by law – to immigrate to Israel and undergo conversion upon arrival. Exactly a year later, the next government formed by Netanyahu voted to bring in an even larger group. All told, nearly 1,000 members of the Bnei Menashe (“Sons of Manasseh”) community have arrived in Israel over the past two years. They joined about 1,500 others already living in the country – many of them in West Bank settlements – who had arrived in trickles over the years. Another large group is expected to arrive this year, in accordance with the same government decision. Operation Menashe, as it has been called, has been overwhelmingly portrayed in the local media as an inspiring story of a “lost tribe’s” return. Yet an investigation by Haaretz reveals that this operation – spearheaded by an individual who views Israel’s Arab minority as a demographic threat and advocates using unconventional means to boost the country’s Jewish population – has been fraught with questionable government decisions, an ambiguous rabbinical ruling and potential conflict of interest. Moreover, the parties responsible for ensuring that these new immigrants are integrated smoothly into Israeli society seem to have dropped the ball, according to community insiders, creating a disenfranchised and disillusioned community. The organization leading this effort, Shavei Israel, was founded and is run by Michael Freund, an American immigrant who served as an aide to Netanyahu during his first term in office in the late 1990s. The rabbinical ruling on which it relied, reportedly issued in 2005 by former Sephardi Chief Rabbi Shlomo Amar, was said to have determined that the Bnei Menashe were “seed of Israel” – a term broadly used to describe individuals who would not be considered Jewish according to religious law (halakha), but who have proven Jewish ancestry and roots and can, therefore, immigrate to Israel. But a government document obtained by Haaretz offers a surprising revelation: No such ruling was explicitly made. This document, prepared by the Interior Ministry, states categorically that Rabbi Shlomo Amar ruled that the Bnei Menashe are not “seed of Israel,” according to the accepted halakhic definition of the term, and have no proven Jewish ancestry. Amar confirmed to Haaretz that this was indeed his ruling. Yet at the time, Amar’s 2005 ruling was hailed in the press as “historic,” with Freund describing it in one news report as “the breakthrough we have been waiting for.” However, the government, led at the time by Ariel Sharon, did not support the mass immigration of the Bnei Menashe. It was only after Netanyahu returned to the Prime Minister’s Office for his second term, in 2009, that Freund found a government receptive to his cause and willing to take action. 
Almost three years ago, Shavei Israel succeeded in convincing the government to resume immigration of the Bnei Menashe after a hiatus of five years – and on a much larger scale than ever before. The move required special government permission, because the Bnei Menashe don't qualify as Jews under the Law of Return and are, therefore, not eligible for automatic citizenship.

Both that decision and the decision to transfer responsibility for the welfare of the immigrants during their initial absorption period to a private organization – rather than to the Jewish Agency and the Immigrant Absorption Ministry, which ordinarily handle such matters – represent major deviations from long-standing government policy and practice. In fact, Shavei Israel was able to secure 7 million shekels (about $1.8 million) in government funding for this purpose without going through the usual government tender requirement.

Freund, who is also a commentator for The Jerusalem Post, has urged the government in his columns to take a more "creative" approach to immigration, as he terms it, to guarantee that Jews maintain their majority in the country. "The fact is that there are plenty of people out there in the big wide world who would like to move to Israel," he wrote in a September 2001 column. "The problem is that most of them are not Jewish. While many are no doubt motivated by economic reasons, there are countless others who are sincere in their desire to be Jews, and it is incumbent upon Israel to at least explore the possibilities that such populations present." Referring specifically to the tribes of northeastern India, he wrote in the same column, "For a country struggling to find potential new sources of immigration, groups such as the Bnei Menashe and others like them might very well provide the answer."

'Seed of Israel' or not?

The Bnei Menashe, whose connections to the ancient Israelites have long been challenged by social scientists, had been immigrating to Israel in trickles for close to 20 years before the government headed by Prime Minister Ehud Olmert (2006-2009) closed the country's gates to them. Two cabinet decisions taken in the past few years have allowed this immigration to resume. In October 2012, the cabinet approved a request from then-Interior Minister Eli Yishai to grant temporary residency visas to 274 members of the community so that they could undergo conversion in Israel. Exactly a year later, it approved a request submitted by his successor, Gideon Sa'ar, for another 899 visas to be allocated over a two-year period.
The background material provided to the cabinet ministers by the Interior Ministry ahead of the October 2013 vote states explicitly, "According to the ruling of Rabbi Amar, they [Bnei Menashe] do not comply with the halakhic definition of 'seed of Israel.' In other words, it is not possible to prove that the community is historically part of the People of Israel."

Narrowly defined, "seed of Israel" is a halakhic term that applies to anyone either born to a non-Jewish mother and a Jewish father, or having at least one Jewish grandparent. (A Jew, according to halakha, is anyone born to a Jewish mother.) The Chief Rabbinate of Israel allows for an expedited conversion process in the case of those defined as "seed of Israel." Several hundred thousand immigrants from the former Soviet Union were allowed into Israel under the Law of Return during the 1990s on that basis.

The term also has a broader definition that applies to anyone with demonstrated Jewish ancestry dating back several generations. It was this interpretation that provided the basis for the government decision to allow thousands of Falashmura – descendants of Ethiopian Jews forced to convert to Christianity in past centuries – to immigrate to Israel in recent years.

In response to a request from Haaretz for clarification, Rabbi Amar, who currently serves as the chief Sephardic rabbi of Jerusalem, confirmed in writing through an aide that while serving as Sephardic chief rabbi, he had indeed ruled that the Bnei Menashe were not "seed of Israel." At the same time, he said, their high level of observance of Jewish law and custom demonstrated their "strong affinity to the Jewish people." There was no doubt, therefore, according to Rabbi Amar, that "their forefathers were among the exiles of Israel, for as our sages say, their end attests to their beginning." The reason they could not be declared "seed of Israel," he wrote, was that their ancestors had lived in isolation for so many years.

Yet leaders of Shavei Israel, which had been aggressively lobbying the government and the rabbinical authorities to recognize the Bnei Menashe as part of the Jewish people, have tended to avoid this nuance, presenting Amar's ruling in far more definitive terms. In 2010, Freund told the Knesset Committee for Immigration, Absorption and Diaspora Affairs that his organization had approached Rabbi Amar in 2004 with a request that he rule on the status of the Bnei Menashe, and that a year later, the former chief rabbi ruled that the Bnei Menashe were indeed "seed of Israel."

Before that committee meeting, the Knesset Research and Information Center prepared a special background report on the Bnei Menashe. The report opens by noting that in 2005, Rabbi Amar recognized the Bnei Menashe as "seed of Israel." The source of this piece of information, according to a footnote in the report, is none other than Michael Freund.

In a column he published in The Jerusalem Post in June 2011, Freund reported on a briefing he delivered to the Ministerial Committee on Immigration and Absorption, headed at the time by Foreign Minister Avigdor Lieberman, on the status of the Bnei Menashe. "And rest assured, I told the ministers, the Bnei Menashe are our lost brethren," he wrote.
"In March 2005, Sephardic Chief Rabbi Shlomo Amar recognized them as Zera Yisrael, or the 'seed of Israel,' and said they should be brought to the Jewish state." It was this same ministerial committee that eventually took a decision in principle to reopen Israel's gates to the Bnei Menashe, after they were closed by the previous government.

Asked to address the discrepancy concerning Rabbi Amar's ruling, Freund said in a written response, issued through his lawyers, that neither he nor any member of his organization had misrepresented it, citing a letter he had received from the former chief rabbi's bureau dated July 4, 2011. [That would be more than six years after the ruling was initially reported on – JM.] The letter notes that Freund had asked the former chief rabbi whether the Bnei Menashe could "conceptually and ideologically" be considered "seed of Israel," and that Rabbi Amar's response to this question was affirmative. However, the letter notes that the Bnei Menashe most certainly do not fit the halakhic definition of "seed of Israel." "Don't forget that halakhically, the term seed of Israel means the son of a Jewish father and non-Jewish mother, and beyond any doubt the Bnei Menashe do not fit this halakhic definition," it quotes Rabbi Amar as saying.

In their response to Haaretz's queries, Freund's lawyers also said that in addressing the Knesset committee, he "did not pretend to present a halakhic stand on the matter and did not make a presentation as if there were a halakhic stand on the matter." They also noted in their response that Rabbi Amar, in numerous instances, had expressed his great appreciation for Shavei Israel and its activities among the Bnei Menashe.

A rabbi with many hats

Among those who also helped spread the word that Rabbi Amar had deemed the Bnei Menashe "seed of Israel" was Rabbi Eliyahu Birnbaum, a judge on the Chief Rabbinate conversion courts. He also happens to work for Shavei Israel. Birnbaum is cited as the source of various stories that appeared in the press about Rabbi Amar's ruling in 2005. In an article he published in November 2007 in Makor Rishon, a Hebrew-language publication that caters to the Orthodox population, Birnbaum wrote, "The Israeli Chief Rabbi Shlomo Amar recognized the Jewish roots of the Bnei Menashe and their being seed of Israel."

Born in Uruguay, where he also served as that country's chief rabbi, Birnbaum has for many years been a judge on the Chief Rabbinate's conversion courts. He is also the founding director of several programs at the Ohr Torah Stone Yeshiva, established by Rabbi Shlomo Riskin and located in the Jewish settlement of Efrat. The yeshiva's website, which details his professional experience, does not mention one important position he holds: Birnbaum is also the rabbi and educational director of Shavei Israel. His wife, Rabbanit Renana Birnbaum, is also on staff at Shavei Israel, serving as director of Machon Miriam, the organization's Spanish- and Portuguese-language conversion school in Israel. In response to questions from Haaretz, Birnbaum said via email that he has been working with Shavei Israel since it was founded 12 years ago, on a voluntary basis.

Sephardi Chief Rabbi Shlomo Amar (photo: Emil Salman)
Asked to explain why he reported that Rabbi Amar had ruled that the Bnei Menashe were "seed of Israel," Birnbaum wrote: "I reported the conclusions regarding the Bnei Menashe in the way they were reported to me, to the best of my understanding." In response to a question about why he himself had not set the record straight, Rabbi Amar responded, "We are not responsible for what they write, and it is not our job to speak in their names or to correct them."

Birnbaum was also asked about what could be viewed as a potential conflict of interest in working both for the Chief Rabbinate and for an organization that lobbies the government and the Chief Rabbinate to recognize "lost Jews" like the Bnei Menashe. He responded that, in his view, there was no conflict because "there is no connection between my role as a rabbinic judge and decisions of principle taken by the Chief Rabbinate." Also questioned about Birnbaum's possible conflict of interest, Freund responded through his lawyers that the Shavei Israel rabbi receives no salary for his work and provides help "out of a sense of mission and pure Zionism." They also noted that Birnbaum sits on the rabbinical conversion court for minors, "and to the best of our client's knowledge does not sit on the rabbinical court engaged in converting the Bnei Menashe." They said he had provided full disclosure about his activities to all the relevant parties.

An unusual government decision

As a rule, the Interior Ministry does not allow groups into Israel for the purpose of conversion, out of fear that some might exploit this opportunity to reap economic benefit. Under the Law of Return, Jews who immigrate to Israel are entitled to automatic citizenship and a sizable package of benefits. This is likely the first and only time the government has allowed, and even financed, the mass immigration of a large community whose members do not qualify as Jews under the Law of Return, nor have proven Jewish ancestry according to the broader definition of "seed of Israel."

So why did the Interior Ministry appear to bend the rule in the case of the Bnei Menashe? Spokeswoman Sabine Hadad issued the following response: "The person who made the recommendation was Rabbi Amar. The interior minister submitted the request because immigration comes under our jurisdiction, but I promise you that the person responsible is Rabbi Amar." Asked whether the Interior Ministry has any criteria of its own for determining which groups are permitted to immigrate to Israel, she answered, "It's the Prime Minister's Office that coordinates all this." The Prime Minister's Office referred all questions on the matter back to the Interior Ministry.

The background material provided to the ministers before they voted in October 2013 included a section of explanation from the Interior Ministry, in which it is clearly stated that, according to Rabbi Amar's ruling, the Bnei Menashe are not "seed of Israel" and have no proven Jewish ancestry. While it is not clear whether any of the ministers read this material, what is certain is that they went ahead and approved plans to bring 899 members of the community to Israel nonetheless. To understand the extent to which the government deviated from long-standing policy in the case of the Bnei Menashe, it's worth drawing a comparison with the Falashmura.
In the case of the Falashmura, the government ruled that only those members of the community who could prove they were descendants of Jews forced to convert would be eligible to immigrate to Israel. The Bnei Menashe, however, are unable to prove this kind of Jewish lineage. The Falashmura were required to undergo conversion upon arriving in Israel, as the Bnei Menashe are.

In the case of the Falashmura, it was a private organization – NACOEJ (North American Conference on Ethiopian Jewry) – that received authorization from the government to compile lists of candidates for immigration among the group. But it was the Interior Ministry that ultimately determined whether those on the list were eligible to immigrate. In the case of the Bnei Menashe, however, it is representatives of Shavei Israel and the Chief Rabbinate that make that decision, according to a spokeswoman from the Immigrant Absorption Ministry.

Although Shavei Israel is the only organization with representatives on the ground in northeastern India preparing the Bnei Menashe for aliyah, Freund denied that it had anything to do with compiling these lists. "The organization does not and never had the authority to determine eligibility for immigration to Israel," he said, insisting that it was the government of Israel that determined eligibility. The Interior Ministry maintained, however, in response to a question from Haaretz, that in the case of the Bnei Menashe, it does not determine eligibility.

In the case of the Falashmura, it was the Jewish Agency that was charged with the logistics of bringing the immigrants to Israel and held responsible for their welfare as soon as they arrived. In the case of the Bnei Menashe, both these functions are filled by Shavei Israel, which also pays for the flights of the immigrants. According to the government decision taken in 2013, Shavei Israel is only officially responsible for the welfare of the Bnei Menashe for a period of roughly three months in Israel, until they pass their conversion tests and obtain new immigrant status. After that period, they are effectively left to their own devices and do not receive extra support from the Jewish Agency, as do other immigrant groups deemed to have special needs.

Exemption from government tender

Because the Bnei Menashe do not qualify as Jews under the Law of Return, when they first arrive in Israel they are not eligible for the usual package of benefits provided to new immigrants. Recognizing their need for basic assistance during this interim period, the cabinet, in its most recent decision of October 2013, agreed to allocate 7 million shekels to help ease the transition. This funding, according to the decision, would be contracted out to a service provider through a tender issued by the Immigrant Absorption Ministry. But Shavei Israel was awarded the contract, after the treasury agreed that it be exempted from the tender.

An Immigrant Absorption Ministry spokeswoman explained, "The ministry published its intention [on the appropriate treasury website page], and no responses were heard from other organizations that may have thought they could provide such a service." She defended the request for a tender exemption, saying, "Shavei Israel has for years been the only organization that tends to the needs of the Bnei Menashe, starting from when they are in India." She also noted that Shavei Israel had agreed to put up matching funds of 7 million shekels.
In response, Freund's lawyers said their client was awarded the contract without a tender because it was the only organization that had agreed to enter into a joint venture with the State of Israel in this operation by offering matching funds. "It should be clarified and stressed," they said, "that before entering into a contract with the organization, the tender committee and the treasury published several announcements to the public regarding the future contract, in which they requested offers from other parties interested in entering into a joint venture with the State of Israel for the purpose of absorbing the Bnei Menashe."

Bnei Menashe arriving in Israel in 2006 (photo: Dan Keinan)

Yet Micha Gross, the director of Amishav, another organization with considerable experience working with the Bnei Menashe, told Haaretz that he was not aware that the Immigrant Absorption Ministry had considered publishing a tender and that "of course" he would have submitted a bid had he known.

Returning a 'Lost Tribe'

The Bnei Menashe can be traced to three different tribes that originally migrated from Burma (now Myanmar) and now reside in two northeastern Indian border states, Manipur and Mizoram. Among social scientists who have studied the community, it is widely believed that the missionaries who converted them to Christianity in the 19th century were the ones who linked them to the Lost Tribes. DNA testing has not shown any concrete evidence that they originated in the Middle East.

It was Rabbi Eliyahu Avichail, the founder of Amishav, who "discovered" them more than 30 years ago and gave them the name they are known by today. An Orthodox rabbi, Avichail had a great passion for finding "lost Jews," and he would travel the globe in search of them. In his travels, he observed that some of the traditions of these Indian tribes, such as the observance of three annual festivals and certain life-cycle practices, bore similarities to Jewish rituals, and that some of their folklore appeared to be based on biblical stories. He subsequently began converting them. Most of the Bnei Menashe he brought to Israel were moved to the West Bank and the Gush Katif settlement bloc in the Gaza Strip (before it was evacuated in 2005).

Freund, a former New Yorker with a similar passion for "lost Jews," joined Avichail's organization after he left the Prime Minister's Office in 1999. Not long thereafter, the two had a falling-out and Avichail left the organization. Freund subsequently founded Shavei Israel. Although Shavei Israel reaches out to "lost Jews" around the world, most of its efforts are focused on the Bnei Menashe. The organization's single largest source of funding is Freund himself, who, according to its 2013 financial report, contributed from his own pocket close to half of the organization's 7.8 million shekel budget. The rest of its funding comes mainly from Christian evangelical groups, most prominently Bridges for Peace and the International Christian Embassy Jerusalem.

Thinking 'more creatively'

Freund, who began his career in Israel working as a press adviser to Netanyahu during his first term as prime minister, is married to the daughter of Pincus ("Pinky") Green, a billionaire commodities trader who, together with his former partner, the late Marc Rich, received a presidential pardon from Bill Clinton in 2001 after being indicted in the United States on charges of tax evasion. Green serves as one of Shavei Israel's funders. Freund lives in Ra'anana, north of Tel Aviv, with his family.
Politically, Freund belongs to the ideological right. In his Jerusalem Post columns, he has condemned the government for evacuating Gaza, lashed out against those who support a Palestinian state, and praised the settler movement. In his September 2001 column for The Jerusalem Post, he also spelled out his motivations for working with so-called "lost Jews": "It seems fair to say that, aside from the danger posed by non-conventional weapons in the hands of Israel's neighbors, the issue of demography might very well be the greatest threat to the future of Israel as a Jewish state," he wrote. "As the percentage of Jews continues to decline, it will grow increasingly difficult for Israel, as a democracy, to ignore mounting calls by its Arab minority for cultural autonomy and perhaps even self-rule. And if the day were to come when Arab Israelis could elect more representatives to the Knesset than Jewish Israelis, the Jewish identity of the State would be in grave doubt."

In the piece, Freund noted that the pool of potential immigrants from the Soviet Union was drying up, and there was little reason to expect a wave of mass immigration from the West. "While Israel must certainly continue to promote immigration, both as a means of achieving personal Zionist and Jewish fulfillment and as a national responsibility," he wrote, "it must also begin to think more creatively about how to address the ongoing erosion in the country's Jewish demographic profile."

The recent Bnei Menashe arrivals have not been dispatched to settlements in the West Bank but rather to locations in northern Israel, among them mixed Jewish-Arab towns like Acre and Upper Nazareth, where right-wingers tend to perceive an "Arab threat" to the local Jewish population. Through his lawyers, Freund noted that the Bnei Menashe are settled in locations "in accordance with an organized plan of the Immigrant Absorption Ministry." The ministry, in response, said it had no such plan. "We let them go wherever they want with our blessing," it said.

Before Freund found a government willing to embrace his plans to resettle large numbers of the Bnei Menashe in Israel, it seemed he would often use his column in The Jerusalem Post to settle scores with members of previous governments who did not share them. In 2006, for example, when the late Ze'ev Boim of the centrist Kadima party, then serving as immigrant absorption minister, blocked a group of Bnei Menashe from coming to Israel, Freund called the decision "illegal" and "immoral," and threatened to sue him. A year later, Freund's target was Interior Minister Meir Sheetrit, who imposed a complete ban on Bnei Menashe immigration. In his column, Freund labeled Sheetrit's move "post-Zionism of the ugliest sort, tinged by prejudice and sheer ignorance," and delivered the following warning: "I'd like to put Mr. Sheetrit and his colleagues on notice. The Divine process of Israel's return to Zion is far greater than any single person or even government, and no human power can stand in its way."

Asked how he explained this opposition to bringing the Bnei Menashe to Israel, Freund responded that in the past some ministers had opposed his plans for either ideological reasons or because they were unfamiliar with the community.
Today, however, he said, the issue was a "matter of consensus." "There is wide national support that spans the political spectrum – including the coalition and the opposition, religious and secular – and there is not and cannot be any reason for anyone to oppose bringing the Bnei Menashe to Israel," Freund said through his lawyers.

Freund was last year's recipient of the Moskowitz Prize for Zionism (also known as the "Lion of Zion Award"), established by the American billionaire and right-wing activist Irving Moskowitz, who contributes heavily to the settler movement. The director of the Moskowitz Prize committee, political strategist Ruth Jaffe Lieberman, also serves on the board of Shavei Israel. Asked if there might be a conflict of interest in her filling these two functions, she responded, "During the talks about Michael Freund, I removed myself from the discussions. And I didn't cast a vote."

'No money for food'

Freund maintains that the Bnei Menashe have integrated remarkably well into Israeli society, despite coming from a remote part of the world and having been raised in a very different culture. As he wrote in a piece published on the Shavei Israel website, "They live observant lifestyles, volunteer for combat units in the Israel Defense Forces, and work hard to support themselves and their families. Only 4-5 percent are reliant on social welfare benefits, which is half the national average."

But interviews with members of the community and people familiar with their plight reveal a different picture: that of a community that has seemingly fallen through the cracks and been left largely to fend for itself. Members of the Bnei Menashe community say they fear that if they speak out, it could hurt the chances of their relatives being placed on the lists of those deemed eligible for immigration to Israel. And that, along with Shavei Israel's successful public relations campaign, may explain why most Israelis are likely unaware of their true predicament.

More than 2,500 Bnei Menashe live in Israel today. According to insiders and others familiar with the community, poverty and alcoholism are widespread among these immigrants, and their children tend to lag behind in school. Because they are not eligible for immigrant benefits when they arrive in Israel, most of the adults immediately look for work and, therefore, have no time to learn Hebrew. The Knesset Research and Information Center report published in December 2010 cites a study by researchers at the Emek Yezreel College, which found that most Bnei Menashe find employment quickly, but only in jobs paying minimum wage or less. The study also found that the Bnei Menashe tended to keep to themselves and not mix with other sectors of Israeli society. The Bnei Menashe Council, a grassroots leadership group that was defunct for many years, has recently been revived and is trying to raise money to address these problems.

Since immigration resumed two years ago, Shavei Israel has been providing the new arrivals with housing at its own privately run absorption centers, while they study for their conversion exams. It also gives them limited financial assistance during this period. After they complete the process, the converts are handed over to Garinim Torani'im – groups of young Orthodox families who try to effect religious and social change in disadvantaged communities.
On a recent visit to such a group in Upper Nazareth, members of the Garin Torani were observed scurrying about, trying to find space at the local school for the 14 new Bnei Menashe children who had just arrived in town. For lack of a better alternative, the children were put into a windowless shack in the courtyard that also serves as a shelter. A young bearded teacher was using space in one of the administrative offices to teach a few of the newcomers basic Hebrew and Torah. This Garin Torani recently had 30 newly converted Bnei Menashe families put under its care by Shavei Israel, reports Chanan Ziderman, a nonprofit consultant recently retained by the Bnei Menashe Council to help with fundraising efforts. "There was nothing to eat, so I just gave them my credit card and told them to buy pizza," he said.

It is not only the newcomers who are struggling. "It would not be an understatement to say we are the weakest and most miserable community in Israel," said Isaac Thangjom, a member of the Bnei Menashe who immigrated to Israel independently in 1997.

Four years ago, when the Knesset Committee for Immigration, Absorption and Diaspora Affairs visited Kiryat Arba in the West Bank, its members heard from the mayor and his aides about the many problems facing the Bnei Menashe community. More than 700 of them resided there at the time – one of their largest bases in the country. According to the official protocol of that visit, the mayor reported cases of two and three Bnei Menashe families living in one small apartment, while aides noted that schools and nurseries in Kiryat Arba were being stretched to the limit and could not provide Bnei Menashe children with the special help they needed learning a new language. To quote an activist in the Bnei Menashe Council, who today lives in Nitzan after being evacuated from Gush Katif nine years ago (and who asked that his name not be published), "Our people are in a bad way, but they are very afraid to complain."

Responding through his lawyers, Freund said, "Every immigrant experiences difficulties coming to a new country, and it doesn't matter if the immigrants come from Manhattan, Marseilles or Manipur." Shavei Israel, he added, was in daily contact with social workers and municipal officials, and, to the best of the organization's knowledge, most of the Bnei Menashe experienced a successful absorption in Israel – "among other things, thanks to the personal support, spiritual guidance and economic assistance of the organization." In their response, the lawyers also noted that Shavei Israel had no legal responsibility to support the Bnei Menashe after they received their immigrant status. Despite that, it did provide certain members of the community with academic scholarships and job training courses. In addition, it dispatches support staff to several towns to assist with absorption.

Amishav's director, Gross, said the reason the Bnei Menashe have not spoken up about their living conditions is that they apparently fear it could affect their chances of being reunited with their relatives. "One of the things that's important to the Bnei Menashe is that relatives of theirs who are still in India can come here," he noted.

Shavei Israel is now campaigning to bring the 7,000 remaining Bnei Menashe to Israel, along with thousands of other so-called "lost Jews." "Mr. Freund believes with all his heart that these Bnei Menashe deserve to come to Israel, and he will therefore continue to work toward that, including financing this activity – without any intent to profit or gain personal benefit," his lawyers said.
Brugada syndrome (BrS) was first described more than 25 years ago as a clinical entity in people resuscitated from sudden cardiac death due to documented VF.1 The original 1992 case series described eight patients without apparent structural heart disease who all had VF associated with persistent coved ST-segment elevation in the right precordial leads.1 In 1996 this arrhythmic syndrome was named Brugada syndrome. The next year, BrS was recognised as the same clinical entity as sudden unexplained nocturnal death syndrome, first reported in 1917 in the Philippines.2 The syndrome was considered a familial disease because of syncope and/or sudden death in multiple relatives of the same family, and the first genetic alteration was identified in 1998.3

Typical presentation of the syndrome is syncope or resuscitated sudden death, and symptoms usually occur at night or at rest, especially after a large meal. Fever is a common trigger, particularly in children. As subsequent registry data were published, it became apparent that the spectrum of risk is wide, with most patients classified as low risk. Despite intense research efforts, as documented by about 5,000 publications on BrS, controversies still exist over its pathophysiology, risk stratification and care. Over the last 20 years, the 12-lead surface ECG has been the primary source of information for diagnosis and prognosis, but the specificity and accuracy of the abnormal ECG pattern are relatively low.4

Brugada Syndrome Burden

At present, it is challenging to establish the actual burden of the syndrome, mainly because we do not know the real number of asymptomatic people, owing to the high variability and fluctuations of the typical ECG pattern. The incidence appears to be low (<1%), but the condition is responsible for >10% of all sudden deaths and up to 20% of sudden deaths in structurally normal hearts. The prevalence is 8–10 times higher in men than in women. More data are becoming available about unexpected deaths in different populations, so the real incidence of BrS needs to be updated.5

The most typical presentation of BrS is syncope or resuscitated cardiac arrest in the third or fourth decade of life due to polymorphic ventricular tachycardia (VT) or VF. Symptoms typically occur at night or at rest during the day and, uncommonly, during exercise. Monomorphic VT is rare and is more prevalent in children and infants, for whom fever is the most common trigger. Diagnosis may also be made on familial screening of patients with BrS or incidentally following a routine ECG. Symptoms typically first develop during adulthood, commonly around 40 years of age, but they may also occur in children or older people. More than 80% of adult patients are men, but there is an equal male:female ratio in children. However, the clinical presentation of BrS has changed.6 In more recently diagnosed patients, there has been a decrease in resuscitated cardiac arrest as the first clinical presentation of the disease, thereby making inducibility and risk stratification crucial.6 Many people will remain asymptomatic throughout their life.

In 2012, an expert consensus panel clarified ECG characteristics and diagnostic criteria and established two ECG patterns for BrS.7 Type 1 (coved-type) represents the only diagnostic pattern for BrS, while type 2 (saddle-back type) is only suggestive of BrS. The type 2 pattern is characterised by an ST-segment elevation >0.5 mm (usually >2 mm in V2) in >1 right precordial lead (V1–V3) followed by a convex ST.
To facilitate differentiation of type 2 ECG from other Brugada-like patterns, additional criteria have been suggested that utilise the triangle formed by the ascending and descending branches of the R-wave.8 Frequent day-by-day fluctuations in the ECG pattern may occur in the same patient, including a normal pattern (concealed BrS).9 Placement of the right precordial leads in more cranial positions can increase sensitivity, owing to the variable anatomical correlation between the right ventricular outflow tract (RVOT) and V1–V2 in the standard position. Abnormal ECG intervals, including P wave duration and PR or QRS duration, may commonly be observed. In up to 20% of patients, AF or supraventricular tachycardia due to atrioventricular (AV) nodal re-entry or Wolff-Parkinson-White syndrome has been reported.10

Further investigation is needed in cases where there is a suspicion of BrS (syncope, dizziness, agonal respiration, resuscitated cardiac arrest, family history of BrS or suggestive ECG pattern) but the patient does not have a spontaneous type 1 ECG pattern. Such patients should undergo a pharmacological test with a sodium-channel blocking drug under continuous monitoring.11 The test is positive when a type 1 ECG pattern appears during infusion (Figure 1). In the presence of QRS widening (>130%) or the occurrence of frequent premature ventricular contractions (PVCs) or complex ventricular arrhythmias, pharmacological testing should be stopped.11 It should be emphasised that about a quarter of tests may deliver a false negative. This is important when evaluating a patient who has experienced frank syncope or an aborted sudden death.

Ajmaline (1 mg/kg over 10 minutes, maximum 100 mg; Figure 1) is the ideal drug for this purpose because of its shorter duration of action and higher sensitivity than flecainide, but it is not available in many countries. Flecainide (2 mg/kg over 10 minutes, maximum 150 mg) is likewise not available in IV formulation in many countries, although it is generally available as an oral formulation. A contraindication to pharmacological testing is PR prolongation in the baseline ECG, because of the risk of inducing AV block. A drug challenge should be performed under strict monitoring of blood pressure and 12-lead ECG, and facilities for cardioversion and resuscitation should be available. Moving leads V1–V3 up to the second intercostal space improves diagnostic yield. The patient needs to be monitored for 3 hours or until the ECG has normalised, as late positive tests have been reported. The plasma half-life of flecainide is 20 hours, while that of ajmaline is 5 minutes. Isoprenaline infusion may be employed to counteract these drugs if serious ventricular arrhythmias develop.

The first genetic alteration in BrS was identified in 1998 in the SCN5A gene by Chen et al.3 The current challenge in clinical genetics is the interpretation of genetic alterations and their translation into clinical practice.12 To date, nearly a quarter of BrS patients have been found to be carriers of SCN5A variants. Over 300 variants have been associated with BrS, the majority located in SCN5A, but the causal role of these mutations in BrS is not always clear. Even within families, the phenotypes observed in carriers of the same SCN5A variant are highly diverse. Environmental and epigenetic alterations also determine variable disease severity.
However, the high number of variants may be an overestimate, according to recent guidelines from the American College of Medical Genetics.12 ECG findings predictive of SCN5A mutations include longer and progressive conduction delays (PQ, QRS and HV intervals). The degree of ST elevation and the occurrence of arrhythmias do not differ between subjects with and without an SCN5A mutation.12 Therefore, the presence or absence of an SCN5A mutation does not have any effect on the incidence of sudden cardiac death in BrS. It should be emphasised that BrS is not the only condition attributed to SCN5A mutations. It is well known that long QT syndrome type 3, progressive cardiac conduction disease (Lenegre's disease), idiopathic VF, sick sinus syndrome, dilated cardiomyopathy and familial AF are all linked to SCN5A mutations, and overlapping syndromes have been reported.12

BrS is commonly accepted as an autosomal dominant channelopathy; however, recent data suggest that it follows a more complex polygenic inheritance model.12 BrS can result from the presence of several variants that confer susceptibility to the phenotype in a given person. At present, genetic analysis in BrS has little to contribute to diagnosis, prognosis and therapeutic management, in contrast to long QT syndrome type 3.12 It does not yet appear to play an important role in risk stratification. As a result, further large studies are required to clarify the exact role of novel genetic variants in BrS pathogenicity for potential therapeutic strategies.

Many patients with a type 1 ECG pattern are asymptomatic. Therefore, the 2015 European Society of Cardiology (ESC) guidelines proposed a new diagnostic definition for BrS.13 This is essentially based on the typical ECG pattern, either spontaneous or after a sodium-channel blocker, showing in at least one right precordial lead (V1 and V2) positioned in the second, third or fourth intercostal space, without requiring any evidence of malignant arrhythmia. This definition was challenged in an expert consensus conference report endorsed by the Heart Rhythm Society, European Heart Rhythm Association, Asia Pacific Heart Rhythm Society and the Latin American Society of Cardiac Pacing and Electrophysiology.4 The task force was concerned that the ESC definition could result in over-diagnosis of BrS, particularly in patients who only display a type 1 ECG after a drug challenge. Data suggest this latter group is at very low risk and that the presumed false-positive rate of pharmacological challenge is not trivial.14 ECG should thus be routinely performed when a diagnosis of BrS is suspected but is uncertain on a standard ECG, and in screening family members of BrS patients (Figure 1). Typical ECG changes of BrS can also be brought on following a meal and on standing. Rarely, ST changes of BrS may be detected in inferior or lateral leads.

Other ECG Findings in Brugada Syndrome

In BrS, the PR interval may be increased (≥200 ms), particularly with genetic variants affecting the sodium channel SCN5A, frequently reflecting the presence of an increased HV interval. Also described are:
- P wave abnormalities (prolonged or biphasic P waves);
- late potentials detected by signal-averaged ECG;
- QRS widening; and
- fragmented QRS.

AF occurs in about 10–20% of BrS patients and is associated with an increased risk of syncope and sudden cardiac death. Sick sinus syndrome, neurally mediated syncope and atrial standstill have also been described. Conduction delays in the RVOT have also been reported.
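As an aside on the drug challenge described earlier, the weight-based dosing with an absolute cap is simple arithmetic. The following is a minimal sketch in Python, for illustration only and not clinical guidance, using only the figures quoted in the text above (ajmaline 1 mg/kg, maximum 100 mg; flecainide 2 mg/kg, maximum 150 mg):

```python
# Doses quoted in the text, each infused over 10 minutes.
# Illustration only - not clinical guidance.
CHALLENGE_DOSES = {
    "ajmaline": {"mg_per_kg": 1.0, "max_mg": 100.0},
    "flecainide": {"mg_per_kg": 2.0, "max_mg": 150.0},
}

def challenge_dose_mg(drug: str, weight_kg: float) -> float:
    """Weight-based dose, capped at the absolute maximum for the drug."""
    d = CHALLENGE_DOSES[drug]
    return min(weight_kg * d["mg_per_kg"], d["max_mg"])

print(challenge_dose_mg("ajmaline", 70))    # 70.0 mg
print(challenge_dose_mg("flecainide", 90))  # 150.0 mg (the cap applies)
```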
Misdiagnosis of Brugada Syndrome

Diagnosis of BrS requires exclusion of other causes of ST-segment elevation (Brugada phenocopies). It is well known that spurious BrS-type ECG changes can be observed following cardioversion, can last for a few hours and may lead to an incorrect diagnosis of BrS. Misdiagnosis of BrS can occur with:
- ECG changes of early repolarisation;
- athlete's heart;
- right bundle branch block;
- acute pericarditis;
- Prinzmetal angina;
- arrhythmogenic right ventricular cardiomyopathy (ARVC);
- Duchenne muscular dystrophy; and
- electrolyte disturbances.

As in all cases of Brugada phenocopies, a challenge with a sodium-channel blocking agent will usually be negative.

Asymptomatic people make up the majority (about 63%) of newly diagnosed Brugada patients. Although the reported annual rate of events in asymptomatic BrS has decreased over time, it is not negligible (0.5%–1.2% annual incidence), leading to a malignant arrhythmic event rate of 12% at 10-year follow-up in a population with a mean age of 40 years.4,5 Unfortunately, for most patients the first symptom is cardiac arrest or sudden cardiac death. Therefore, risk stratification of asymptomatic patients is of utmost importance. Identification and management of asymptomatic subjects at high risk of sudden death represent the major challenges in BrS.4,5,13–15 In cardiac arrest survivors or patients with presumed arrhythmic syncope, these strategies are of little use, since these people are already recognised to be at high risk. Syncope in combination with a spontaneous type 1 ECG pattern is a universally accepted risk factor, because up to 62% of symptomatic BrS patients will experience a new event 48–84 months after diagnosis, potentially leading to sudden death. However, there are no clear-cut recommendations for the asymptomatic group. The recent guidelines neither encourage nor discourage electrophysiological study and VT/VF inducibility testing for risk stratification in patients with BrS.13 These recommendations are also supported by several large prospective registries and by a recent pooled individual patient data analysis including eight prospective studies.15 Several non-invasive risk stratification markers have been proposed, including signal-averaged ECG, but the results derive from small observational studies and require validation in larger series.16

Management of Brugada Syndrome

Management of patients with BrS continues to be challenging. There are limited therapeutic options, essentially ICD implantation and quinidine.13 An ICD is always indicated in symptomatic BrS, i.e. resuscitated cardiac arrest and/or non-vagal syncope or nocturnal agonal respiration.
An electrophysiological study may be performed in asymptomatic patients with a spontaneous type 1 ECG to assess the need for an ICD.13–15 Although effective for preventing sudden cardiac death, an ICD also carries a relevant risk of complications over the patient's lifetime, particularly if the patient is young at the time of device implantation.17 Beyond a high prevalence of inappropriate shocks, ICD implantation at a young age also exposes patients to recurrent risks of infection secondary to device changes and lead complications, frequently requiring subsequent extraction procedures that themselves carry a risk of death.17 ESC guidelines strongly recommend that all BrS patients should be educated about modulating or precipitating factors and taught to avoid them.13

Quinidine has a high rate of effectiveness in the electrophysiology laboratory and has been used to suppress VF in several clinical scenarios, including arrhythmic storms or multiple ICD shocks, or as an alternative to an ICD in children. Unfortunately, the use of quinidine is limited by its unavailability in many parts of the world and its relatively high incidence of side-effects.

Treatment of Arrhythmic Storms

Isoprenaline infusion is effective in acute situations, and quinidine is the only effective drug for long-term treatment. Cilostazol has also been shown to be effective and is recommended for long-term treatment.2

Epicardial Ablation in Brugada Syndrome: Moving from Promise to Reality

Since its first description in 1992, assessment of BrS has focused on parameters based on the ECG, 24-hour Holter recording and/or electrophysiological testing. However, over the last 30 years the spectacular success of RF ablation in eliminating supraventricular arrhythmias has led electrophysiologists to search for arrhythmic substrate sites as a target for catheter ablation in patients with BrS and VF, because antiarrhythmic drugs have been ineffective in preventing recurrent VF episodes. The recent discovery, using 3D electroanatomical mapping, of a well-defined, potentially reversible arrhythmic substrate in patients with BrS is one of the key new research areas of the 21st century, and it will allow the management and care of BrS to move from promise to reality.18–24

Initial observations by Nademanee et al. in BrS patients with frequent electrical storms proved epicardial ablation to be effective in controlling ventricular arrhythmias during follow-up in eight of the nine patients.18 Subsequently, our group used 3D potential duration mapping with the CARTO system (Biosense Webster) to demonstrate for the first time that in BrS ajmaline was able to reveal highly variable arrhythmogenic substrates, characterised by abnormally prolonged and fragmented epicardial potentials (Figure 2). The substrate size ranged from a small area corresponding to the superior part of the RVOT to an extensive area spanning the medial to inferior aspects of the anterior RV free wall, without involving other regions of the RV or LV.19–21 Additionally, such abnormal electrograms were only recorded when coved-type ST-segment elevation was present, either spontaneously or after ajmaline provocation. These findings are clinically relevant and suggest that sodium-channel blockade (i.e. ajmaline and flecainide) or warm water instillation into the pericardium can further unmask abnormal areas and increase the size of the VF substrates to be targeted, thus leading to a more successful epicardial ablation that eliminates the Brugada pattern.19–22

Interestingly, many patients with a concealed ECG pattern became inducible only after a consistent ajmaline-induced increase of the substrate size.21 Re-induction of the coved-type ECG pattern by ajmaline after ablation was commonly caused by residual abnormal electrograms in the corresponding epicardial RVOT area.21 By contrast, disappearance of the coved-type ECG pattern was due to elimination of the remaining epicardial substrate by catheter ablation.21 The presence of such abnormal potentials is commonly transient and is correlated with ST-segment elevation and VT/VF inducibility.21 We also demonstrated that, independently of clinical presentation, inducibility by a single or double extrastimulus reflected larger substrates than inducibility by three extrastimuli (Figures 3–5).

These original observations support the concept that BrS is a complex disease characterised by large but potentially reversible abnormal substrates representing the primary mechanism of malignant VT/VF. Unlike traditional stable substrates, which are characterised by scar or fibrosis as in post-ischemic VT, the impressive variation in size and shape of the BrS substrate, as exposed by ajmaline, clearly suggests that a component is functional rather than a fixed structural replacement with fibrosis. We cannot exclude that, in the natural history of BrS, over-exposure to specific triggers can facilitate substrate progression from functional to structural changes, as observed in patients with frequent electrical storms.23 Therefore, in BrS, epicardial ablation of all areas of abnormal potentials should be guided by repeated infusion of ajmaline to unmask the entire substrate, in order to eliminate the multiple re-entrant circuits leading to rapid, unstable ventricular arrhythmias and VF.

It is conceivable that a substrate-based, interventional approach can pave the way to a cure for BrS, potentially removing the need for ICD implantation or chronic quinidine therapy, as suggested by preliminary short-term follow-up results in >300 patients worldwide.25 However, epicardial ablation may be associated with potential risks and complications due to epicardial access and RF applications. Therefore, this procedure should be performed in highly experienced centres, and ajmaline should be administered only after patients have been counselled appropriately about the potential arrhythmogenic implications of drug administration.
IT Research Center for the Holy Quran and its Sciences (NOOR)
Department of Computer Science
College of Computer Science and Engineering
P. O. Box: 344, Al-Madinah Al-Munawarrah, Saudi Arabia

Electronic dissemination of digital information has benefited from recent advancements in Information and Communication Technologies (ICTs), and the dissemination of Islamic information in different formats has no doubt taken advantage of this rapid progress in technology. Today, information is available in many formats and on many different applications and devices. However, the online availability of digital resources in Quranic studies and research is very limited. In this work, the currently available digital Quranic resources from authentic sources are used to formulate strategies on how to extract, format, present and make these resources available to researchers in a centralized knowledgebase. The methodology starts by collecting these authentic electronic Islamic resources from CDs/DVDs, libraries, databases, organizations, websites, etc. Then, strategies for efficient dissemination of the different types of digital resources are developed: depending on the format of the files, the metadata is extracted automatically if possible; otherwise, data is entered manually into the knowledgebase. The manual entry of data for resources with poor or missing metadata would thus help researchers easily locate the resources they are searching for. Developing these strategies for each type of format could make searching for information through the Internet considerably more efficient. Finally, efficient dissemination of electronic information that is rich in metadata will help researchers in the search process and will aid in accessing, exploring and disseminating quality research in Quranic studies.

With the continual increase in digital Islamic content, ways to ease the gathering and organizing of information are essential for researchers, and will in turn save time and effort during the literature survey of a research project. Research in the area of the Quran and its sciences has seen a vast increase in the last few years due to technological advancements in digital technologies in the areas of publishing, indexing, searching and multimedia tools. In this work, strategies for data collection and data entry are developed to aid in the dissemination of Quranic resources.

It is clear that the scattering of Quranic resources in different formats (paper and digital), available from different sources, makes it difficult to collect data. In addition, no uniform or standardized formats are found in the existing information on the Internet, which makes it even more difficult to gather and organize in one place. The metadata for many of these digital files is not available, and many files are posted as Word documents, PDFs, various image formats, databases, etc., which makes them difficult to classify and make available in a central repository. Therefore, human intervention is needed to help in the process of data entry and to organize these resources. The available data also show that the number of journals, magazines or other resources available in a language other than Arabic is very low, and no doubt a search in languages other than English would find fewer and fewer resources.
In addition, there could be many other journals publishing work related to Islamic sciences; however, most of such work goes unnoticed by researchers, especially those who do not know any language besides Arabic or English. Many journals published in Arabic, or in languages using Arabic script such as Persian and Urdu, are not indexed; in a Google search, for example, the results are limited to languages using Latin scripts, since the interface is in English. Also, many Arabic resources are available only in specific libraries, where such information may not be readily accessible to all researchers in the area of Quranic or Islamic studies. Similarly, conference proceedings are not available in standard digital formats and/or are not properly disseminated beyond a local environment, so they may not be easily available to the majority of researchers and would be very difficult to include in this study.

There are many Quranic resources (ancient Quranic manuscripts, Quran explanation (Tafsir), books related to the Quran, conference research papers, magazines, journal papers, etc.) available in many different places around the world. However, no centralized body has undertaken to collect all of this information, which is mostly in paper format, and make it available electronically under one umbrella. A few digital resources exist on the Internet through different organizations; thus, what is available is only a drop in the ocean compared with what could be found in many places around the world. For example, the city of Timbuktu in Mali holds many old Islamic treasures (ancient manuscripts). According to the Library of Congress, "The texts and documents included in Islamic Manuscripts from Mali are the products of a tradition of book production reaching back almost 1,000 years."

This paper is organized as follows: section 2 provides the literature survey, section 3 discusses library classification systems, section 4 explains data collection and strategies for data entry, section 5 presents the methodology for strategies on data collection related to Islamic and Quranic resources, section 6 discusses the results, and finally section 7 concludes this paper.

The information on the Internet has been increasing exponentially, and no doubt the Quranic content is very large. A search for books on the Quran at the online bookseller Amazon returned 37,852 items, including books in English, German, French, Spanish, Italian, Arabic, Hebrew and Hindi, with the largest number of books available in English, then Arabic, followed by a few books in the other languages. In addition, a search on the keyword "Islam" produced 74,089 results to date (January 2013), compared to 50,436 items in 2009. This shows an increase of at least 54% in Islamic publications in a period of less than 4 years.

The main sources of up-to-date information for researchers include journals, magazines, conference papers and Master's/Ph.D. dissertations (theses). The existing number of journals or conferences presenting Quranic research is not known; as mentioned above, such information is scattered and not easy to find. In an attempt to determine the number of journals related to the Quran or Islamic studies, the following information was collected: of the 12,214 journals listed by the Thomson Reuters/ISI Web of Science List: Science (January 2012), only six journals written in English were found using the keyword search "Islam", "Islamic" or "Muslim".
In addition, the Access to Mideast and Islamic Resources (AMIR) project listed 475 titles of open access journals in Middle Eastern Studies as of January 4, 2013; however, the number of Islamic journals among them is around 55. These journals are published in different languages, including Arabic, English, Turkish, Urdu, French and Korean. There is no journal specifically on the Quran, although some of the listed journals cover topics on the Quran and its related sciences. The King AbdulAziz Foundation for Research and Archives publishes a directory of scientific peer-reviewed journals published in Saudi Arabia. This directory contains 64 journals, of which only 3 are on the Quran, and all of them are published in Arabic. AskZad, the first and largest Arabic digital library offering an extensive referential, cultural and academic database, contains the Pan-Arab Academic Journal Index (PAJI), which holds full Arabic-language indices of more than 700 Middle Eastern university-published journals and approximately 350 organization-published journals.

Another main resource in the research community is the dissertations published by graduate students. Many dissertations are available through university libraries; however, many of them are not accessible to all researchers. Since many of these dissertations are not in digital format, a researcher must visit the universities in person to obtain the reference they are looking for, which is especially difficult if inter-library loan services are not available through those universities. Therefore, the AskZad database provides the Pan-Arab Dissertations (PAD) index, which contains almost 7,000 dissertations published by graduate students in the Middle East in any language. Currently, universities in Saudi Arabia are converting all paper dissertations to digital format in an attempt to make resources available to researchers from all around the world.

In regard to conferences, forums, symposiums and workshops on Islamic and Quranic studies, the last few years have seen a surge in the number of events discussing Islam, the Quran and their sciences. Table 1 lists a few events on Islamic and Quranic studies in 2013.

Table 1: A list of some Islamic and Quranic related conferences in 2013.
International Conference on Islamic Information and Education Sciences

In conclusion to this section, research in the area of Islamic and Quranic studies has seen interest from many researchers all around the world, and the need to make resources available in one place is becoming more visible and vital.

Library Classification Systems:

Library classification systems are used to catalog resources such as books, periodicals, films, etc. The two main standard library classification systems widely in use are the Dewey Decimal System (DDS) and the Library of Congress (LOC) Classification System. In addition, there are other classification systems developed for specific fields and/or organizations, such as the Colon Classification, the Harvard-Yenching Classification (an English classification system for Chinese-language materials) and V-LIB 1.2, to name just a few. There are also other universal classification systems in other languages, such as the New Classification Scheme for Chinese Libraries, the Nippon Decimal Classification (NDC), the Chinese Library Classification (CLC), the Korean Decimal Classification (KDC) and the Library-Bibliographic Classification (BBK) from Russia.
The DDS was developed in the second half of the 19th century as a library cataloging system to organize all knowledge; it relies on a simple framework that starts with ten subject classes (religion, sciences, etc.). These classes are then broken down into ten divisions, which are in turn broken down into ten subdivisions. Resources are assigned numeric call numbers based on where their content falls in this taxonomy of knowledge. The LOC classification system, on the other hand, which was developed at the turn of the 20th century, differs in its design from the DDS. It was created to categorize books and other items held in the Library of Congress. It features 21 subject categories, with resources being identified by a combination of both letters and numbers. The number of categorization classes is not restricted, nor are the numerous subclasses included in the system.

Each system has its shortcomings. For example, since the Dewey system was developed in the 19th century, it cannot easily accommodate new fields such as computing, which was not accounted for under the ten subject category headings. While the system has been updated over time, a closed taxonomy has forced computers and other tech topics to be shoehorned into a category labeled 'General.' The Library of Congress system, however, has 'Technology' as a subject heading. Consequently, many librarians concede that both systems have flaws: "While some librarians and other bibliophiles have a strong preference for either Dewey or the LOC system, many others concede that both systems have flaws and that libraries should follow practices that are best for their respective collections. Many public libraries, for example, continue to use Dewey while some academic libraries have made the switch to LOC to allow for greater specialization in identifying resources."

Most libraries in the Islamic world use the DDS classification system; however, some organizations have developed classification schemes designed to suit their own collections. For example, the Center of Studies and Quranic Information at the Institute of Imam Al-Shatiby in Jeddah, Saudi Arabia, devised a classification system for Quranic studies based on the Dewey system, which includes five main divisions from which different sub-divisions are branched, and so on. In another example, during a visit to the Imam Ibn Alqayim Library in Riyadh, Saudi Arabia, the librarian informed the authors that the library had devised its own classification system to suit its needs, since all its books are on Islamic and Quranic studies. It is also noticeable that websites and search engines such as Google and Yahoo use their own classification (directory) schemes suited to the way their information is organized.

In the work of Idrees, the author concluded that neither the standard classification systems nor indigenous expansions or schemes are fulfilling the purpose of classifying Islamic resources. In response to the shortcomings of the standard systems, different practices have been adopted. Organizations and libraries have developed their own systems without following or developing any standards (e.g., the International Islamic University, Islamabad). In other cases, some organizations have developed expansions to the standard systems. "Efforts were made to get such expansions formally incorporated in the original schemes, but such efforts could not succeed.
Subsequently, there have been very different approaches in the expansions of even the same standard systems, and no uniformity is found in this regard. Thus, the same kind of knowledge could be seen organized differently at different places." That study proposes the development of a new, independent and comprehensive system that covers all the related and possible aspects of Islamic knowledge and the materials being produced on the associated topics.

For IT and computing classification, the Association for Computing Machinery (ACM) developed the 2012 ACM Computing Classification System (CCS), which replaces its traditional 1998 version. It is being integrated into the search capabilities and visual topic displays of the ACM Digital Library. It reflects the state of the art of the computing discipline and is receptive to structural change as it evolves in the future. However, for Quran-related IT classification a modified system has to be developed to serve this purpose, since IT is entering all disciplines.

In conclusion, due to the many classification schemes in universal use in libraries or specific to organizations and websites, and the unavailability of a universal Arabic and/or Islamic classification system, the best way to classify Islamic research material would be to devise a new, comprehensive, expandable classification system that allows the inclusion of all Quranic and related IT resources. The current standard library classification systems and the ACM CCS could be used as references to design such an expandable Islamic library classification system compatible with international standards.

Data Collection and Strategies for Data Entry:

The information collected comes in different formats and file sizes, which causes difficulties during the data entry process. Therefore, a strategy is needed to organize the data collected during the data entry process. Most of the data gathered, whether from Internet resources or from visits to different organizations in Saudi Arabia, does not come with metadata for the files posted on the Internet or distributed on CDs. This is because the information is not organized in databases, and therefore the available metadata is either minimal or nil. The aim is not only to collect digital resources related to Islamic studies but also to classify them and formulate strategies to ease the data collection and data entry, which are the main stages of this project. Data collection is not that difficult if the essential means (strategies) are set out to guide the collector on how to deal with different formats of information. With the existence of a large number of Islamic organizations and the availability of enormous amounts of information on the web, it is clearly not easy to find information from one source. In addition, each organization may keep its information in formats different from the others, which may cause problems for people who do not have the software to handle such file formats. With the Internet being the main source of data collection, metadata is clearly the essential requirement during a search process; otherwise, it may be difficult to search for any given item. Metadata provides information about the content of digital documents (files) such as text, images, audio, video, etc. The objective of producing metadata for each Islamic resource available in digital format is to help researchers identify and search for items very easily.
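To make this concrete, each resource's metadata can be held as one structured record. The sketch below, in Python, is illustrative only; the field names are assumptions made for this article, not the project's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ResourceRecord:
    """One entry in the proposed knowledgebase (field names are illustrative)."""
    resource_type: str                  # book, journal article, dissertation, audio, video
    title: str
    authors: list = field(default_factory=list)
    language: str = "ar"
    identifier: Optional[str] = None    # ISBN, ISSN or DOI where available
    source_url: Optional[str] = None    # where the digital file was collected from
    file_format: Optional[str] = None   # pdf, doc, image, database export

# A hypothetical entry, to show how the record would be filled in.
example = ResourceRecord(
    resource_type="book",
    title="An Introduction to Quranic Sciences",   # invented title for illustration
    authors=["A. Example"],
    identifier="ISBN 978-0-00-000000-0",           # placeholder identifier
)
```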
The main objectives of metadata are:
- Digital identification – by using ISBN, ISSN, file name, URL or DOI (Digital Object Identifier); and
- Archiving and preservation – in order to track the resources and their physical characteristics.

For example, a text document's metadata may contain the following data: title, author, date, size of document (number of pages, number of words, etc.), abstract or summary, and so on. The example in Figure 1 below shows a case where there is minimal metadata for a book available on a Quran website. Figure 1(a) shows the list of books available – only 3 books. Here, the title of the book is provided and the type of the resource (book) is mentioned. On clicking the download arrow, the download process starts and the open-file window appears, Figure 1(b). Finally, when clicking on "open file", a window appears with the message "File extension is unknown," Figure 1(c). This is just one example of many Internet links that cannot be accessed or that do not provide any metadata on the files posted. Figures 2(a) and (b) show two examples of complete metadata from a book search. Figure 2(a) is obtained by clicking on the books section of the website and then choosing the book of interest, upon which the details are shown, providing the metadata available on the book. Similarly, the book details in Figure 2(b) were obtained in the same way. This shows that the metadata is available within the organization, yet it may be difficult to obtain it from them for migration to another database or indexing system, since they are the sole owners of the information. The purpose of this work is to provide researchers and students with the means they need in the research process.

Figure 1: An example of a resource with minimal metadata. (a) A limited list of books available on a website; (b) after downloading the file, the option is given to open it; (c) a message reporting an unknown file extension.

Figure 2: Examples of websites providing complete metadata for the resources available.

Therefore, in order to gather metadata on resources available from different organizations, collaboration is required. In doing so, the metadata could be entered automatically by the owner of the information using a script, provided to them, that migrates the fields available in their database into the proposed knowledgebase (a sketch of such a mapping script is given below). Otherwise, data has to be entered semi-automatically by cutting and pasting the available data into a data entry form, referencing the source by providing the link from which the information was obtained.

Another example of a database is provided by Umm-Alqura University, Makkah, Saudi Arabia. This database provides details of Master's and doctoral theses. The database search window provides the ability to search by a keyword in the title or by the name of the author (student), Figure 3(a). A list of results with the titles of the theses and the names of the authors is then provided, from which a specific item is chosen. Figure 3(b) shows an example of a chosen item: the complete metadata is available; however, each metadata field is given as a number. These numbers mean nothing to the investigator (researcher) unless the corresponding information associated with them is available.

Figure 3: Example of a thesis database with metadata, showing a sample search on the topic of the Quran.

In other cases, the information is available but not organized in a simple way to ease the search process.
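Returning to the migration script mentioned above: where a partner organization can export its catalog, the mapping may reduce to a short field-renaming script. The following is a hedged sketch, assuming, purely for illustration, a CSV export whose column names are invented here:

```python
import csv

# Invented column names for a hypothetical partner export -> knowledgebase fields.
FIELD_MAP = {
    "kitab_title": "title",
    "muallif": "authors",
    "lang": "language",
    "isbn": "identifier",
}

def migrate(csv_path: str) -> list[dict]:
    """Map each exported row onto the knowledgebase's own field names."""
    records = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            records.append({ours: row.get(theirs, "") for theirs, ours in FIELD_MAP.items()})
    return records

# records = migrate("partner_export.csv")  # hypothetical file name
```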
In other words, the metadata is often available; however, it is not organized in a format that makes it easy to find.

Methodology for Data Collection Strategies for Islamic and Quranic Resources

The methodology implemented follows the steps explained in detail below. In the initial stage of this project, data was gathered from different sources (the Internet, libraries, visits to organizations, etc.) and in different formats (paper and digital). The different formats of the collected material were then studied to formulate a strategy for the data collection and data entry stages of the project. The strategy formulated for data collection and data entry is as follows.

First, the overall content of the data was studied in order to classify the material into its main classifications. Here, the collected data is examined and, depending on the file titles and content, categorized into main and sub-classifications. The types of resources available can be any of the following: books, journals/magazines, conferences/workshops/forums, dissertations, video and audio. These are the most essential resources needed by researchers, as presented in Figure 4.

Figure 4: The most essential resources needed by researchers

Then, the different file formats are separated according to these classifications. For example, let us assume the files are organized as follows: we start with a folder that contains all the available files; for each classification there are different sub-classifications (sub-folders); and each sub-classification folder contains five folders for the different formats found in the collected data, namely: text files (Word, PDF, txt, etc.), PDFs created from images (such as ancient manuscript files) and files in various image formats, database files (simple or complex), audio files and video files. Ancient manuscripts, audio and video files are not the target of the project at this stage. The databases sub-folder may contain two folders, because some databases are simple, not containing enough metadata, while others are complex and rich in metadata. The proposed organization structure of the data is shown in Figure 5.

Figure 5: Organization Structure of Files

Next, the main metadata fields needed for the different types of resources are specified, as shown in Figure 6. The fields shown below the lines in the resources are considered optional.

Figure 6: List of Metadata Required for the Different Types of Digital Resources

Finally, depending on the format (paper or digital), the data entry process to create the metadata for each resource (book, article, research paper, etc.) is divided into automatic, semi-automatic or manual:
- paper formats: manual;
- text files (Word, txt, etc.): semi-automatic;
- PDFs created from images: manual;
- simple database files: semi-automatic; and
- rich database files: automatic.

A sketch of the semi-automatic path is given below. In designing the knowledgebase, metadata is considered the essential requirement for all resources entered into it. The burden of entering metadata should be reduced, and only the essential (minimum) fields for the different types of resources should be entered. The data entry form should be friendly and easy to use, and should include all types of fields that could be needed for any type of resource.
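For the semi-automatic path, embedded document metadata can often be pulled programmatically and then completed by hand. The following is a minimal sketch using the pypdf library — an assumption on our part, since the paper does not name its tooling — with a hypothetical file name:

```python
from pypdf import PdfReader  # pip install pypdf

def extract_pdf_metadata(path: str) -> dict:
    """Pull whatever embedded metadata a PDF offers; missing fields stay None."""
    reader = PdfReader(path)
    info = reader.metadata  # may be None, e.g. for scanned/image-only PDFs
    return {
        "title": info.title if info else None,
        "author": info.author if info else None,
        "pages": len(reader.pages),
        "source_file": path,
    }

# A human operator would then review the record and fill any gaps manually.
record = extract_pdf_metadata("tafsir_sample.pdf")  # hypothetical file
print(record)
```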
In practice, the form should offer drop-down lists to choose from, use meaningful field names and provide help text where needed.

Results and Discussion

Most of the data collection so far has been done in Saudi Arabia, in Madinah, Jeddah and Riyadh. The data collection process started by visiting several organizations, among them the Taibah University Library, the Islamic University Library, the King Fahd National Library, the Ibn Alqayim public library and the Islam House website project. The resources received were in different formats: sample books, booklets, CDs, and large numbers of files in different formats copied onto external hard disks. As explained in the methodology section of this paper, different data entry approaches were used for different data formats. A data entry interface was designed, as shown in Figure 7, to help with the manual and semi-automated data entry process. The data entered is organized in a database; Figure 8 shows an example of a few items saved in the database. The data entry process has only just started, with limited entries, and requires more human resources. Meanwhile, the collection process is still progressing according to the plan set up for this project. In parallel, other alternatives are being investigated, depending on the data formats being collected, since too many resources are not properly cataloged, classified or disseminated, and/or contain a poor amount of metadata, which makes Islamic and Quranic resources difficult to find at best and lost at worst.

Figure 7: Manual Data Entry Interface

Figure 8: Sample of data organization in the database after using the manual data entry form.

This work has so far concentrated on the collection of authentic resources on the Quran and its related sciences, in addition to designing a dedicated data entry system. The process of data collection and entry was found to be tedious, but very important for building a proper foundation for the efficient collection and dissemination of research findings and results on the Quran and its related sciences. Finally, this work contributes an attempt to organize, classify and catalog, as well as to raise awareness about the importance of developing unified classifications/standards for Islamic and Quranic research resources across the Arab Muslim world.

The authors would like to thank and acknowledge the IT Research Center for the Holy Quran (NOOR) at Taibah University for their financial support during the academic year 2012/2013 under research grant reference number NRC1-112.

References
http://international.loc.gov/intldl/malihtml/islam.html, accessed December 11, 2012.
www.amazon.com, accessed January 30, 2013.
Haroon Idrees and Khalid Mahmood, "Devising a Classification Scheme for Islam: Opinions of LIS and Islamic Studies Scholars", Library Philosophy and Practice (e-journal) – Libraries at University of Nebraska-Lincoln, Oct. 2009, pp. 1–15.
http://supportservices.ufs.ac.za/dl/userfiles/Documents/00002/1828_eng.pdf, accessed December 22, 2012.
http://www.darah.info/WebTrBooks.aspx, accessed December 3, 2012.
http://www.tafsir.net/vb/tafsir24164/, accessed December 3, 2012.
http://askzad.com/e_genpages/AboutUS.aspx, accessed November 13, 2012.
http://en.wikipedia.org/wiki/Library_classification, accessed November 22, 2012.
Education Insider News Blog, December 2010, "Dewey Decimal System vs.
Library of Congress: What's the Difference?", http://education-portal.com/articles/Dewey_Decimal_System_vs_Library_of_Congress_Whats_the_Difference.html, viewed December 23, 2012.
Imam Alshatibi Institute, Database for Quranic Information Resources, http://www.quran-c.com/, accessed January 7, 2012.
http://www.googleguide.com/directory.html, accessed January 2, 2013.
Yahoo Directory, http://dir.yahoo.com/, accessed January 2, 2013.
Haroon Idrees, "Organization of Islamic Knowledge in Libraries: The Role of Classification Systems," Library Philosophy and Practice – Libraries at University of Nebraska-Lincoln, http://digitalcommons.unl.edu/libphilprac/, 1-1-2012, pp. 1–14, ISSN 1522-0222.
Idrees, H., "User Relationship Management, Dr. Muhammad Hamidullah Library, Islamic Research Institute," Pakistan Library & Information Science Journal, 38, no. 3 (September 2007): pp. 25–31.
ACM, 2012 ACM Computing Classification System (CCS).
Understanding Metadata, ISBN 1-880124-62-9, NISO Press, pp. 1–16, http://www.niso.org/publications/press/UnderstandingMetadata.pdf.
http://www.alsabaaforquraan.com/Library.aspx?libID=19, accessed January 7, 2012.
http://www.islamhouse.com/p/1241, accessed January 9, 2013.
http://www.irtipms.org/PubDetE.asp?pub=54, accessed January 9, 2013.
Umm Alqura University, Makkah, Saudi Arabia, http://uqu.edu.sa/isr/islamic_culture_cen.html, accessed January 13, 2012.
Understanding the Keto Diet

The ketogenic diet is a low carb, moderate protein, and high fat diet which puts the body into a metabolic state known as ketosis. When your body is in a state of ketosis, the liver produces ketones, which become the main energy source for the body. The ketogenic diet is also referred to as the keto (key-toe) diet, low carb diet, and low carb high fat (LCHF).

So why is it so awesome and why is it taking the world by storm? Because it completely reverses how your body functions (in a good way) along with changing how you view nutrition. It's based on the premise that your body was designed to run more efficiently as a fat burner than a sugar burner.

Fat Burner vs Sugar Burner

When you eat something that is high in carbs (that yummy donut), your body will produce glucose and insulin.

- Glucose is the easiest molecule for your body to convert and use as energy, so it's the preferred energy source for your body.
- Insulin is produced to process the glucose in your bloodstream by transporting it around your body.

This sounds pretty efficient, right? The problem is that when glucose is used as a primary energy source, fats are not needed for energy and therefore are stored. With the average person's diet, glucose is the main energy source. This initially doesn't seem like a problem until you realize that the body can't store that much glucose. This becomes an issue for you because the extra glucose gets converted into fat which is then stored. Because your body uses glucose as its main energy source, the glucose that is converted into fat doesn't get used. When your body runs out of glucose it tells your brain you need more, so you end up reaching for a quick snack like a candy bar or some chips. You can begin to see how this cycle leads to building up a body that you don't really want.

So what's the alternative? Become a fat burner instead of a sugar burner. When you lower your intake of carbs, the body begins to look for an alternative energy source and enters a metabolic state known as ketosis.

Ketosis is a natural process and makes perfect sense when you think about the human body. You've probably heard that you can go weeks without food but only a couple of days without water. The reason for this is ketosis. Most people, for better or for worse, have enough fat stored on them to fuel their body for a while.

When your body is in a state of ketosis, it produces ketones. Ketones come from the breakdown of fat in the liver. You might be thinking: why isn't the body constantly breaking down fats in the liver? Well, when your body is producing insulin, the insulin prevents the fat cells from entering the bloodstream, so they stay stored in the body. When you lower your carb intake, glucose levels, along with blood sugar levels, drop, which in turn lowers insulin levels. This allows the fat cells to release the water they are storing (it's why you first see a drop in water weight); the fat can then enter the bloodstream and head to the liver. This is the end goal of the keto diet. You don't enter ketosis by starving your body. You enter ketosis by starving your body of carbohydrates.

When your body is producing optimal ketone levels you begin to notice many health, weight loss, and physical and mental performance benefits. All of these benefits are why we help people with the ketogenic diet in our Keto Dash program.

- Benefits of a Ketogenic Diet
- What Do I Eat on a Keto Diet?
- Keto Macros
- Getting Started With Keto
- How to Reach Ketosis
- Types of Ketogenic Diets
- Exercise on Keto
- Dangers of a Keto Diet
- What Happens to My Body?
- Keto Flu
- Common Side Effects on a Keto Diet
- Saving Money and Budgeting on Keto

Benefits of a Ketogenic Diet

When people say that the keto diet changed their life they are not exaggerating. When you decide to switch over to the ketogenic diet, you quickly realize that it is more than just a diet. It's a completely new lifestyle that offers numerous benefits.

Weight Loss

Most people look into a specific diet to lose weight, and the keto diet is one of the most effective ways to lose weight in a healthy manner. Because the ketogenic diet uses body fat as an energy source, your body will begin to burn the unwanted fat, with obvious weight loss benefits. On keto, your insulin (the fat-storing hormone) levels drop, which allows your fat cells to travel to the liver and get converted into ketones. Your body effectively becomes a fat-burning machine.

Control Blood Sugar

Unfortunately, many people suffer from diabetes, which is caused by the body's inability to handle insulin. Keto naturally lowers blood sugar levels because you eat fewer carbs, so your body produces less glucose. Keto has been shown to have huge benefits for people that are pre-diabetic or have Type II diabetes. Because the ketogenic diet helps you maintain more consistent blood sugar levels, you find that you have more control of your everyday life while on keto.

Increased Mental Performance

This is one of those benefits that has to be experienced. You can't understand how cloudy carbs make your thinking until you can wean yourself off of them. On the ketogenic diet you experience increased mental performance; in fact, many people take up keto simply for this reason. The reason is that ketones are a great fuel source for your brain, and the increase in fatty acids has a huge impact on brain function.

Increase in Energy

You've already learned that keto helps your body turn fat into an energy source. But did you know that this also helps to increase your energy levels? Because your body can only store so much glucose, when it runs out your body has run out of fuel (energy) and needs more. Carbs also cause spikes in blood sugar levels, and when those levels drop you experience a crash. Keto provides your body with a more reliable energy source, allowing you to feel more energized throughout the day.

Better Appetite Control

When eating a diet that is heavy in carbs, you can often find yourself hungry a lot sooner than you expected after a meal. Because fats are more naturally satisfying, they leave our bodies in a satiated state for much longer. That means no more random cravings, and no more feeling like you're going to collapse if you don't get something in you immediately.

Epilepsy

Keto has been used to treat epilepsy since the early 1900s. It's still one of the most widely used treatments for children suffering from uncontrolled epilepsy today. A big benefit of the ketogenic diet for people that suffer from epilepsy is that it allows them to take fewer medications, which is always a good thing.

Cholesterol & Blood Pressure

The ketogenic diet has been shown to improve triglyceride and cholesterol levels, meaning less toxic buildup in the arteries and blood that flows throughout your body as it should. Low carb, high fat diets show a dramatic increase in HDL (good cholesterol) and a decrease in LDL (bad cholesterol).
Studies have shown that low-carb diets produce better improvement in blood pressure than other diets. Because some blood pressure issues are associated with excess weight, the keto diet is an obvious warrior against these issues due to its natural weight loss.

Insulin Resistance

Insulin resistance is the reason why people suffer from Type II diabetes. The ketogenic diet helps people lower their insulin levels to healthy ranges so that they are no longer in the group of people on the cusp of acquiring diabetes. One of the more common improvements that people on the keto diet also experience is better skin.

What Do I Eat on a Keto Diet?

Unfortunately, on the ketogenic diet you can't eat whatever you want. However, unlike with many other diets, once you find yourself in ketosis your cravings for the things you can't eat usually disappear, and if they don't? Well, there are plenty of alternatives for the things you are used to eating.

Remember that the goal of the ketogenic diet is to get your body into a state of ketosis, and to do that you need to reduce your carb intake. It's important to understand that carbohydrates are not only in the junk foods that you love, but also in some of the healthier foods that you enjoy. For example, on keto you need to avoid wheat (bread, pasta, cereals), starch (potatoes, beans, legumes) and fruit. There are small exceptions like avocado, star fruit, and berries, as long as they are consumed in moderation.

Foods to Avoid
- Grains – wheat, corn, rice, cereal
- Sugar – honey, agave, maple syrup
- Fruit – apples, bananas, oranges
- Tubers – potato, yams

Foods to Eat
- Meats – fish, beef, lamb, poultry, egg
- Leafy Greens – spinach, kale
- Above ground vegetables – broccoli, cauliflower
- High fat dairy – hard cheeses, high fat cream, butter
- Nuts and seeds – macadamias, walnuts, sunflower seeds
- Avocado and berries – raspberries, blackberries, and other low glycemic impact berries
- Sweeteners – stevia, erythritol, monk fruit, and other low-carb sweeteners
- Other fats – coconut oil, high-fat salad dressing, saturated fats, etc.

To get the complete list of foods you can eat while on keto, check out our keto shopping list. Here are just some of the amazing keto recipes that you can cook yourself and enjoy.

Keto Macros

Understanding macros is a key component of being successful on the ketogenic diet. What are macros? They are the main sources of calories in your daily diet. The macros that you need to keep an eye on are fats, protein, and carbohydrates. Because the ketogenic diet is a high fat diet, the majority of your daily calories will come from fats. The general ratio of macros to follow is 70% fats, 25% protein, and 5% carbohydrates: 70% of your calories will come from fats, 25% from protein, and 5% from carbs. When starting off on keto, your daily net carbs shouldn't exceed 20g. That means even if your recommended daily macro carb count is 27g, you still want to stay below 20g (see the short calculator sketch at the end of this guide).

Total Carbs vs Net Carbs

It's important to understand that not all carbs are treated equally when looking at a nutrition label. On nutrition labels you'll see Total Carbohydrates along with a further breakdown into Fiber and Sugars. On keto, you care about net carbs: Total Carbohydrates – Fiber = Net Carbs. Because fiber doesn't have an effect on your blood sugar levels, it is considered a net-zero carbohydrate.

Vegetables on a Ketogenic Diet

Vegetables are tricky on a ketogenic diet because we've been raised under the idea that vegetables are healthy, and they are. However, almost all of the vegetables that you consume today contain carbs.
Some contain more than others, so it's important to know the ones that have a safer number of net carbs:

| Vegetable | Serving | Net Carbs (g) |
| --- | --- | --- |
| Spinach (raw) | 1/2 cup | 0.1 |
| Bok choi | 1/2 cup | 0.2 |
| Lettuce (romaine) | 1/2 cup | 0.2 |
| Cauliflower (steamed) | 1/2 cup | 0.9 |
| Cabbage (green, raw) | 1/2 cup | 1.1 |
| Cauliflower (raw) | 1/2 cup | 1.4 |
| Broccoli (florets) | 1/2 cup | 2 |
| Collard greens | 1/2 cup | 2 |
| Kale (steamed) | 1/2 cup | 2.1 |
| Green beans (steamed) | 1/2 cup | 2.9 |

If you're looking for more details on keto and vegetables, check out 11 Low Carb Vegetables That You Can Safely Eat on the Ketogenic Diet.

Getting Started With Keto

So how do you get started with keto? That's a great question, and it's something we detail in our Keto Dash program. However, if you want to run wild on your own, here is what you need to do:

- Understand meal planning and plan your meals so you don't have missteps
- Calculate your daily macro goals
- Drink enough water
- Get enough sleep

When getting started on the keto diet, you don't want your daily macros to exceed 20g of carbs. You want to cut out all sugar and have most of your carbs come from vegetables. The reality is, if you want to get started you can dive right in once you've calculated your macros and planned some meals. If you aren't in the mood to plan meals right now, you can simply follow what your body tells you and eat when you feel hungry, although this usually means you fall short of your macro goals.

It's important that you make food that you enjoy. Being on keto isn't about missing out on food you love. It's about finding the food you love that is great for your body.

How to Reach Ketosis

In our book, The 3-Day Weight Loss Manual, we show you a strategy that will help you achieve ketosis as quickly as possible. While you don't need to reach ketosis quickly, many people consider ketosis their first successful milestone on the keto journey. The following steps will help you greatly in achieving that:

- Restrict your carbohydrates: You can't reach ketosis while your body still has a supply of glucose to burn, so you need to restrict your net carb intake to 20g or less per day.
- Restrict your protein: Protein is a sneaky one on this diet, because if you eat too much it ends up being converted into glucose, which will keep you out of ketosis.
- Stop worrying about fat: To lose fat on keto you need to consume healthy fats, so you have to get rid of the mental block you have regarding them. You don't lose weight on keto by feeling hungry all of the time.
- Drink water: Water is a huge deal on keto. You need to consume a lot of it, stay hydrated, and be consistent with the amount of water you drink. To make it easier, consider drinking water with fresh lemon in it or grab some MiO with Electrolytes.
- Be careful with snacking: Keep in mind that even while eating keto you can suffer from small insulin spikes. Less snacking means fewer of those, giving you a better chance of losing weight.
- Consider fasting: Fasting in this case means intermittent fasting. Instead of eating throughout the day, you block off an 8-hour window, and in that window you eat all of your meals.
- Add exercise: A simple 20-30 minute walk every day can help regulate weight loss and your blood sugar levels. An increased workout routine usually means an adjustment in macros, so just because keto gives you more energy, don't assume that things can stay the same when you run a marathon.
- Look at supplements: Supplements can help you reach ketosis quicker, but they aren't necessary. Always check food labels to see the ingredients being added; if sugar is at the top of the list, then run away.

How to Know if You're in Ketosis

There are a couple of different ways to see if you're in ketosis. One common way is by using ketone test strips, but these aren't meant to determine whether your body is in ketosis; they just let you know the level of ketones that your body is getting rid of. Another method is using a blood glucose monitor. The issue with this is that the blood strips can be expensive over time, and once you're in ketosis you start to understand your body a bit more, so you won't keep running back to the monitor.

You can also check for ketosis by keeping an eye on these symptoms:

- Increased urination: Keto is a natural diuretic, so you'll find yourself going to the bathroom more than usual, especially with how much water you are consuming. Acetoacetate (say that 3x fast), a ketone body, is excreted through the urine, so this is another cause of more frequent bathroom breaks.
- Dry mouth: The more fluids your body is releasing, the more you may experience dry mouth. This is your body telling you that you need more electrolytes, and it's why we add MiO with Electrolytes to our water. Keeping salty things around, like pickles, also helps.
- Bad breath: Acetone is a ketone that is partially excreted through your breath. It doesn't have the most pleasant smell, but thankfully it disappears in the long run.
- Reduced hunger and increased energy: This is the most telltale sign of ketosis. You find that you don't get hungry as often and you can go much longer without food because you have more energy.

The last thing you want to do is drive yourself crazy measuring and testing your ketone levels. Once you get a handle on things, you'll learn to see the signs that your body is giving you. To learn more about ketosis, check out 7 Signs You Might Be in Ketosis When Doing the Ketogenic Diet.

Types of Ketogenic Diets

A common question, especially from people that work out, is whether or not you can build muscle while on keto, and the answer is yes. Many workout programs have you consuming a large number of carbs to fuel your workouts. While on keto you don't need to bulk up on the carbs, but you can fill up your glycogen stores so that you have glucose ready for a workout. If you wish to add mass to your body while on keto, it is suggested that you consume 1.0-1.2g of protein per lean pound of body mass. These different needs are also why there are different types of keto diets:

- Standard Ketogenic Diet (SKD): This is the classic keto diet that everyone knows, and most people stick with it as they are aiming for weight loss.
- Targeted Ketogenic Diet (TKD): This is a small variation where you follow SKD but take in a small amount of fast-digesting carbs before your workout.
- Cyclical Ketogenic Diet (CKD): This is the more complicated variation that is usually used by bodybuilders. In this variation you give yourself one day a week to carb up to resupply glycogen stores.

Which one is for you? If you work out pretty hard, then you might want to do TKD or CKD.

Exercise on Keto

The concern of people that exercise is that keto will affect their physical performance, and while this isn't true in the long run, in the short term you might experience a small drop. Your body needs a small amount of time to adjust.
The good news is that studies on trained cyclists have shown that those on the ketogenic diet experienced no compromise in aerobic endurance and no loss of muscle mass. If you want to learn more about exercising on a ketogenic diet, read Ketogenic Diet 101: Exercising on a Keto Diet.

Dangers of a Keto Diet

Are there dangers to the ketogenic diet? If your body produces too many ketones, it enters ketoacidosis. This is highly unlikely to occur in normal circumstances: for most people it's a challenge even to get into the optimal ranges for ketosis, so reaching the range where medical intervention is needed isn't likely.

What Happens to My Body?

Because you're completely rewiring how your body works, your body isn't going to be ready right away to handle the breakdown of fats for energy. While switching over to keto you'll have a transition period where your body uses up all of its glycogen stores and doesn't yet have enough enzymes to break down fat to produce ketones. This means your body doesn't have an immediate fuel source, which causes a lack of energy and general lethargy.

In the first week of keto, many people report headaches, mental fogginess, dizziness, and aggravation. Sounds terrible, right? This is caused by the loss of electrolytes, so it's important that you continue to replace them throughout the day. Keeping your sodium intake up throughout the day (don't hesitate to salt things up) can prevent all of these side effects: sodium helps with water retention in the body along with replenishing the much-needed electrolytes.

Keto Flu

The groggy feeling and fatigue actually have a name: keto flu. Keto flu is a very common experience that some people go through when transitioning over to keto. It usually goes away in just a few days, but if you don't take active measures to fight it, it can stay around much longer. When transitioning to keto, you may feel some slight discomfort along with fatigue, headaches, nausea and cramps. It doesn't sound fun, but it's important to understand why it is happening:

- Keto is a diuretic. Every time you urinate you're losing electrolytes and water. To combat this you can make a nice drink from a bouillon cube (it makes a great broth) or use MiO with Electrolytes and increase your water intake. The goal is to replace the electrolytes that you're losing.
- You're transitioning. All of the years of carb intake have trained your body to convert carbs into glycogen, so when you transition over to keto, your body needs time to make the proper adjustments. You can't simply make your car go electric by adding another battery.

For most people, eating less than 20g of net carbs a day will get them on track for ketosis within a matter of days. To speed the process up some, aim for less than 15g of net carbs daily. Continue to monitor your electrolyte intake along with how much water you are drinking. To learn how to beat the keto flu, go on and read Keto Flu: What It Is and How to Beat It the Healthy Way.

Common Side Effects on a Keto Diet

As with any drastic change you make to your body's chemistry, there are going to be side effects. Of course, if you think about it, your current way of eating has side effects as well. On keto there are known ways to combat each of these side effects, so you're in good hands.

Cramps

Due to keto being a diuretic, your body losing fluids can cause cramps.
To prevent these, do the same thing that you're already doing to prevent keto flu: up your water and sodium intake. If you find that cramps still persist, you might look into taking a magnesium supplement.

Constipation

Because the most common cause of constipation is dehydration, you can help prevent it by increasing the amount of water you drink every day. You also want to ensure that the vegetables you eat contain quality fiber. If you find that these aren't enough, you can add psyllium husk powder to your drinks and meals.

Heart Palpitations

This sounds scarier than it really is. Your heart may begin to beat faster and harder when transitioning over to keto; it's pretty standard. If the problem persists over a long period of time, make sure you're drinking enough water and eating enough salt. If the problem continues to persist, you may need to add a potassium supplement to the mix.

Reduced Physical Performance

Since your body hasn't fully transitioned to burning fat yet, it loses its energy source pretty quickly if you are exercising hard. As your body shifts to using fat for energy, you'll find that all of your strength and endurance return to normal.

Saving Money and Budgeting on Keto

Some people believe that eating keto is more expensive, but this isn't true. Initially, you might find yourself needing to restock the pantry with keto-friendly items, but beyond that, eating keto isn't more expensive than eating normally. You'll find that you can buy meat in bulk and store the unused portion in the freezer. Because you're on keto, you'll find that you are cooking more for yourself instead of going out. This yields significant savings along with helping you build your budget. Cooking on keto also doesn't have to be time-consuming. Love your Instant Pot? Then here are Instant Pot keto recipes just for you.
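The macro split, the net-carb subtraction, and the protein target for muscle gain described in this guide are all simple arithmetic, so they are easy to sanity-check in code. Below is a minimal Python sketch of those calculations; the function names are made up for illustration, and the 9 kcal/g (fat) and 4 kcal/g (protein, carbs) constants are standard nutrition approximations rather than anything this guide defines.

```python
# A minimal sketch of the arithmetic described in this guide.
# Function names are illustrative; 9 kcal/g for fat and 4 kcal/g for
# protein and carbs are standard nutrition approximations.

def net_carbs(total_carbs_g: float, fiber_g: float) -> float:
    """Net carbs = total carbohydrates minus fiber (fiber counts as zero)."""
    return max(total_carbs_g - fiber_g, 0.0)

def keto_macros(daily_calories: float) -> dict:
    """Turn the 70/25/5 fat/protein/carb calorie split into gram targets."""
    grams = {
        "fat_g": daily_calories * 0.70 / 9,      # fat: ~9 kcal per gram
        "protein_g": daily_calories * 0.25 / 4,  # protein: ~4 kcal per gram
        "carbs_g": daily_calories * 0.05 / 4,    # carbs: ~4 kcal per gram
    }
    # The guide advises staying below 20 g net carbs when starting out,
    # even if the percentage-based number comes out higher.
    grams["carbs_g"] = min(grams["carbs_g"], 20.0)
    return grams

def protein_for_muscle_gain(lean_mass_lb: float, g_per_lb: float = 1.1) -> float:
    """1.0-1.2 g of protein per pound of lean body mass, per the guide."""
    return lean_mass_lb * g_per_lb

print(keto_macros(2000))             # fat ~155.6 g, protein 125 g, carbs capped at 20 g
print(net_carbs(10.0, 4.0))          # 6.0 g net carbs
print(protein_for_muscle_gain(150))  # 165.0 g protein per day
```

On a 2,000-calorie day, for example, the 27g percentage-based carb figure mentioned above gets clamped down to the 20g starting cap.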
Ortho-phenylphenol (OPP, or 2-phenylphenol) and its water-soluble salt, sodium ortho-phenylphenate (SOPP), are antimicrobial agents used as bacteriostats, fungicides, and sanitizers. Both have been used in agriculture to control fungal and bacterial growth on stored crops, such as fruits and vegetables; SOPP is applied topically to the crop. Both materials are considered suspect carcinogens by OSHA and occupational carcinogens by NIOSH. O-phenylphenol has been shown to cause cancer in studies with lab animals, and although it is Ames test-negative, it binds tubulin and causes aneuploidy in budding yeast. Some products, such as TK60 One-Step Disinfectant, are marketed as not containing this suspected carcinogen.

Phenol

Phenol (also called carbolic acid) is a monosubstituted aromatic hydrocarbon with the molecular formula C6H5OH: a phenyl group (C6H5) bonded to a hydroxy group (OH). In its pure state it is a white, volatile crystalline solid that gives off a sweet, acrid, irritating smell detectable to most people at about 40 ppb in air and at about 1-8 ppm in water (ATSDR, 1998). Mildly acidic, it requires careful handling because it can cause chemical burns, and it can catch fire. Phenol is a basic feedstock for the production of phenolic resins, bisphenol A, caprolactam, chlorophenols and several alkylphenols and xylenols; it is used primarily in the production of phenolic resins and in the manufacture of nylon and other synthetic fibers (caprolactam and bisphenol A are intermediates in the manufacture of nylon and epoxy resins, respectively). Other uses include as a slimicide, as a disinfectant, and in medicinal products such as ear and nose drops, throat lozenges, and mouthwashes, though at the low concentrations used in lozenges phenol has little antimicrobial effect. Occupational exposure to phenol has been reported during its production and use, as well as in the use of phenolic resins in the wood products industry, and phenol also forms spontaneously following chlorination of water for disinfection and deodorization.

Phenol is probably the oldest known disinfectant. Joseph Lister introduced the concept of antiseptic surgery using phenol, then known as carbolic acid, in 1867, using it to control surgical infections in the operating room; reports of toxicity were not far behind. Phenol remained a healthcare disinfectant through much of the 20th century, but phenol itself (perhaps the oldest of the surgical antiseptics) is no longer used even as a disinfectant because of its corrosive effect on tissues, its toxicity when absorbed, and its suspected carcinogenic effect.

Definitions

Cleansers, antiseptics, and disinfectants are differentiated by their intended use and characteristic properties, not by their chemical content. A cleanser aids in the physical removal of foreign material and is not necessarily a germicide. An antiseptic is a biocide applied to living tissue, whereas a disinfectant is a chemical substance or compound used to inactivate or destroy microorganisms on inert surfaces. Disinfection is the process of eliminating most pathogenic microorganisms; it does not necessarily kill all microorganisms, especially resistant bacterial spores, and is less effective than sterilization, an extreme physical or chemical process that kills all types of life. Sterilization, defined as the process in which all living microorganisms, including bacterial spores, are killed, can be achieved by physical, chemical and physiochemical means; chemicals used as sterilizing agents are called chemisterilants. Disinfection is classified into low-level, intermediate-level or high-level: high-level disinfection kills all organisms except high levels of bacterial spores and is effected with a chemical germicide cleared for marketing as a sterilant by the FDA, while intermediate-level disinfection kills mycobacteria, most viruses, and bacteria with a registered chemical germicide. Together, the active stage and the inactive, spore-forming stage of bacteria are referred to as the life cycle of bacteria, and poisonous substances produced by some microorganisms are called toxins. Invasion of body tissues by disease-causing pathogenic bacteria can result in an infection; universal precautions for bloodborne pathogens are regulated by OSHA.

Phenolics

Phenolic disinfectants are phenol (carbolic acid) derivatives, typically used at 1-5% dilutions. The adverse actions of phenol are diminished by forming derivatives in which a functional group replaces a hydrogen atom in the aromatic ring; common examples include thymols, xylenol, o-phenylphenol (OPP) and triclosan, and among the most commonly used phenolic components of disinfectants are triclosan, hexachlorophene, bisphenols, and hexylresorcinol. Phenol homologs and phenolic compounds are the bases of a number of popular disinfectants, such as Lysol, and phenolic disinfectants derived from coal tar are widely used for various purposes in hospitals. Although phenolic compounds are commonly found in plants such as henna, they are used less often now because of their negative health effects, especially their carcinogenic properties.

Phenolic disinfectants are effective bactericides, fungicides, tuberculocides and virucides, active against a wide range of micro-organisms, especially gram-positive bacteria and enveloped viruses, including some fungi; they are only slowly effective against spores and are ineffective against spore-forming bacteria such as Clostridium difficile. Mechanistically, membrane-bound oxidases and dehydrogenases are inactivated by concentrations of phenol that are rapidly bactericidal for microbes. EPA-registered phenolic disinfectants are used to disinfect surface areas and non-critical medical devices; phenolics are not FDA-cleared as high-level disinfectants. Examples include Vesphene IIIse Phenolic Disinfectant Concentrate, used at 1:128 v:v with a 10-minute contact time; LopHene, a low-pH (acidic) concentrated phenolic germicidal cleaner for hard surfaces in labs and production areas that, diluted at half an ounce per gallon of water (1:256), serves as a biological decontamination and disinfection product; Champion SprayON Phenol Disinfectant, which precleans or decontaminates critical or semi-critical medical devices prior to sterilization or high-level disinfection; and Wex-cide, ProSpray and Birex (also a cleaner and deodorizer), which are germicidal, fungicidal, virucidal, and tuberculocidal in 10 minutes at 20°C. Product labels claim kills of viruses such as Influenza A (ATCC VR-95), Influenza B (H2N2), and Herpes Simplex I and II on inanimate surfaces, and in some cases of a human coronavirus similar to SARS-CoV-2 (COVID-19); such products are marketed as exceeding the CDC's recommendations for cleaning and disinfection in healthcare facilities. By contrast, wipes without an efficacy claim against Mycobacterium bovis, such as Sani-Cloth Plus (EPA reg. no. 9480-6) and Sani-Cloth HB (EPA reg. no. 61178-4-9480), are classified as low-level disinfectants.

Phenolics have significant drawbacks. They cause metals to rust, damage plastic and rubber, and are known carcinogens. They are more difficult to rinse from equipment than other disinfectants, resulting in exposures long after disinfection, and the residue left on non-critical items can cause hazardous injury to tissue or mucous membranes. Phenol-based products cannot be used in the proximity of neonatal areas, particularly isolettes or other infant contact surfaces. Hazards such as nerve demyelination and skin contact dermatitis require that personnel using phenolic disinfectants be provided with appropriate protective clothing and equipment, and these products should be used only with proper ventilation control. One safety data sheet for an aerosol phenolic disinfectant lists the following composition and exposure limits (STEL and ceiling limits: not established for any ingredient):

| Ingredient | % by weight | CAS | TWA (mg/m³) | Carcinogen |
| --- | --- | --- | --- | --- |
| Ethanol | 64.00 | 64-17-5 | 1900 | No |
| Propane | 05-10 | 74-98-6 | 1800 | No |
| Isobutane | 05-10 | 75-28-5 | 1900 (NIOSH) | No |
| Ortho-phenylphenol | 0.051 | 90-43-7 | 1 (DOW) | Yes |

Toxicity and Carcinogenicity

Is the domestic phenol disinfectant carcinogenic? Phenol is dangerous when ingested or even when exposed to bare skin, and many lethalities from phenol poisoning are on record. Typically, death and severe toxicity result from phenol's effects on the central nervous system, heart, blood vessels, lungs and kidneys; other symptoms of exposure include shock and delirium. Phenol is a mutagen: it may cause genetic changes, and there is limited evidence that it may damage the developing fetus in animals. Its carcinogenicity is less settled: while phenol has been tested, the ACGIH classifies it as not classifiable as a human carcinogen, and experiments on mice and rats with various doses showed no signs of tumors; one case report, however, lists phenol as a co-carcinogen (Shukla), and many scientists believe there is no safe level of exposure to a carcinogen. Cresol, a phenol derivative used as a disinfectant, may cause gastrointestinal corrosive injury and central nervous system, cardiovascular, renal, and hepatic injury following intoxication by the oral, dermal or inhalation routes; the EPA has classified o-cresol, m-cresol, and p-cresol as Group C, possible human carcinogens. Animal case reports underline the risk: cats exposed to a pine-oil product (Pine-Sol) showed unresponsive pupils and extreme ataxia prior to death, with pathologic changes of severe acute centrilobular hepatic necrosis and renal cortical necrosis, and one series (Rousseau) describes thirty-nine cats and nine dogs with a history of exposure to "natural" essential-oil flea preventatives. Disinfectant-linked poisonings in humans also rose amid COVID-19, with reports of a woman overcome by toxic fumes from her kitchen sink and a toddler requiring treatment (HealthDay News, April 21, 2020).

Chlorophenols and Environmental Effects

The chlorinated phenols are a group of 19 isomers composed of phenol with substituted chlorines. These chemicals are readily soluble in organic solvents but only slightly soluble in water, except for the chlorophenate salts; 2,4,6-trichlorophenol is a probable human carcinogen. Phenolic compounds are profoundly toxic to humans, animals, and aquatic life, and can form carcinogenic chlorophenols in the presence of chlorine. They are considered priority water pollutants by the EPA in the United States and the NPRI in Canada, and many jurisdictions impose strict discharge limits for phenols, typically <0.5 mg/L. In cells, phenolic compounds are believed to undergo oxidative metabolism when in contact with peroxidase enzymes or transition metals, which leads to phenoxyl radicals; due to the ambident nature of phenoxyl radicals, both C- and O-centered reactions can occur. Studies of chlorine disinfection provide new insights into the formation of reactive and toxic electrophiles, demonstrating the importance of phenoxy radicals, produced by one-electron transfer reactions initiated by chlorine, in the production of dicarbonyl ring cleavage products. The factors that determine the type and amount of disinfection by-products formed during water treatment include (1) the presence of organic and inorganic matter in the source water, which is subject to daily as well as seasonal variation in concentration, (2) the disinfecting chemicals used, and (3) the contact time. A further mixing hazard is the production of the carcinogen bis-chloromethyl ether when hypochlorite solutions come in contact with formaldehyde, and the most notorious of the N-nitrosamine by-products, NDMA, is a known carcinogen. The myriad effects of disinfectant usage on greenhouse gas emissions and ecosystem health mean that more environmentally friendly disinfectants should be produced; by one estimate, producing a kilogram of phenolic disinfectant costs about $4.

Formaldehyde and Other Aldehydes

Formaldehyde (formalin) has good disinfectant properties against vegetative bacteria, spores and viruses. Formaldehyde gas, a dangerous irritant with a pungent smell, is used for decontamination and disinfection of enclosed volumes such as safety cabinets and rooms (a preparation equivalent to 4% formaldehyde is an effective disinfectant), and formaldehyde as 5% formalin in water may be used as a liquid disinfectant; it is not, however, recommended for daily disinfection. Formaldehyde is a suspected carcinogen and should be handled in the workplace as a potential carcinogen, with an employee exposure standard that limits the 8-hour time-weighted average exposure to a concentration of 0.75 ppm. The EPA recommends the use of "exterior-grade" pressed-wood products to limit formaldehyde exposure in the home; these products emit less formaldehyde because they contain phenol resins, not urea resins. (Pressed-wood products include plywood, paneling, particleboard, and fiberboard, and are not the same as pressure-treated wood products, which contain chemical preservatives.) More broadly, aldehydes are sensitizing agents, irritating to the skin and respiratory tract; they are toxic and must be collected as chemical waste.

Alternatives

A safe and useful class of disinfectants, commonly called quats, is the quaternary ammonium compounds. Another alternative is hypochlorous acid in the form of superoxidized water: the product generated has a pH of 5.0-6.5 and an oxidation-reduction potential (redox) of >950 mV, and although it is intended to be generated fresh at the point of use, it was effective within 5 minutes when tested under clean conditions even at 48 hours old. Ready-to-use hypochlorous acid products for hard, nonporous surfaces in healthcare, institutional and residential settings list contact times as short as 30 seconds. The CDC recommends that 5.25% sodium hypochlorite (household bleach) diluted to a concentration of 0.05% can be used for the decontamination of a blood spill, though wipes and disinfectants containing chlorine can damage surfaces such as metal and wood, for example by discoloration. SNiPER, marketed as having broader efficacy than traditional alcohol- and phenol-based disinfectants, claims to break down biofilm, exposing the "protected" germs beneath and continuing to kill until bacteria are eliminated, such that one application can take badly contaminated surfaces to "food safe." For instruments, high-level options such as Wavicide-01, a high-level instrument disinfectant with a two-year shelf life requiring no activator, are available. At the same time, resistance and ingredient concerns are steering buyers away from some chemistries: triclosan, a common disinfectant used in hand and oral hygiene applications, has been shown to produce bacterial resistance upon repeated exposure, and some purchasing policies require avoidance of active ingredients in the chemical class of nonyl phenol ethoxylates. Alternative disinfectants and sanitizers registered with the U.S. Environmental Protection Agency (EPA) are available for specific uses.

Safe Use and Waste Handling

Environmental Health & Safety guidance on disinfectants and sterilization methods is intended for researchers and laboratory personnel, to assist them in the judicious selection and proper use of specific disinfectants and sterilization methods; in the laboratory setting, chemical disinfection is the most common method employed. Disinfection usually occurs in two steps: a thorough mechanical cleaning followed by application of disinfectant. Potentially contaminated materials, such as manure, bedding, straw, and feedstuffs, should be removed and disposed of, and the surface then thoroughly washed using detergents. When carcinogens and/or radioactive materials are also present, waste should be chemically disinfected prior to handling as a chemical and/or radioactive waste. Crystal violet is also a suspected carcinogen, and solutions formed from the leuco crystal violet method should likewise be treated as toxic waste.
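The dilution figures quoted above, Vesphene at 1:128, LopHene at half an ounce per gallon (1:256), and bleach taken from 5.25% down to 0.05%, are simple ratio arithmetic. The Python sketch below illustrates those conversions; the function names are hypothetical, and the only constant assumed is 128 US fluid ounces per gallon.

```python
# A minimal sketch of disinfectant dilution arithmetic.
# Function names are illustrative, not from any product label.

FL_OZ_PER_GALLON = 128  # US fluid ounces in one gallon

def concentrate_oz_per_gallon(ratio: int) -> float:
    """Ounces of concentrate per gallon of water for a 1:N dilution."""
    return FL_OZ_PER_GALLON / ratio

def ratio_from_dose(oz_per_gallon: float) -> int:
    """Express a dose given in oz/gallon as a 1:N dilution ratio."""
    return round(FL_OZ_PER_GALLON / oz_per_gallon)

def dilution_factor(stock_pct: float, target_pct: float) -> float:
    """Fold-dilution needed to bring a stock down to a target concentration."""
    return stock_pct / target_pct

print(concentrate_oz_per_gallon(128))  # 1.0 oz/gal for Vesphene's 1:128
print(ratio_from_dose(0.5))            # 256: half an ounce per gallon is 1:256 (LopHene)
print(dilution_factor(5.25, 0.05))     # 105.0: 5.25% bleach to 0.05% is roughly 1:100
```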
Mutations in epigenetic remodelers such as ASXL1 (additional sex combs 1) have been identified in acute myeloid leukemia, chronic myelomonocytic leukemia and myelodysplastic syndrome and are associated with poor overall survival.1 Some ASXL1 mutations resulting in a loss of protein expression have been shown to contribute to myeloid transformation through loss of H3K27me3 expression at the HOX gene loci.2 On the other hand, some ASXL1 mutations result in the expression of a truncated protein, which can bind and activate BRCA-1 associated protein 1 (BAP1), a deubiquitinase enzyme, to form the polycomb repressive deubiquitinase complex (PR-DUB). A hyperactive PR-DUB derepresses target genes by deubiquitinating H2AK119, leading to impaired hematopoietic stem cell differentiation and leukemogenesis.3-5 Conversely, reduced BAP1 activity is sufficient to halt the leukemogenesis driven by truncated ASXL1 protein,6 highlighting the importance of a balanced ASXL1-BAP1 axis in normal hematopoiesis. Recently, the ASXL1 gene locus was shown to undergo alternative splicing to produce circular RNA (circRNA) in addition to the linear protein-coding transcripts.7 Covalently closed circRNA function as competing endogenous RNA and microRNA sponges, protein sponges or decoys, and regulators of cell proliferation, splicing and parental gene expression.8 The majority of circRNA appear to be more often downregulated in tumor tissue compared to normal tissue,9 with evidence demonstrating the role of circRNA in different hallmarks of cancer, including cancer initiation and progression, induction of angiogenesis, invasion, and metastasis.10 Moreover, circRNA are more stable, more abundant, and better conserved than linear RNA. In addition, circRNA can be detected in extracellular vesicles, exosomes and blood plasma, thereby highlighting their potential as non-invasive biomarkers.11 In a pioneering study performed by Salzman et al., several circRNA were identified in primary human leukocytes and other blood cells as well as in samples from patients with acute lymphoblastic leukemia.12 More recently, fusion circRNA arising from chromosomal translocations in leukemia, such as MLL-AF9 and PML-RARA, have been shown to contribute to cellular transformation and tumorigenesis in vitro and in in vivo mouse models.13 In the current study, we sought to identify the functions of ASXL1 circRNA, with the idea of developing new therapeutic targets in acute myeloid leukemia. By deep RNA sequencing, we identified two putative circRNA isoforms from the ASXL1 gene locus in the THP-1 leukemic cell line (Online Supplementary Figure S1A, B). These isoforms could be detected by polymerase chain reaction (PCR) using divergent primers on exon 3, followed by visualization on an agarose gel. The amplicons were confirmed to be circRNA after RNase R treatment, which degrades ASXL1 linear RNA while enriching the circRNA isoforms (Figure 1A). Similarly, both circASXL1-1 and -2 were also found to be expressed in a number of chronic myeloid leukemia (CML) cell lines by quantitative PCR and enriched after RNase R treatment (Figure 1B). These circRNA isoforms were also found to be expressed in a variety of leukemic cell lines and HEK293 cells (Online Supplementary Figure S1C). The expression of circASXL1-1, -2 and linear ASXL1 mRNA was analyzed in samples from healthy controls and patients with acute myeloid leukemia (AML) by digital droplet PCR (ddPCR).
However, no significant change in expression was detected in either the circRNA or linear ASXL1 RNA isoforms in the healthy controls as compared to the AML samples analyzed (Online Supplementary Figure S1D). Isolation of nuclear and cytoplasmic fractions followed by ddPCR analysis demonstrated that circASXL1-1 and circASXL1-2 were localized in the cytoplasm, similarly to linear ASXL1 mRNA (Online Supplementary Figure S1E), with a relative abundance ratio of 100:10:1 of linear ASXL1 to circASXL1-1 and circASXL1-2, respectively (Online Supplementary Figure S1F). Given the relatively low abundance of circASXL1-2, further loss- and gain-of-function analyses were focused on circASXL1-1. Specific depletion of circASXL1-1 was achieved using an antisense oligonucleotide (ASO) against the circASXL1-1 backsplice junction (Figure 1C). As expected, use of the ASO did not affect linear ASXL1 mRNA levels as compared to scrambled ASO (Figure 1D). To investigate whether circASXL1-1 had a function in regulating PR-DUB or Polycomb repressive complex 2 (PRC2) activity, depletion of circASXL1-1 was performed using the ASO, followed by western blotting to check H2AK119 ubiquitination and H3K27me3 levels. Compared to scrambled-treated cells, depletion of circASXL1-1 led to a decrease in H2AK119 ubiquitination (Figure 1E). On the other hand, H3K27me3 did not change significantly as compared to control (Online Supplementary Figure S2A), suggesting that circASXL1-1 affects BAP1 activity but not PRC2 activity in THP-1 leukemic cells. To rule out any off-target activity of the ASO and for future mechanistic studies, stable HEK293 cell lines with doxycycline-inducible depletion of circASXL1-1 were generated. Upon induction of short hairpin (sh)RNA expression with doxycycline for 48 h, a significant depletion of circASXL1-1 was observed in shcircASXL1-1 + doxycycline cells as compared to shControl + doxycycline cells, with no change in linear ASXL1 mRNA (Figure 1F). ASXL1 protein level also did not change upon depletion of circASXL1-1 with doxycycline (Online Supplementary Figure S2B). Total H2A was immunoprecipitated from these cells, and an analysis of the H2AK119ub mark by western blotting indicated a decrease in H2AK119ub after doxycycline-induced stable depletion of circASXL1-1 in HEK293 cells as compared to shControl cells (Figure 1G). These data suggest that circASXL1-1 regulates BAP1-mediated deubiquitinase activity. To elucidate the mechanism of circASXL1-1-induced modulation of BAP1 activity, either wild-type BAP1, a catalytically inactive BAP1 mutant (C91A) or empty vector constructs (CMV-control) were transfected into doxycycline-inducible shcircASXL1-1 HEK293 cells, followed by western blotting to determine H2AK119ub levels. In the absence of doxycycline, H2AK119ub levels remained unchanged both in the shControl and the shcircASXL1-1 cells across all groups (Figure 2A, lanes 1-6). However, when circASXL1-1 was depleted after doxycycline induction for 48 h, a reduction in global H2AK119ub levels was observed in shcircASXL1-1 cells overexpressing BAP1 as compared to shControl cells (Figure 2A, lanes 7-12). Moreover, depletion of H2AK119ub levels was dependent on the catalytic activity of BAP1, since the expression of the C91A mutant did not affect H2AK119ub levels in circASXL1-1-depleted cells (Figure 2A, compare lanes 11-12). To further ascertain the modulation of BAP1 activity by circASXL1-1, we carried out an in vitro deubiquitinase assay.
For this, FLAG-BAP1 was purified from lysates prepared from doxycycline-induced shControl or shcircASXL1-1-expressing cells, followed by incubation with recombinant H2A ubiquityl mononucleosomes (H2AK118 and H2AK119) (Figure 2B). In the absence of doxycycline, when the shRNA expression was switched off, there was no appreciable difference in the H2AK119ub levels between shControl and shcircASXL1-1 groups (Figure 2C, lanes 1-7). However, BAP1 immunoprecipitated from circASXL1-1-depleted HEK293 cells showed more activity than that from shControl cells, as observed from a decrease in H2AK119ub when probed with total H2A antibody (Figure 2C, compare lanes 10 and 13), and this was specific to BAP1 catalytic activity, as the C91A mutant did not show any activity when compared to control lanes (Figure 2C, compare lanes 11 and 14), suggesting that circASXL1-1 regulates BAP1-mediated deubiquitinase activity. Since circASXL1-1 regulates BAP1 activity, we hypothesized that circASXL1-1 directly binds to BAP1 and inhibits its function. To test this hypothesis, we performed RNA immunoprecipitation of endogenous BAP1 from HEK293 cells expressing either a scrambled antisense oligo or an oligo designed against the circASXL1-1 backsplice junction (ASO), followed by RNA isolation to identify the RNA binding to BAP1. circASXL1-1 was found to be enriched in the BAP1 fraction as compared to the IgG control (Figure 2D). Moreover, ASO-mediated depletion of circASXL1-1 abolished the enrichment, with levels of circASXL1-1 comparable to the IgG fraction. Furthermore, HEK293 cells expressing either shcircASXL1-1 or shControl were transfected with control-FLAG (EV) or FLAG-BAP1, followed by immunoprecipitation using FLAG beads. circASXL1-1 was found to be enriched in the shControl groups (both with and without doxycycline) and in shcircASXL1-1 cells without doxycycline as compared to EV. Moreover, the level of circASXL1-1 was comparable to that in the EV group after circASXL1-1 was depleted using doxycycline treatment for 48 h, thereby showing the specificity of the assay. As an additional control for the experiment, depletion of BAP1 using short interfering (si)RNA was performed in HEK293 cells (Online Supplementary Figure S3A). An enrichment of circASXL1-1 was observed in the cells treated with control siRNA using an antibody specific to BAP1 as compared to the IgG control. However, loss of BAP1 expression abolished circASXL1-1 enrichment, with circASXL1-1 levels being comparable to those in the IgG sample (Online Supplementary Figure S3B). The long non-coding RNA lncFOXO1 is known to bind to BAP1 and was used as a positive control14 (Online Supplementary Figure S3C, D). Taken together, these results suggest that circASXL1-1 affects BAP1 activity by binding to BAP1, possibly blocking and altering the activity of the PR-DUB complex. To investigate the role of circASXL1-1 in leukemia, we generated stable cell lines with depletion of circASXL1-1 in THP-1 cells. The efficiency of depletion was confirmed using quantitative PCR (Figure 3A). Moreover, the expression of linear ASXL1 mRNA was unaffected in shcircASXL1-1 cells as compared to shControl (Figure 3A). Consistent with ASO-mediated depletion of circASXL1-1 in THP-1 cells (Figure 1E), western blot analysis indicated that there was a global reduction of histone H2AK119 ubiquitination in shcircASXL1-1 cells as compared to shControl (Figure 3B). Cell growth assays performed over a period of 4 days demonstrated that shcircASXL1-1 cells grew more slowly than shControl cells (Figure 3C).
On examining the morphology of the shcircASXL1-1 cells in culture, we observed that shcircASXL1-1 cells tended to stick to the Petri dishes and displayed signs of differentiation (Figure 3D, panel ii; arrows point to adherent, differentiated cells). These features were not observed in shControl cells, which displayed the morphology of suspension cells growing in clumps (Figure 3D, panel i). To investigate whether shcircASXL1-1 cells were indeed undergoing spontaneous differentiation, we performed fluorescence-activated cell sorting (FACS) using CD11b as a marker of the differentiation of THP-1 monocytes into macrophages. Our results showed that about 30% of shcircASXL1-1 cells stained positive for CD11b in the absence of any differentiating agent, indicating that depletion of circASXL1-1 led to spontaneous differentiation of THP-1 cells (Figure 3E, F). In addition, we also observed downregulation of the myeloid-specific transcription factor IRF8 in shcircASXL1-1 cells as compared to the levels in shControl (Figure 3A). These data suggest that circASXL1-1 could affect the differentiation of THP-1 monocytes by regulating the expression of key genes involved in macrophage differentiation; however, further experiments are needed to delineate the transcriptional program and the possible mechanism of action. Since the PR-DUB complex has been shown to regulate hematopoietic stem cell differentiation, we sought to investigate whether modulation of circASXL1-1 levels affected this differentiation. To this end, we depleted circASXL1-1 using pLKO-GFP lentiviruses in CD34+ hematopoietic progenitors isolated from the bone marrow of healthy individuals. CD34+ hematopoietic stem cells were allowed to differentiate into granulocytes and monocytes for 14 days, followed by FACS analysis to check for cell surface markers on the differentiated cells (Figure 3G). Depletion of circASXL1-1 led to an increase in both CD13/CD14 double-positive mature monocytes (Figure 3H, I) and CD13/CD15 double-positive granulocytes (Figure 3J, K) as compared to shControl cells, suggesting that circASXL1-1 influences the differentiation of hematopoietic stem cells towards the myeloid lineage. Colony-forming assays showed an increase in the number of colonies (Figure 3M) without a significant difference in the morphology of the granulocytic-macrophage colonies obtained from CD34+ cells transduced with shControl virus as compared to shcircASXL1-1 (Figure 3L, i-ii). In conclusion, this study provides a new mechanism for the regulation of H2AK119ub levels: via interaction of circASXL1-1 and BAP1. Our results indicate that modulation of circASXL1-1 expression affects BAP1 activity but does not influence PRC2 function. Furthermore, circASXL1-1 binds to BAP1 to regulate its catalytic activity. Our data demonstrate that circASXL1-1 influences the myeloid differentiation of human CD34+ progenitors, although the mechanisms underlying the effect of circASXL1-1 on hematopoietic stem cell function and differentiation warrant further investigation. Interestingly, the circASXL1-1 sequence includes an initiation codon along with a Kozak sequence, raising the possibility of circASXL1-1 being translated into a peptide. A "circASXL1-1 peptide" could potentially serve as a decoy and compete with ASXL1 protein in binding to BAP1, thus regulating the function of the PR-DUB complex. Regulating BAP1 activity via circASXL1-1 or a "circASXL1-1 peptide" could be a promising therapeutic option, especially in myeloid malignancies with ASXL1 mutations.
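Backsplice junctions are what make circRNA detectable in sequencing data: a read that spans the point where the end of exon 3 joins back to the start of exon 3 cannot derive from the linear transcript. The sketch below is not the authors' pipeline (real analyses use splice-aware aligners and statistical validation of junctions); it only illustrates the junction-counting idea, and the two 8-nt exon sequences are made-up placeholders, not the real ASXL1 sequence.

```python
# Illustrative sketch only: counting reads that span a circRNA backsplice
# junction. The exon sequences below are hypothetical placeholders.

# In a backsplice, the END of the exon is joined to its own START, so the
# junction sequence is exon3_end + exon3_start.
EXON3_END = "ACGTTGCA"    # hypothetical last 8 nt of ASXL1 exon 3
EXON3_START = "GGATCCAA"  # hypothetical first 8 nt of ASXL1 exon 3
BACKSPLICE_JUNCTION = EXON3_END + EXON3_START

def count_junction_reads(reads):
    """Count reads containing the backsplice junction on either strand."""
    comp = str.maketrans("ACGT", "TGCA")
    rev_comp = BACKSPLICE_JUNCTION.translate(comp)[::-1]
    return sum(1 for r in reads if BACKSPLICE_JUNCTION in r or rev_comp in r)

reads = [
    "TTACGTTGCAGGATCCAATT",  # spans the junction -> counted
    "ACGTTGCATTTTGGATCCAA",  # both halves present but not joined -> not counted
]
print(count_junction_reads(reads))  # 1
```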
1. Gelsi-Boyer V, Brecqueville M, Devillier R, Murati A, Mozziconacci MJ, Birnbaum D. Mutations in ASXL1 are associated with poor prognosis across the spectrum of malignant myeloid diseases. J Hematol Oncol. 2012;5:12. doi:10.1186/1756-8722-5-12
2. Abdel-Wahab O, Adli M, LaFave LM. ASXL1 mutations promote myeloid transformation through loss of PRC2-mediated gene repression. Cancer Cell. 2012;22(2):180-193. doi:10.1016/j.ccr.2012.06.032
3. Balasubramani A, Larjo A, Bassein JA. Cancer-associated ASXL1 mutations may act as gain-of-function mutations of the ASXL1-BAP1 complex. Nat Commun. 2015;6:7307. doi:10.1038/ncomms8307
4. Asada S, Goyama S, Inoue D. Mutant ASXL1 cooperates with BAP1 to promote myeloid leukaemogenesis. Nat Commun. 2018;9(1):2733. doi:10.1038/s41467-018-05085-9
5. Nagase R, Inoue D, Pastore A. Expression of mutant Asxl1 perturbs hematopoiesis and promotes susceptibility to leukemic transformation. J Exp Med. 2018;215(6):1729-1747. doi:10.1084/jem.20171151
6. Guo Y, Yang H, Chen S. Reduced BAP1 activity prevents ASXL1 truncation-driven myeloid malignancy in vivo. Leukemia. 2018;32(8):1834-1837.
7. Koh W, Gonzalez V, Natarajan S, Carter R, Brown PO, Gawad C. Dynamic ASXL1 exon skipping and alternative circular splicing in single human cells. PLoS One. 2016;11(10):e0164085.
8. Wilusz JE. A 360° view of circular RNAs: from biogenesis to functions. Wiley Interdiscip Rev RNA. 2018;9(4):e1478.
9. Bachmayr-Heyda A, Reiner AT, Auer K. Correlation of circular RNA abundance with proliferation - exemplified with colorectal and ovarian cancer, idiopathic lung fibrosis, and normal human tissues. Sci Rep. 2015;5:8057. doi:10.1038/srep08057
10. Kristensen LS, Hansen TB, Veno MT, Kjems J. Circular RNAs in cancer: opportunities and challenges in the field. Oncogene. 2018;37(5):555-565. doi:10.1038/onc.2017.361
11. Li X, Yang L, Chen LL. The biogenesis, functions, and challenges of circular RNAs. Mol Cell. 2018;71(3):428-442. doi:10.1016/j.molcel.2018.06.034
12. Salzman J, Gawad C, Wang PL, Lacayo N, Brown PO. Circular RNAs are the predominant transcript isoform from hundreds of human genes in diverse cell types. PLoS One. 2012;7(2):e30733. doi:10.1371/journal.pone.0030733
13. Guarnerio J, Bezzi M, Jeong JC. Oncogenic role of fusion-circRNAs derived from cancer-associated chromosomal translocations. Cell. 2016;165(2):289-302. doi:10.1016/j.cell.2016.03.020
14. Xi J, Feng J, Li Q, Li X, Zeng S. The long non-coding RNA lncFOXO1 suppresses growth of human breast cancer cells through association with BAP1. Int J Oncol. 2017;50(5):1663-1670.
15. Ventii KH, Devi NS, Friedrich KL. BRCA1-associated protein-1 is a tumor suppressor that requires deubiquitinating activity and nuclear localization. Cancer Res. 2008;68(17):6953-6962. doi:10.1158/0008-5472.CAN-08-0365
Author: Isabel Wilkerson

While this book seeks to consider the effects on everyone caught in the hierarchy, it devotes significant attention to the poles of the American caste system: those at the top, European Americans, who have been its primary beneficiaries, and those at the bottom, African-Americans, against whom the caste system has directed its full powers of dehumanization. The American caste system began in the years after the arrival of the first Africans to Virginia colony in the summer of 1619, as the colony sought to refine the distinctions of who could be enslaved for life and who could not. Over time, colonial laws granted English and Irish indentured servants greater privileges than the Africans who worked alongside them, and the Europeans were fused into a new identity, that of being categorized as white, the polar opposite of black. The historian Kenneth M. Stampp called this assigning of race a “caste system, which divided those whose appearance enabled them to claim pure Caucasian ancestry from those whose appearance indicated that some or all of their forebears were Negroes.” Members of the Caucasian caste, as he called it, “believed in ‘white supremacy,’ and maintained a high degree of caste solidarity to secure it.”

Thus, throughout this book you will see many references to the American South, the birthplace of this caste system. The South is where the majority of the subordinate caste was consigned to live for most of the country’s history and for that reason is where the caste system was formalized and most brutally enforced. It was there that the tenets of intercaste relations first took hold before spreading to the rest of the country, leading the writer Alexis de Tocqueville to observe in 1831: “The prejudice of race appears to be stronger in the states that have abolished slavery than in those where it still exists; and nowhere is it so intolerant as in those states where servitude has never been known.”

To recalibrate how we see ourselves, I use language that may be more commonly associated with people in other cultures, to suggest a new way of understanding our hierarchy: dominant caste, ruling majority, favored caste, instead of, or in addition to, white; subordinate caste, lowest caste, bottom caste, disfavored caste, historically stigmatized, instead of, or in addition to, African-American; original or conquered peoples, instead of, or in addition to, Native Americans; marginalized people, instead of, or in addition to, minorities, women of any race, or minorities of any kind. Some of this may sound like a foreign language. In some ways it is and is meant to be. Because, to truly understand America, we must open our eyes to the hidden work of a caste system that has gone unnamed but prevails among us to our collective detriment, to see that we have more in common with each other and with cultures that we might otherwise dismiss, and to summon the courage to consider that therein may lie the answers.

In embarking upon this work, I devoured books about caste in India and in the United States. Anything with the word caste in it lit up my neurons. I discovered kindred spirits from the past (sociologists, anthropologists, ethnographers, writers) whose work carried me through time and across generations. Many had labored against the tide, and I felt that I was carrying on a tradition and was not walking alone. In the midst of the research, word of my inquiries spread to some Indian scholars of caste based here in America.
They invited me to speak at an inaugural conference on caste and race at the University of Massachusetts in Amherst, the town where W.E.B. Du Bois was born and where his papers are kept. There, I told the audience that I had written a six-hundred-page book about the Jim Crow era in the American South, the time of naked white supremacy, but that the word racism did not appear anywhere in the narrative. I told them that, after spending fifteen years studying the topic and hearing the testimony of the survivors of the era, I realized that the term was insufficient. Caste was the more accurate term, and I set out for them the reasons why. They were both stunned and heartened. The plates of Indian food kindly set before me at the reception thereafter sat cold due to the press of questions and the sharing that went into the night.

At a closing ceremony that I had not been made aware of ahead of time, the hosts presented me with a bronze-colored bust of the patron saint of the low-born of India, Bhimrao Ambedkar, the Dalit leader who had written to Du Bois all those decades before. It felt like an initiation into a caste to which I had somehow always belonged. Over and over, they shared scenarios of what they had endured, and I responded in personal recognition, as if able even to anticipate some particular turn or outcome. To their astonishment, I began to be able to tell who was high-born and who was low-born among the Indian people present, not from what they looked like, as one might in the United States, but on the basis of the universal human response to hierarchy: in the case of an upper-caste person, an inescapable certitude in bearing, demeanor, behavior, a visible expectation of centrality.

After one session, I went up to a woman presenter whose caste I had ascertained from observing her interactions. I noticed that she had reflexively stood over the Dalit speaker and had taken it upon herself to explain what the Dalit woman had just said or meant, to take a position of authority as if by second nature, perhaps without realizing it. We chatted a bit, and then I said to her, “I believe you must be upper caste, are you not?” She looked crestfallen. “How did you know?” she said. “I try so hard.” We talked for what seemed an hour more, and I could see the effort it took to manage the unconscious signals of encoded superiority, the presence of mind necessary to counteract the programming of caste. I could see how hard it was even for someone committed to healing the caste divide, who was, as it turned out, married to a man from the subordinate caste and who was deeply invested in egalitarian ideals.

On the way home, I was snapped back to my own world when airport security flagged my suitcase for inspection. The TSA worker happened to be an African-American who looked to be in his early twenties. He strapped on latex gloves to begin his work. He dug through my suitcase and excavated a small box, unwrapped the folds of paper, and held in his palm the bust of Ambedkar that I had been given. “This is what came up in the X-ray,” he said. It was heavy like a paperweight. He turned it upside down and inspected it from all sides, his gaze lingering at the bottom of it. He seemed concerned that something might be inside. “I’ll have to swipe it,” he warned me. He came back after some time and declared it okay, and I could continue with it on my journey.
He looked at the bespectacled face with the receding hairline and steadfast expression, and seemed to wonder why I would be carrying what looked like a totem from another culture. “So who is this?” he asked. The name Ambedkar alone would not have registered; I had learned of him myself only the year before, and there was no time to explain the parallel caste system. So I blurted out what seemed to make the most sense. “Oh,” I said, “this is the Martin Luther King of India.” “Pretty cool,” he said, satisfied now, and seeming a little proud. He then wrapped Ambedkar back up as if he were King himself and set him back gently into the suitcase.

In the imagination of two late-twentieth-century filmmakers, an unseen force of artificial intelligence has overtaken the human species and managed to control humans in an alternate reality in which everything one sees, feels, hears, tastes, smells, touches is in actuality a program. There are programs within programs, and humans become not just programmed but are in danger of, and, in fact, well on their way to, becoming nothing more than programs. What is reality and what is a program morph into one. The interlocking program passes for life itself.

The great quest in the film series involves those humans who awaken to this realization as they search for a way to escape their entrapment. Those who accept their programming get to lead deadened, surface lives enslaved to a semblance of reality. They are captives, safe on the surface, as long as they are unaware of their captivity. Perhaps it is the unthinking acquiescence, the blindness to one’s imprisonment, that is the most effective way for human beings to remain captive. People who do not know that they are captive will not resist their bondage. But those who awaken to their captivity threaten the hum of the matrix. Any attempt to escape their imprisonment risks detection, signals a breach in the order, exposes the artifice of unreality that has been imposed upon human beings. The Matrix, the unseen master program fed by the survival instinct of an automated collective, does not react well to threats to its existence.

In a crucial moment, a man who has only recently awakened to the program in which he and his species are ensnared consults a wise woman, the Oracle, who, it appears, could guide him. He is uncertain and wary as he takes a seat next to her on a park bench that may or may not be real. She speaks in code and metaphor. A flock of birds alights on the pavement beyond them. “See those birds,” the Oracle says to him. “At some point a program was written to govern them.” She looks up and scans the horizon. “A program was written to watch over the trees and the wind, the sunrise and sunset. There are programs running all over the place.” Some of these programs go without notice, so perfectly attuned are they to their task, so deeply embedded in the drone of existence. “The ones doing their job,” she tells him, “doing what they were meant to do are invisible. You’d never even know they were here.”

So, too, with the caste system as it goes about its work in silence, the string of a puppet master unseen by those whose subconscious it directs, its instructions an intravenous drip to the mind: caste in the guise of normalcy, injustice looking just, atrocities looking unavoidable to keep the machinery humming, the matrix of caste a facsimile for life itself, its purpose the maintenance of the primacy of those hoarding and holding tight to power.
Day after day, the curtain rises on a stage of epic proportions, one that has been running for centuries. The actors wear the costumes of their predecessors and inhabit the roles assigned to them. The people in these roles are not the characters they play, but they have played the roles long enough to incorporate the roles into their very being, to merge the assignment with their inner selves and how they are seen in the world. The costumes were handed out at birth and can never be removed. The costumes cue everyone in the cast to the roles each character is to play and to each character’s place on the stage. Over the run of the show, the cast has grown accustomed to who plays which part. For generations, everyone has known who is center stage in the lead. Everyone knows who the hero is, who the supporting characters are, who is the sidekick good for laughs, and who is in shadow, the undifferentiated chorus with no lines to speak, no voice to sing, but necessary for the production to work.

The roles become sufficiently embedded into the identity of the players that the leading man or woman would not be expected so much as to know the names or take notice of the people in the back, and there would be no need for them to do so. Stay in the roles long enough, and everyone begins to believe that the roles are preordained, that each cast member is best suited by talent and temperament for their assigned role, and maybe for only that role, that they belong there and were meant to be cast as they are currently seen. The cast members become associated with their characters, typecast, locked into either inflated or disfavored assumptions. They become their characters.

As an actor, you are to move the way you are directed to move, speak the way your character is expected to speak. You are not yourself. You are not to be yourself. Stick to the script and to the part you are cast to play, and you will be rewarded. Veer from the script, and you will face the consequences. Veer from the script, and other cast members will step in to remind you where you went off-script. Do it often enough or at a critical moment and you may be fired, demoted, cast out, your character conveniently killed off in the plot.

The social pyramid known as a caste system is not identical to the cast in a play, though the similarity in the two words hints at a tantalizing intersection. When we are cast into roles, we are not ourselves. We are not supposed to be ourselves. We are performing based on our place in the production, not necessarily on who we are inside.

We are all players on a stage that was built long before our ancestors arrived in this land. We are the latest cast in a long-running drama that premiered on this soil in the early seventeenth century. It was in late August 1619, a year before the Pilgrims landed at Plymouth Rock, that a Dutch man-of-war set anchor at the mouth of the James River, at Point Comfort, in the wilderness of what is now known as Virginia. We know this only because of a haphazard line in a letter written by the early settler John Rolfe. This is the oldest surviving reference to Africans in the English colonies in America, people who looked different from the colonists and who would ultimately be assigned by law to the bottom of an emerging caste system. Rolfe mentions them as merchandise and not necessarily the merchandise the English settlers had been expecting.
The ship “brought not anything but 20 and odd Negroes,” Rolfe wrote, “which the Governor and Cape Marchant bought for victualles.” These Africans had been captured from a slave ship bound for the Spanish colonies but were sold farther north to the British. Historians do not agree on what their status was, whether they were bound in the short term for indentured servitude or relegated immediately to the status of lifetime enslavement, the condition that would befall most every human who looked like them arriving on these shores or born here for the next quarter millennium. The few surviving records from the time of their arrival show they “held at the outset a singularly debased status in the eyes of white Virginians,” wrote the historian Alden T. Vaughan. If not yet consigned formally to permanent enslavement, “black Virginians were at least well on their way to such a condition.”

In the decades to follow, colonial laws herded European workers and African workers into separate and unequal queues and set in motion the caste system that would become the cornerstone of the social, political, and economic system in America. This caste system would trigger the deadliest war on U.S. soil, lead to the ritual killings of thousands of subordinate-caste people in lynchings, and become the source of inequalities that becloud and destabilize the country to this day.

With the first rough attempts at a colonial census, conducted in Virginia in 1630, a hierarchy began to form. Few Africans were seen as significant enough to be listed in the census by name, as would be the case for the generations to follow, in contrast to the majority of European inhabitants, indentured or not. The Africans were not cited by age or arrival date as were the Europeans, information vital to setting the terms and time frame of indenture for Europeans, as it would have been for Africans, had they been in the same category, been seen as equal, or seen as needing to be accurately accounted for. Thus, before there was a United States of America, there was the caste system, born in colonial Virginia.

At first, religion, not race as we now know it, defined the status of people in the colonies. Christianity, as a proxy for being European, generally exempted European workers from lifetime enslavement. This initial distinction is what condemned, first, indigenous people, and, then, Africans, most of whom were not Christian upon arrival, to the lowest rung of an emerging hierarchy before the concept of race had congealed to justify their eventual and total debasement. The creation of a caste system was a process of testing the bounds of human categories, not the result of a single edict. It was a decades-long sharpening of lines whenever the colonists had a decision to make. When Africans began converting to Christianity, they posed a challenge to a religion-based hierarchy. Their efforts to claim full participation in the colonies were in direct opposition to the European hunger for the cheapest, most pliant labor to extract the most wealth from the New World.

The strengths of African workers became their undoing. British colonists in the West Indies, for example, saw Africans as “a civilized and relatively docile population,” who were “accustomed to discipline,” and who cooperated well on a given task. Africans demonstrated an immunity to European diseases, making them more viable to the colonists than were the indigenous people the Europeans had originally tried to enslave.
More pressingly, the colonies of the Chesapeake were faltering and needed manpower to cultivate tobacco. The colonies farther south were suited for sugarcane, rice, and cotton, crops with which the English had little experience, but that Africans had either cultivated in their native lands or were quick to master. “The colonists soon realized that without Africans and the skills that they brought, their enterprises would fail,” wrote the anthropologists Audrey and Brian Smedley. In the eyes of the European colonists, and to the Africans’ tragic disadvantage, they happened to bear an inadvertent birthmark over their entire bodies that should have been nothing more than a neutral variation in human appearance, but which made them stand out from the English and Irish indentured servants. The Europeans could and did escape from their masters and blend into the general white population that was hardening into a single caste. “The Gaelic insurrections caused the English to seek to replace this source of servile labor entirely with another source, African slaves,” the Smedleys wrote.

The colonists had been unable to enslave the native population on its own turf and believed themselves to have solved the labor problem with the Africans they imported. With little further use for the original inhabitants, the colonists began to exile them from their ancestral lands and from the emerging caste system. This left Africans firmly at the bottom, and, by the late 1600s, Africans were not merely slaves; they were hostages subjected to unspeakable tortures that their captors documented without remorse. And there was no one on the planet willing to pay a ransom for their rescue.

Americans are loath to talk about enslavement in part because what little we know about it goes against our perception of our country as a just and enlightened nation, a beacon of democracy for the world. Slavery is commonly dismissed as a “sad, dark chapter” in the country’s history. It is as if the greater the distance we can create between slavery and ourselves, the better to stave off the guilt or shame it induces. But in the same way that individuals cannot move forward, become whole and healthy, unless they examine the domestic violence they witnessed as children or the alcoholism that runs in their family, the country cannot become whole until it confronts what was not a chapter in its history, but the basis of its economic and social order.

For a quarter millennium, slavery was a part of everyday life, a spectacle that public officials and European visitors to the slaving provinces could not help but comment on with curiosity and revulsion. In a speech in the House of Representatives, a nineteenth-century congressman from Ohio lamented that on “the beautiful avenue in front of the Capitol, members of Congress, during this session, have been compelled to turn aside from their path, to permit a coffle of slaves, males and females chained to each other by their necks to pass on their way to this national slave market.” The secretary of the U.S. Navy expressed horror at the sight of barefoot men and women locked together with the weight of an ox-chain in the beating sun, forced to walk the distance to damnation in a state farther south, and riding behind them, “a white man on horseback, carrying pistols in his belt, and who, as we passed him, had the impudence to look us in the face without blushing.” The Navy official, James K.
Paulding, said: “When they [the slaveholders] permit such flagrant and indecent outrages upon humanity as that I have described; when they sanction a villain, in thus marching half naked women and men, loaded with chains, without being charged with a crime but that of being from one section of the United States to another, hundreds of miles in the face of day, they disgrace themselves, and the country to which they belong.”

Slavery in this land was not merely an unfortunate thing that happened to black people. It was an American innovation, an American institution created by and for the benefit of the elites of the dominant caste and enforced by poorer members of the dominant caste who tied their lot to the caste system rather than to their consciences. It made lords of everyone in the dominant caste, as law and custom stated that “submission is required of the Slave, not to the will of the Master only, but to the will of all other White Persons.” It was not merely a torn thread in “an otherwise perfect cloth,” wrote the sociologist Stephen Steinberg. “It would be closer to say that slavery provided the fabric out of which the cloth was made.”

American slavery, which lasted from 1619 to 1865, was not the slavery of ancient Greece or the illicit sex slavery of today. The abhorrent slavery of today is unreservedly illegal, and any current-day victim who escapes, escapes to a world that recognizes her freedom and will work to punish her enslaver. American slavery, by contrast, was legal and sanctioned by the state and a web of enforcers. Any victim who managed to escape, escaped to a world that not only did not recognize her freedom but would return her to her captors for further unspeakable horrors as retribution. In American slavery, the victims, not the enslavers, were punished, subject to whatever atrocities the enslaver could devise as a lesson to others. What the colonists created was “an extreme form of slavery that had existed nowhere in the world,” wrote the legal historian Ariela J. Gross. “For the first time in history, one category of humanity was ruled out of the ‘human race’ and into a separate subgroup that was to remain enslaved for generations in perpetuity.”

The institution of slavery was, for a quarter millennium, the conversion of human beings into currency, into machines who existed solely for the profit of their owners, to be worked as long as the owners desired, who had no rights over their bodies or loved ones, who could be mortgaged, bred, won in a bet, given as wedding presents, bequeathed to heirs, sold away from spouses or children to cover an owner’s debt or to spite a rival or to settle an estate. They were regularly whipped, raped, and branded, subjected to any whim or distemper of the people who owned them. Some were castrated or endured other tortures too grisly for these pages, tortures that the Geneva Conventions would have banned as war crimes had the conventions applied to people of African descent on this soil. Before there was a United States of America, there was enslavement. Theirs was a living death passed down for twelve generations.

“The slave is doomed to toil, that others may reap the fruits” is how a letter writer identifying himself as Judge Ruffin testified to what he saw in the Deep South. “The slave is entirely subject to the will of his master,” wrote William Goodell, a minister who chronicled the institution of slavery in the 1830s. “What he chooses to inflict upon him, he must suffer. He must never lift a hand in self-defense.
He must utter no word of remonstrance. He has no protection and no redress,” fewer rights than the animals of the field. They were seen as “not capable of being injured,” Goodell wrote. “They may be punished at the discretion of their lord, or even put to death by his authority.” As a window into their exploitation, consider that in 1740, South Carolina, like other slaveholding states, finally decided to limit the workday of enslaved African-Americans to fifteen hours from March to September and to fourteen hours from September to March, double the normal workday for humans who actually get paid for their labor. In that same era, prisoners found guilty of actual crimes were kept to a maximum of ten hours per workday. Let no one say that African-Americans as a group have not worked for our country.
Carbon dioxide (CO2) is an important trace gas in Earth's atmosphere. It is an integral part of the carbon cycle, a biogeochemical cycle in which carbon is exchanged between the Earth's oceans, soil, rocks and the biosphere. Plants and other photoautotrophs use solar energy to produce carbohydrate from atmospheric carbon dioxide and water by photosynthesis. Almost all other organisms depend on carbohydrate derived from photosynthesis as their primary source of energy and carbon compounds. CO2 absorbs and emits infrared radiation at wavelengths of 4.26 μm (2,347 cm⁻¹, the asymmetric stretching vibrational mode) and 14.99 μm (667 cm⁻¹, the bending vibrational mode) and is consequently a greenhouse gas that plays a significant role in influencing Earth's surface temperature through the greenhouse effect.

Concentrations of CO2 in the atmosphere have ranged from as high as 4,000 parts per million (ppm, on a molar basis) during the Cambrian period, about 500 million years ago, to as low as 180 ppm during the Quaternary glaciation of the last two million years. Reconstructed temperature records for the last 420 million years indicate that atmospheric CO2 concentrations peaked at about 2,000 ppm during the Devonian period (roughly 400 million years ago) and again in the Triassic (220-200 million years ago). The global annual mean CO2 concentration has increased by 50% since the start of the Industrial Revolution, from 280 ppm during the 10,000 years up to the mid-18th century to 421 ppm as of May 2022. The present concentration is the highest for 14 million years. The increase has been attributed to human activity, particularly deforestation and the burning of fossil fuels. This increase of CO2 and other long-lived greenhouse gases in Earth's atmosphere has produced the current episode of global warming. Between 30% and 40% of the CO2 released by humans into the atmosphere dissolves into the oceans, wherein it forms carbonic acid and effects changes in the oceanic pH balance.

Carbon dioxide concentrations have shown several cycles of variation, from about 180 ppm during the deep glaciations of the Pleistocene to 280 ppm during the interglacial periods. Following the start of the Industrial Revolution, the atmospheric CO2 concentration increased to over 400 ppm and continues to increase, causing the phenomenon of global warming. As of May 2022, the average monthly level of CO2 in Earth's atmosphere reached 421 ppm. The daily average concentration of atmospheric CO2 at Mauna Loa Observatory first exceeded 400 ppm on 10 May 2013, although this concentration had already been reached in the Arctic in June 2012. Each part per million by volume of CO2 in the atmosphere represents approximately 2.13 gigatonnes of carbon, or 7.82 gigatonnes of CO2 (these conversions are scripted in the sketch below). As of 2018, CO2 constituted about 0.041% of the atmosphere by volume (equal to 410 ppm), which corresponds to approximately 3,210 gigatonnes of CO2, containing approximately 875 gigatonnes of carbon. The global mean CO2 concentration is currently rising at a rate of approximately 2 ppm/year and accelerating; the current growth rate at Mauna Loa is 2.50 ± 0.26 ppm/year (mean ± 2 standard deviations). There is also an annual fluctuation: the level drops by about 6 or 7 ppm (about 50 Gt) from May to September during the Northern Hemisphere's growing season, and then rises by about 8 or 9 ppm.
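The ppm-to-mass equivalences above lend themselves to a small script. The following is a minimal sketch using only the approximate conversion factors quoted in the text; the function names are ours:

```python
# Conversion factors quoted above (approximate):
#   1 ppm of atmospheric CO2 (molar basis) ~ 2.13 Gt of carbon
#   1 Gt of carbon ~ 44/12 Gt of CO2 (molecular vs. atomic mass)
GTC_PER_PPM = 2.13
CO2_PER_C = 44.0 / 12.0

def ppm_to_gtc(ppm: float) -> float:
    """Atmospheric CO2 concentration (ppm) -> gigatonnes of carbon."""
    return ppm * GTC_PER_PPM

def ppm_to_gtco2(ppm: float) -> float:
    """Atmospheric CO2 concentration (ppm) -> gigatonnes of CO2."""
    return ppm_to_gtc(ppm) * CO2_PER_C

if __name__ == "__main__":
    # Reproduce the 2018 figures quoted above: 410 ppm should give
    # roughly 875 GtC and roughly 3,210 Gt CO2.
    print(f"{ppm_to_gtc(410):.0f} GtC, {ppm_to_gtco2(410):.0f} Gt CO2")
```

Run as-is, this prints 873 GtC and 3202 Gt CO2, within rounding of the figures quoted above.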
The Northern Hemisphere dominates the annual cycle of CO2 concentration because it has much greater land area and plant biomass than the Southern Hemisphere. Concentrations reach a peak in May, as the Northern Hemisphere spring green-up begins, and decline to a minimum in October, near the end of the growing season.

Since global warming is attributed to increasing atmospheric concentrations of greenhouse gases such as CO2 and methane, scientists closely monitor atmospheric CO2 concentrations and their impact on the present-day biosphere. National Geographic wrote that the concentration of carbon dioxide in the atmosphere is this high "for the first time in 55 years of measurement--and probably more than 3 million years of Earth history." The current concentration may be the highest in the last 20 million years.

Carbon dioxide concentrations have varied widely over the Earth's 4.54 billion year history. Carbon dioxide is believed to have been present in Earth's first atmosphere, shortly after Earth's formation. The second atmosphere, consisting largely of nitrogen and carbon dioxide, was produced by outgassing from volcanism, supplemented by gases produced during the late heavy bombardment of Earth by huge asteroids. A major part of the carbon dioxide emissions were soon dissolved in water and incorporated in carbonate sediments. The production of free oxygen by cyanobacterial photosynthesis eventually led to the oxygen catastrophe that ended Earth's second atmosphere and brought about Earth's third atmosphere (the modern atmosphere) 2.4 billion years before the present. Carbon dioxide concentrations dropped from 4,000 ppm during the Cambrian period, about 500 million years ago, to as low as 180 ppm during the Quaternary glaciation of the last two million years.

On long timescales, the atmospheric CO2 concentration is determined by the balance among geochemical processes including organic carbon burial in sediments, silicate rock weathering, and volcanic degassing. The net effect of slight imbalances in the carbon cycle over tens to hundreds of millions of years has been to reduce atmospheric CO2. On a timescale of billions of years, this downward trend appears bound to continue indefinitely, as occasional massive historical releases of buried carbon due to volcanism will become less frequent (as the cooling of Earth's mantle and the progressive exhaustion of its internal radioactive heat proceed further). The rates of these processes are extremely slow; hence they are of no relevance to the atmospheric CO2 concentration over the next hundreds or thousands of years. On billion-year timescales, it is predicted that plant, and therefore animal, life on land will die off altogether, since by that time most of the remaining carbon in the atmosphere will be sequestered underground, and natural releases of CO2 by radioactivity-driven tectonic activity will have continued to slow down. The loss of plant life would also result in the eventual loss of oxygen. Some microbes are capable of photosynthesis at CO2 concentrations of a few parts per million, so the last life forms would probably disappear finally due to the rising temperatures and the loss of the atmosphere when the Sun becomes a red giant, some four billion years from now.

The most direct method for measuring atmospheric carbon dioxide concentrations for periods before instrumental sampling is to measure bubbles of air (fluid or gas inclusions) trapped in the Antarctic or Greenland ice sheets.
The most widely accepted of such studies come from a variety of Antarctic cores and indicate that atmospheric CO2 concentrations were about 260-280 ppmv immediately before industrial emissions began and did not vary much from this level during the preceding 10,000 years. The longest ice core record comes from East Antarctica, where ice has been sampled to an age of 800,000 years. During this time, the atmospheric carbon dioxide concentration has varied between 180 and 210 ppm during ice ages, increasing to 280-300 ppm during warmer interglacials. The beginning of human agriculture during the current Holocene epoch may have been strongly connected to the atmospheric CO2 increase after the last ice age ended, a fertilization effect raising plant biomass growth and reducing stomatal conductance requirements for CO2 intake, consequently reducing transpiration water losses and increasing water-use efficiency.

Various proxy measurements have been used to attempt to determine atmospheric carbon dioxide concentrations millions of years in the past. These include boron and carbon isotope ratios in certain types of marine sediments, and the number of stomata observed on fossil plant leaves. Phytane, a type of diterpenoid alkane, is a breakdown product of chlorophyll and is now used to estimate ancient CO2 levels. Phytane provides a continuous record of CO2 concentrations and can also bridge a break of over 500 million years in the CO2 record. There is evidence for high CO2 concentrations of over 3,000 ppm between 200 and 150 million years ago, and of over 6,000 ppm between 600 and 400 million years ago. In more recent times, the atmospheric CO2 concentration continued to fall after about 60 million years ago. About 34 million years ago, the time of the Eocene-Oligocene extinction event and when the Antarctic ice sheet started to take its current form, CO2 was about 760 ppm, and there is geochemical evidence that concentrations were less than 300 ppm by about 20 million years ago. A decreasing CO2 concentration, with a tipping point of 600 ppm, was the primary agent forcing Antarctic glaciation. Low CO2 concentrations may have been the stimulus that favored the evolution of C4 plants, which increased greatly in abundance between 7 and 5 million years ago.

Based on an analysis of fossil leaves, Wagner et al. argued that atmospheric CO2 concentrations during the last 7,000-10,000-year period were significantly higher than 300 ppm and contained substantial variations that may be correlated to climate variations. Others have disputed such claims, suggesting they are more likely to reflect calibration problems than actual changes in CO2. Relevant to this dispute is the observation that Greenland ice cores often report higher and more variable CO2 values than similar measurements in Antarctica. However, the groups responsible for such measurements (e.g. H.J. Smith et al.) believe the variations in Greenland cores result from in situ decomposition of calcium carbonate dust found in the ice. When dust concentrations in Greenland cores are low, as they nearly always are in Antarctic cores, the researchers report good agreement between measurements of Antarctic and Greenland CO2 concentrations.

Earth's natural greenhouse effect makes life as we know it possible, and carbon dioxide plays a significant role in providing for the relatively high temperature that the planet enjoys.
The greenhouse effect is a process by which thermal radiation from a planetary atmosphere warms the planet's surface beyond the temperature it would have in the absence of its atmosphere. Without the greenhouse effect, the Earth's average surface temperature would be about -18 °C (-0.4 °F), compared to Earth's actual average surface temperature of approximately 14 °C (57.2 °F). Carbon dioxide is believed to have played an important role in regulating Earth's temperature throughout its roughly 4.5-billion-year history. Scientists have found evidence of liquid water early in Earth's life, indicating a warm world even though the Sun's output is believed to have been only 70% of what it is today. Higher carbon dioxide concentrations in the early Earth's atmosphere might help explain this faint young Sun paradox. When Earth first formed, its atmosphere may have contained more greenhouse gases, and CO2 concentrations may have been higher, with an estimated partial pressure as large as 1,000 kPa (10 bar), because there was no bacterial photosynthesis to reduce the gas to carbon compounds and oxygen. Methane, a very active greenhouse gas that reacts with oxygen to produce CO2 and water vapor, may have been more prevalent as well, with a mixing ratio of 10⁻⁴ (100 parts per million by volume).

Though water is responsible for most (about 36-70%) of the total greenhouse effect, the role of water vapor as a greenhouse gas depends on temperature. On Earth, carbon dioxide is the most relevant directly anthropogenically influenced greenhouse gas. Carbon dioxide is often mentioned in the context of its increased influence as a greenhouse gas since the pre-industrial (1750) era. In the IPCC Fifth Assessment Report, the increase in CO2 was estimated to be responsible for 1.82 W m⁻² of the 2.63 W m⁻² change in radiative forcing on Earth (about 70%); a commonly used closed-form expression for this forcing is given after this passage. The concept of atmospheric CO2 increasing ground temperature was first published by Svante Arrhenius in 1896. The increased radiative forcing due to increased CO2 in the Earth's atmosphere is based on the physical properties of CO2 and the non-saturated absorption windows in which CO2 absorbs outgoing long-wave energy. The increased forcing drives further changes in Earth's energy balance and, over the longer term, in Earth's climate.

Atmospheric carbon dioxide plays an integral role in the Earth's carbon cycle, whereby CO2 is removed from the atmosphere by some natural processes, such as photosynthesis and the deposition of carbonates (forming limestones, for example), and added back to the atmosphere by other natural processes, such as respiration and the acid dissolution of carbonate deposits. There are two broad carbon cycles on Earth: the fast carbon cycle and the slow carbon cycle. The fast carbon cycle refers to movements of carbon between the environment and living things in the biosphere, whereas the slow carbon cycle involves the movement of carbon between the atmosphere, oceans, soil, rocks, and volcanism. Both cycles are intrinsically interconnected, and atmospheric CO2 facilitates the linkage.

Natural sources of atmospheric CO2 include volcanic outgassing, the combustion of organic matter, wildfires and the respiration processes of living aerobic organisms. Man-made sources of CO2 include the burning of fossil fuels for heating, power generation and transport, as well as some industrial processes such as cement making. It is also produced by various microorganisms through fermentation and cellular respiration.
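The closed-form expression referred to above is not given in the text; a widely used simplified fit for CO2 radiative forcing relative to a reference concentration $C_0$ (due to Myhre and colleagues, 1998) is, as a sketch:

$$\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}$$

Taking $C_0$ = 278 ppm (pre-industrial) and $C$ = 391 ppm, approximately the concentration around the time of the IPCC Fifth Assessment Report (our assumption for illustration), gives $\Delta F \approx 5.35 \times \ln(391/278) \approx 1.8$ W m⁻², consistent with the 1.82 W m⁻² attributed to CO2 above.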
Plants, algae and cyanobacteria convert carbon dioxide to carbohydrates by a process called photosynthesis. They gain the energy needed for this reaction from the absorption of sunlight by chlorophyll and other pigments. Oxygen, produced as a by-product of photosynthesis, is released into the atmosphere and subsequently used for respiration by heterotrophic organisms and other plants, forming a cycle with carbon.

Most sources of CO2 emissions are natural, and are balanced to various degrees by similar CO2 sinks. For example, the decay of organic material in forests, grasslands, and other land vegetation, including forest fires, results in the release of about 436 gigatonnes of CO2 (containing 119 gigatonnes of carbon) every year, while CO2 uptake by new growth on land counteracts these releases, absorbing 451 Gt (123 Gt C). Although much of the CO2 in the early atmosphere of the young Earth was produced by volcanic activity, modern volcanic activity releases only 130 to 230 megatonnes of CO2 each year. Natural sources are more or less balanced by natural sinks, in the form of chemical and biological processes which remove CO2 from the atmosphere. By contrast, as of 2019, the extraction and burning of geologic fossil carbon by humans releases over 30 gigatonnes of CO2 (9 billion tonnes of carbon) each year. This larger disruption to the natural balance is responsible for the recent growth in the atmospheric CO2 concentration (a back-of-the-envelope version of this budget is sketched after this passage).

Overall, there is a large natural flux of atmospheric CO2 into and out of the biosphere, both on land and in the oceans. In the pre-industrial era, each of these fluxes was in balance to such a degree that little net CO2 flowed between the land and ocean reservoirs of carbon, and little change resulted in the atmospheric concentration. From the pre-industrial era to 1940, the terrestrial biosphere represented a net source of atmospheric CO2 (driven largely by land-use changes), but it subsequently switched to a net sink with growing fossil carbon emissions. In 2012, about 57% of human-emitted CO2, mostly from the burning of fossil carbon, was taken up by land and ocean sinks. The ratio of the increase in atmospheric CO2 to emitted CO2 is known as the airborne fraction (Keeling et al., 1995). This ratio varies in the short term and is typically about 45% over longer (5-year) periods. Estimated carbon in global terrestrial vegetation increased from approximately 740 gigatonnes in 1910 to 780 gigatonnes in 1990. By 2009, uptake of emitted CO2 had decreased the pH of seawater by 0.11.

Carbon dioxide in the Earth's atmosphere is essential to life and to most of the planetary biosphere. Over the course of Earth's geologic history, CO2 concentrations have played a role in biological evolution. The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen or hydrogen sulfide as sources of electrons, rather than water. Cyanobacteria appeared later, and the excess oxygen they produced contributed to the oxygen catastrophe, which rendered the evolution of complex life possible. In recent geologic times, low CO2 concentrations below 600 parts per million might have been the stimulus that favored the evolution of C4 plants, which increased greatly in abundance between 7 and 5 million years ago, over plants that use the less efficient C3 metabolic pathway.
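A back-of-the-envelope check of the figures above, in a short script. All numbers are the approximate values quoted in the text; the variable names and the comparison at the end are ours:

```python
# Approximate annual fluxes quoted above, in Gt CO2 per year.
LAND_RELEASE = 436.0      # decay of organic material, incl. forest fires
LAND_UPTAKE = 451.0       # uptake by new growth on land
VOLCANIC = 0.18           # modern volcanism, midpoint of 0.13-0.23 Gt
FOSSIL = 30.0             # human fossil-carbon emissions, as of 2019

GT_CO2_PER_PPM = 7.82     # conversion factor quoted earlier
AIRBORNE_FRACTION = 0.45  # typical 5-year average quoted above

net_natural = LAND_RELEASE + VOLCANIC - LAND_UPTAKE
rise_ppm = FOSSIL * AIRBORNE_FRACTION / GT_CO2_PER_PPM

print(f"Net natural land flux: {net_natural:+.1f} Gt CO2/yr (a net sink)")
print(f"Implied atmospheric rise: {rise_ppm:.1f} ppm/yr")
```

The implied rise of about 1.7 ppm/yr is of the same order as the 2-2.5 ppm/yr growth rate quoted earlier; the shortfall reflects, among other things, land-use emissions that this sketch omits.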
At current atmospheric pressures, photosynthesis shuts down when atmospheric CO2 concentrations fall below roughly 150-200 ppm, although some microbes can extract carbon from the air at much lower concentrations. Today, the average rate of energy capture by photosynthesis globally is approximately 130 terawatts, which is about six times larger than the current power consumption of human civilization. Photosynthetic organisms also convert around 100-115 billion metric tonnes of carbon into biomass per year.

Photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from CO2 and water using energy from light. However, not all organisms that use light as a source of energy carry out photosynthesis, since photoheterotrophs use organic compounds, rather than CO2, as a source of carbon. In plants, algae and cyanobacteria, photosynthesis releases oxygen; this is called oxygenic photosynthesis. Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. However, there are some types of bacteria that carry out anoxygenic photosynthesis, which consumes CO2 but does not release oxygen. Carbon dioxide is converted into sugars in a process called carbon fixation. Carbon fixation is an endothermic redox reaction, so photosynthesis needs to supply both the source of energy to drive this process and the electrons needed to convert CO2 into a carbohydrate; this addition of electrons is a reduction reaction. In general outline and in effect, photosynthesis is the opposite of cellular respiration, in which glucose and other compounds are oxidized to produce CO2 and water and to release chemical energy that drives the organism's metabolism (the overall stoichiometry is given after this passage). However, the two processes take place through a different sequence of chemical reactions and in different cellular compartments.

A 1993 review of scientific greenhouse studies found that a doubling of CO2 concentration would stimulate the growth of 156 different plant species by an average of 37%. The response varied significantly by species, with some showing much greater gains and a few showing a loss. For example, a 1979 greenhouse study found that with a doubled CO2 concentration the dry weight of 40-day-old cotton plants doubled, but the dry weight of 30-day-old maize plants increased by only 20%.

In addition to greenhouse studies, field and satellite measurements attempt to understand the effect of increased CO2 in more natural environments. In free-air carbon dioxide enrichment (FACE) experiments, plants are grown in field plots and the CO2 concentration of the surrounding air is artificially elevated. These experiments generally use lower CO2 levels than the greenhouse studies, and they show lower gains in growth, with the gains depending heavily on the species under study. A 2005 review of 12 experiments at 475-600 ppm showed an average gain of 17% in crop yield, with legumes typically showing a greater response than other species and C4 plants generally showing less. The review also noted the experiments' limitations: the studied CO2 levels were lower, and most of the experiments were carried out in temperate regions. Satellite measurements found an increasing leaf area index for 25% to 50% of Earth's vegetated area over the past 35 years (i.e., a greening of the planet), providing evidence for a positive CO2 fertilization effect.
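The overall stoichiometry referred to above is the textbook net reaction for oxygenic photosynthesis, with cellular respiration running in the reverse direction:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\;h\nu\;}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$

Reading it left to right describes carbon fixation driven by light ($h\nu$); right to left, the oxidation of glucose in respiration.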
A 2017 Politico article states that increased CO2 levels may have a negative impact on the nutritional quality of various human food crops, by increasing the levels of carbohydrates, such as glucose, while decreasing the levels of important nutrients such as protein, iron, and zinc. Crops experiencing a decrease in protein include rice, wheat, barley and potatoes.

The Earth's oceans contain a large amount of CO2 in the form of bicarbonate and carbonate ions, much more than the amount in the atmosphere. The bicarbonate is produced in reactions between rock, water, and carbon dioxide. One example is the dissolution of calcium carbonate:

CaCO3 + CO2 + H2O ⇌ Ca2+ + 2 HCO3-

Reactions like this tend to buffer changes in atmospheric CO2. Since the right side of the reaction produces an acidic compound, adding CO2 on the left side decreases the pH of seawater, a process which has been termed ocean acidification (the ocean becomes more acidic, although its pH value remains in the alkaline range). Reactions between CO2 and non-carbonate rocks also add bicarbonate to the seas. This can later undergo the reverse of the above reaction to form carbonate rocks, releasing half of the bicarbonate as CO2. Over hundreds of millions of years, this has produced huge quantities of carbonate rocks. Ultimately, most of the CO2 emitted by human activities will dissolve in the ocean; however, the rate at which the ocean will take it up in the future is less certain. Even if equilibrium is reached, including dissolution of carbonate minerals, the increased concentration of bicarbonate and decreased or unchanged concentration of carbonate ion will give rise to a higher concentration of un-ionized carbonic acid and dissolved CO2. This higher concentration in the seas, along with higher temperatures, would mean a higher equilibrium concentration of CO2 in the air.

Carbon dioxide has unique long-term effects on climate change that are nearly "irreversible" for a thousand years after emissions stop (zero further emissions). The greenhouse gases methane and nitrous oxide do not persist over time in the same way as carbon dioxide. Even if human carbon dioxide emissions were to completely cease, atmospheric temperatures are not expected to decrease significantly in the short term. This is because the air temperature is determined by a balance between heating, due to greenhouse gases, and cooling due to heat transfer to the ocean. If emissions were to stop, CO2 levels and the heating effect would slowly decrease, but simultaneously the cooling due to heat transfer would diminish (because sea temperatures would get closer to the air temperature), with the result that the air temperature would decrease only slowly. Sea temperatures would continue to rise, causing thermal expansion and some sea level rise. Lowering global temperatures more rapidly would require carbon sequestration or geoengineering.

Carbon moves between the atmosphere, vegetation (dead and alive), the soil, the surface layer of the ocean, and the deep ocean. A detailed model, called the Bern model, has been developed by Fortunat Joos in Bern and colleagues. A simpler model based on it gives the fraction of CO2 remaining in the atmosphere as a function of the number of years after it is emitted into the atmosphere; the commonly quoted fit is reproduced below. According to this model, 21.7% of the carbon dioxide released into the air stays there effectively forever, but of course this is not true if carbon-containing material is removed from the cycle (and stored) in ways that are not operative at present (artificial sequestration).
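The fit referred to above is not reproduced in the source text; the commonly cited approximation to the Bern model's impulse response (the coefficients below are the standard published fit and should be checked against the original) is

$$f(t) \approx 0.217 + 0.259\,e^{-t/172.9} + 0.338\,e^{-t/18.51} + 0.186\,e^{-t/1.186},$$

with $t$ in years. The constant term, 0.217, is the 21.7% quoted above that, in this approximation, never leaves the atmosphere.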
While CO2 absorption and release are always happening as a result of natural processes, the recent rise in CO2 levels in the atmosphere is known to be mainly due to human (anthropogenic) activity, and in particular fossil fuel burning. Burning fossil fuels such as coal, petroleum, and natural gas is the leading cause of increased anthropogenic CO2; deforestation is the second major cause. In 2010, 9.14 gigatonnes of carbon (GtC, equivalent to 33.5 gigatonnes of CO2, or about 4.3 ppm in Earth's atmosphere) were released from fossil fuels and cement production worldwide, compared to 6.15 GtC in 1990. In addition, land-use change contributed 0.87 GtC in 2010, compared to 1.45 GtC in 1990. In 1997, human-caused Indonesian peat fires were estimated to have released between 13% and 40% of the average annual global carbon emissions caused by the burning of fossil fuels. In the period 1751 to 1900, about 12 GtC were released as CO2 to the atmosphere from the burning of fossil fuels, whereas from 1901 to 2013 the figure was about 380 GtC. The Integrated Carbon Observation System (ICOS) continuously releases data about CO2 emissions, budget and concentration at individual observation stations.

Anthropogenic carbon emissions exceed the amount that can be taken up or balanced out by natural sinks. As a result, carbon dioxide has gradually accumulated in the atmosphere, and as of May 2022, its concentration is 50% above pre-industrial levels. Various techniques have been proposed for removing excess carbon dioxide from the atmosphere (see the discussion of artificial sequestration under carbon sinks). Currently, about half of the carbon dioxide released from the burning of fossil fuels is not absorbed by vegetation and the oceans and remains in the atmosphere.

[Figure: false-color image of smoke and ozone pollution from the Indonesian fires of 1997]

The first reproducibly accurate measurements of atmospheric CO2 were flask sample measurements made by Charles David Keeling at Caltech in the 1950s. A few years later, in March 1958, the first ongoing measurements were started by Keeling at Mauna Loa. Measurements at Mauna Loa have been ongoing since then, and measurements are now made at many sites globally, using additional measurement techniques as well. Many measurement sites are part of larger global networks, whose data are often made publicly available on the condition of proper acknowledgment according to the respective data-use policies.

There are several surface measurement (including flask and continuous in situ) networks, including NOAA/ESRL, WDCGG, and RAMCES. The NOAA/ESRL Baseline Observatory Network and the Scripps Institution of Oceanography network data are hosted at the CDIAC at ORNL. The World Data Centre for Greenhouse Gases (WDCGG), part of GAW, has its data hosted by the JMA. The Réseau Atmosphérique de Mesure des Composés à Effet de Serre database (RAMCES) is part of IPSL. From these measurements, further products are made which integrate data from the various sources. These products also address issues such as data discontinuity and sparseness; GLOBALVIEW-CO2 is one such product.

Ongoing ground-based total-column measurements began more recently. Column measurements typically refer to an averaged column amount, denoted XCO2, rather than a surface-only measurement. These measurements are made by the TCCON.
These data are also hosted at the CDIAC and made publicly available according to the data-use policy. Satellite measurements are a more recent addition to atmospheric XCO2 measurements. SCIAMACHY aboard ESA's ENVISAT made global column XCO2 measurements from 2002 to 2012. AIRS aboard NASA's Aqua satellite, launched shortly after ENVISAT in 2002, makes global XCO2 measurements. More recent satellites have significantly improved the data density and precision of global measurements, and newer missions have higher spectral and spatial resolutions. JAXA's GOSAT, launched in 2009, was the first dedicated greenhouse-gas monitoring satellite to successfully achieve orbit; NASA's OCO-2, launched in 2014, was the second. Various other satellite missions to measure atmospheric XCO2 are planned.

As a supporting calculation for the 130 TW figure quoted earlier: 100 × 10¹⁵ grams of carbon per year are fixed by photosynthetic organisms, which is equivalent to 4 × 10¹⁸ kJ/yr = 4 × 10²¹ J/yr of free energy stored as reduced carbon; (4 × 10²¹ J/yr) / (31,556,900 s/yr) = 1.27 × 10¹⁴ W; (1.27 × 10¹⁴ W) / (10¹² W/TW) = 127 TW. The average global rate of photosynthesis is thus about 130 TW (1 TW = 1 terawatt = 10¹² watts).

One of the satellite studies mentioned above reports: "We show a persistent and widespread increase of growing season integrated LAI (greening) over 25% to 50% of the global vegetated area, whereas less than 4% of the globe shows decreasing LAI (browning). Factorial simulations with multiple global ecosystem models suggest that CO2 fertilization effects explain 70% of the observed greening trend."

Global change issues have become significant due to the sustained rise in atmospheric trace gas concentrations (CO2, CH4, N2O) over recent years, attributable to the increased per capita energy consumption of a growing global population.
But Even If You've Been Cured It Can Have Lifelong Health Implications

"Hepatitis C is a lot more than just a liver disease," Reau says. "It has been associated with many medical conditions, such as an increased risk of developing diabetes, kidney disease and cancer." While curing hepatitis C significantly reduces the risk of serious complications, like liver failure, liver cancer and the need for transplantation, it doesn't completely eliminate the health risks associated with the disease. "Hep C is linked to scarring of the liver, or cirrhosis, and the more scar tissue that develops, the greater the likelihood of complications," Reau says. "If there is a lot of scarring, you will need lifelong monitoring." Reau also recommends leading a healthy lifestyle to help prevent re-infection and further liver damage: Limit alcohol consumption, control your weight, avoid high-risk activities and manage diabetes if you have it.

The Evolution Of Hepatitis C Treatment

Hepatitis C has been around for a long time. Even before the development of these new treatments, between 15 and 25 percent of individuals infected with HCV did not become chronically infected; their bodies were able to clear the virus on their own. However, until relatively recently there were few effective treatment options for hepatitis C. Historically, the major treatment regimen was a long course of pegylated interferon and ribavirin. However, these treatments have significant problems: they show only a moderate ability to get rid of the virus, and they have significant side effects. For example, one study found that as many as a quarter of people taking interferon developed major depressive episodes due to the treatment regimen. In addition, those drugs were contraindicated in individuals with advanced liver or kidney disease, which meant that many people with hepatitis C weren't even eligible to take them.

Interferon and ribavirin were also least effective against the most common types of hepatitis C. Genotype 1 was historically difficult to treat with pegylated interferon and ribavirin. The treatment regimen worked slightly better with genotypes 2 and 3, but those types were also less common. The combination of poor efficacy and high intolerance was a driving force for the development of interferon-free methods of hepatitis C treatment. These drugs are known as direct acting antivirals (DAAs). It's DAAs that have led to hepatitis C being considered curable.

What's The Prognosis For Hepatitis B

Your doctor will know you've recovered when you no longer have symptoms and blood tests show:
- Your liver is working normally.
- You have hepatitis B surface antibody.

But some people don't get rid of the infection. If you have it for more than 6 months, you're what's called a carrier, even if you don't have symptoms. This means you can give the disease to someone else through:
- Unprotected sex
- Contact with your blood or an open sore
- Sharing needles or syringes

Doctors don't know why, but the disease does go away in a small number of carriers. For others, it becomes what's known as chronic. That means you have an ongoing liver infection. It can lead to cirrhosis, or hardening of the organ. It scars over and stops working. Some people also get liver cancer.

If you're a carrier or are infected with hepatitis B, don't donate blood, plasma, body organs, tissue, or sperm. Tell anyone you could infect, whether it's a sex partner, your doctor, or your dentist, that you have it.
CDC: "Hepatitis B Questions and Answers for Health Professionals," "Hepatitis B Questions and Answers for the Public."
Mayo Clinic: "Hepatitis B."
UpToDate: "Hepatitis B virus: Screening and diagnosis."
HealthyPeople.gov: "Hepatitis B in Pregnant Women: Screening."
Annals of Internal Medicine: "Screening for Hepatitis B Virus Infection in Nonpregnant Adolescents and Adults: U.S. Preventive Services Task Force Recommendation Statement."

Treating Hepatitis A, B, C: ID Care Infectious Disease Doctors

According to Dr. Aslam, ID Care's unique experience treating chronic illnesses like HIV is what sets us apart. We identify the risk factors, diagnose the strains, and do a lot of community teaching where our patients live. Some of the hepatitis patients we're trying to help are IV drug users. Because of our experience treating people with HIV, we have a lot of community and healthcare connections, and we can help patients get into detox and recovery programs to break habits that raise the risk for hepatitis B and C.

Sometimes, the complex antiviral medicines used for hepatitis B and C can interact with other drugs, and some patients are resistant to their effects. Interestingly, some people who have hepatitis B or C along with HIV may be prescribed hepatitis medications that can affect their HIV as well. "If you have both the diseases, a combination treatment is best; otherwise, patients can develop resistance," warned Dr. Aslam, adding that close monitoring is key in these cases. Difficult, multi-disease scenarios do happen, and according to Dr. Aslam, ID Care is better trained, equipped, and experienced to detect and address the complications, drug interactions, and side issues like drug dependence, depression, and substance abuse that can interfere with successful treatment. We provide a uniquely high level of care to our patients with viral hepatitis and related problems.

ABCs Of Hepatitis: From Chronic To Curable, What You Need To Know

This article was medically reviewed by Dr. Fazila Aslam. Though not uncommon, viral hepatitis is a serious illness that is often less understood than other infectious diseases. Hepatitis A, B, and C are types of viral hepatitis and share some common symptoms, but they infect different patient populations. The different strains are caused by different viruses and are transmitted and treated in different ways. Each hepatitis variant must be identified by a blood test so it can be properly treated, and viral hepatitis should never be ignored, as it can cause serious health consequences. Learn more about the ABCs of hepatitis in this article so this infectious disease is more easily understood and identified early.

What Are The Risk Factors For Getting Hepatitis B

Due to the way that hepatitis B spreads, people most at risk for getting infected include:
- Children whose mothers have been infected with hepatitis B.
- Children who have been adopted from countries with high rates of hepatitis B infection.
- People who have unprotected sex and/or have been diagnosed with a sexually transmitted infection.
- People who live with or work in an institutional setting, such as prisons or group homes.
- Healthcare providers and first responders.
- People who share needles or syringes.
- People who live in close quarters with a person with chronic hepatitis B infection.
- People who are on dialysis.
Acute Vs Chronic Hepatitis C

Acute hepatitis C is used to describe a new infection; it typically occurs within six months of exposure to the virus. Chronic hepatitis C may last throughout a patient's entire lifetime. This infection could result in scarring, cancer, or damage of the liver. In some cases, chronic hepatitis C may even result in death.

Who Is More Likely To Get Hepatitis B

People are more likely to get hepatitis B if they are born to a mother who has hepatitis B. Hepatitis B virus can spread from mother to child during birth. For this reason, people are more likely to have hepatitis B if they:
- were born in a part of the world where 2 percent or more of the population has hepatitis B infection
- were born in the United States, didn't receive the hepatitis B vaccine as an infant, and have parents who were born in an area where 8 percent or more of the population had hepatitis B infection

People are also more likely to have hepatitis B if they:
- are infected with HIV, because hepatitis B and HIV spread in similar ways
- have lived with or had sex with someone who has hepatitis B
- have had more than one sex partner in the last 6 months or have a history of sexually transmitted disease
- are men who have sex with men
- are injection drug users
- work in a profession, such as health care, in which they have contact with blood, needles, or body fluids at work
- live or work in a care facility for people with developmental disabilities
- have diabetes
- have hepatitis C
- have lived in or travel often to parts of the world where hepatitis B is common
- have been on kidney dialysis
- live or work in a prison
- had a blood transfusion or organ transplant before the mid-1980s

In the United States, hepatitis B spreads among adults mainly through contact with infected blood through the skin, such as during injection drug use, and through sexual contact.

Hepatitis C: Common, Deadly, And Curable

18 September 2017, by Rich Hutchinson, Pedro Valencia, Shana Topp, and Thomas Eisenhart

Consider some sobering and little-known facts: Hepatitis C is the most common and deadly infectious disease in the United States, beating out HIV, tuberculosis, and 57 other illnesses tracked by the Centers for Disease Control and Prevention (CDC). More than 4 million Americans are infected, most of them baby boomers, but only 50% have been diagnosed. The remainder are unaware that they have contracted the virus, thus putting themselves and others at risk. And while mortality rates for most infectious diseases are decreasing, deaths from HCV are rising over time. According to the CDC, nearly 20,000 deaths were reported in 2014, up from just 11,000 in 2003.

Perhaps worst of all is that although HCV is common and deadly, about 95% of the time it can be cured with antiviral treatments. Media attention has focused on the high cost of these therapies, but other roadblocks are even more problematic. For instance, there is a lack of awareness about the disease, its risk factors, and available treatments, not only among patients and members of the general population but also among nurses, physicians, and other health care providers.

How Is Hepatitis C Spread

Hepatitis C is spread through exposure to blood from an infected person.
In any case in which the blood of an infected person enters the bloodstream of an uninfected person, the previously uninfected person is likely to become infected. There are a few common scenarios in which this may happen:
- Sharing Drug Paraphernalia. When a drug user inserts a needle into his or her arm, for example when taking heroin, the needle becomes contaminated with the user's blood. If another user inserts that same needle into their arm, any contaminants in the first user's arm are transferred to the second user.
- Sharing Toiletries. Although transferring hepatitis C is less likely in these cases, sharing razor blades and toothbrushes can lead to infection for the same reason that sharing needles causes an infection: traces of blood passing from one person to another.
- Unsafe Sex. Having sex without a condom, particularly in sexual scenarios where blood is more likely to be drawn, like anal sex, is another potential transfer case for hepatitis C.

What Treatments Are Available For Chronic Hepatitis B If Medications Don't Work

If you have advanced hepatitis B, you might also become a candidate for a liver transplant. This path does not always result in a cure because the virus continues in your bloodstream after a transplant. To prevent being infected again after your transplant, you may be prescribed hepatitis B immunoglobulin with an antiviral agent.

What Are The Common Types Of Viral Hepatitis

Although the most common types of viral hepatitis are HAV, HBV, and HCV, some clinicians had previously considered the acute and chronic phases of hepatic infections as types of viral hepatitis. HAV was considered to be acute viral hepatitis because HAV infections seldom caused permanent liver damage that led to hepatic failure. HBV and HCV produced chronic viral hepatitis. However, these terms are outdated and not currently used as frequently, because all of the viruses that cause hepatitis may have acute phase symptoms. Prevention techniques and vaccinations have markedly reduced the current incidence of common viral hepatitis infections; however, there remains a population of about 1 to 2 million people in the U.S. with chronic HBV, and about 3.5 million with chronic HCV, according to the CDC. Statistics are incomplete for determining how many new infections occur each year; the CDC documented reported infections but then goes on to estimate the actual numbers by further estimating the number of unreported infections.

Types D, E, and G Hepatitis

Individuals who already have chronic HBV infection can acquire HDV infection at the same time as they acquire the HBV infection, or at a later time. Those with chronic hepatitis due to HBV and HDV develop cirrhosis rapidly. Moreover, the combination of HDV and HBV virus infection is very difficult to treat.
- People with hemophilia who receive blood clotting factors

Who Should Get The Hepatitis B Vaccine

All newborn babies should get vaccinated. You should also get the shot if you:
- Come in contact with infected blood or body fluids of friends or family members
- Use needles to take recreational drugs
- Have sex with more than one person
- Are a health care worker
- Work in a day-care center, school, or jail

Can Hepatitis B Be Prevented

The hepatitis B vaccine is one of the best ways to control the disease. It is safe, effective and widely available. More than one billion doses of the vaccine have been administered globally since 1982.
The World Health Organization says the vaccine is 98-100% effective in guarding against the virus. Newborns should be vaccinated. The disease has also been more widely prevented thanks to:
- Widespread global adoption of safe blood-handling practices. WHO says 97% of the blood donated around the world is now screened for HBV and other diseases.
- Safer blood injection practices, using clean needles.
- Safe-sex practices.

You can help prevent hepatitis B infections by:
- Practicing safe sex.
- Never sharing personal care items like toothbrushes or razors.
- Getting tattoos or piercings only at shops that employ safe hygiene practices.
- Not sharing needles to use drugs.
- Asking your healthcare provider for blood tests to determine if you have HBV or if you are immune.

Preventing The Spread Of Viral Hepatitis

Having good personal health habits is the key to preventing the spread of many diseases, including hepatitis. Other preventive measures include:

Getting vaccines. A hepatitis B vaccine is given to newborns, babies, and toddlers as part of the routine vaccine schedule. A hepatitis A vaccine is available for people at risk for the disease while traveling. There are no vaccines for hepatitis C, D, or E at this time.

Blood transfusion screening. Blood for transfusions is routinely screened for hepatitis B and C to reduce the risk of infection.

Antibody treatment. If a person has been exposed to hepatitis, an antibody treatment can be given to help protect him or her from the disease.

Safe sex. Practice safe sex, including using condoms.

Safe needle use. Don't share or reuse needles, syringes, or other equipment.

Not sharing personal items. Don't use personal items that may have even a small amount of an infected person's blood. These include nail clippers, toothbrushes, glucose monitors, or razors.

Getting tattoos safely. If you plan on getting a tattoo, use a licensed facility only. Don't get tattoos in an unsafe setting.

Who Are Hepatitis B Carriers

Hepatitis B carriers are people who have the hepatitis B virus in their blood, even though they don't feel sick. Between 6% and 10% of those people who've been infected with the virus will become carriers and can infect others without knowing it. There are over 250 million people in the world who are carriers of HBV, with about 10% to 15% of the total located in India. Children are at the highest risk of becoming carriers. About 9 in 10 babies infected at birth become HBV carriers, and about half of children who are infected between birth and age 5 carry the virus. A blood test can tell you if you are a hepatitis B carrier.

Medical Treatment For Hepatitis A, B & C

Treatment for hepatitis A, B, or C is based on which type of hepatitis is present in the bloodstream and the severity of the resulting liver damage. Depending on the results of diagnostic tests, our specialists at NYU Langone may recommend antiviral medication to stop the virus from replicating and protect your liver from further damage.

How Do You Get Hepatitis A

The main way you get hepatitis A is when you eat or drink something that has the hep A virus in it. A lot of times this happens in a restaurant. If an infected worker there doesn't wash their hands well after using the bathroom, and then touches food, they could pass the disease to you. Food or drinks you buy at the supermarket can sometimes cause the disease, too.
The ones most likely to get contaminated are:
- Ice and water

Another way you can get hep A is when you have sex with someone who has it.

What Should You Know About Pregnancy And Hepatitis B

A pregnant woman who has hepatitis B can pass the infection to her baby at delivery. This is true for both vaginal and cesarean deliveries. You should ask your healthcare provider to test you for hepatitis B when you find out you are pregnant. However, while it is important for you and your healthcare provider to know if you do have hepatitis B, the condition should not affect the way that your pregnancy progresses.

If you do test positive, your provider may suggest that you contact another healthcare provider, a liver doctor, who is skilled in managing people with hepatitis B infections. You may have a high viral load and may need treatment during the last 3 months of your pregnancy. A viral load is the term for how much of the infection you have inside of you.

You can prevent your infant from getting hepatitis B infection by making sure that your baby gets the hepatitis B vaccine in the hours after they are born, along with the hepatitis B immunoglobulin. These two shots are given in two different locations on the baby. They are the first shots needed. Depending on the type of vaccine used, two or three more doses must be given, usually when the baby is 1 month old and then 6 months old, with the last by the time the baby is 1 year old. It is critical that all newborns get the hepatitis B vaccination, but even more important if you have hepatitis B yourself.
Stress has been defined in different ways over the years. Originally, it was conceived of as pressure from the environment, then as strain within the person. The generally accepted definition today is one of interaction between the situation and the individual. It is the psychological and physical state that results when the resources of the individual are not sufficient to cope with the demands and pressures of the situation. Thus, stress is more likely in some situations than others and in some individuals than others. Stress can undermine the achievement of goals, both for individuals and for organisations (box 1).

Signs of stress can be seen in people's behaviour, especially in changes in behaviour. Acute responses to stress may be in the areas of feelings (for example, anxiety, depression, irritability, fatigue), behaviour (for example, being withdrawn, aggressive, tearful, unmotivated), thinking (for example, difficulties of concentration and problem solving) or physical symptoms (for example, palpitations, nausea, headaches). If stress persists, there are changes in neuroendocrine, cardiovascular, autonomic and immunological functioning, leading to mental and physical ill health (for example anxiety, depression, heart disease) (box 2, fig 1).1

Situations that are likely to cause stress are those that are unpredictable or uncontrollable, uncertain, ambiguous or unfamiliar, or involving conflict, loss or performance expectations. Stress may be caused by time limited events, such as the pressures of examinations or work deadlines, or by ongoing situations, such as family demands, job insecurity, or long commuting journeys.

Resources that help meet the pressures and demands faced at work include personal characteristics such as coping skills (for example, problem solving, assertiveness, time management) and the work situation, such as a good working environment and social support. These resources can be increased by investment in work infrastructure, training, good management and employment practices, and the way that work is organised.

Historically, the typical response from employers to stress at work has been to blame the victim of stress, rather than its cause. Increasingly, it is being recognised that employers have a duty, in many cases in law, to ensure that employees do not become ill. It is also in their long term economic interests to prevent stress, as stress is likely to lead to high staff turnover, an increase in sickness absence and early retirement, increased stress in those staff still at work, reduced work performance and increased rate of accidents, and reduced client satisfaction.

Good employment practice includes assessing the risk of stress amongst employees. This involves:
- looking for pressures at work which could cause high and long lasting levels of stress
- deciding who might be harmed by these
- deciding whether you are doing enough to prevent that harm.

HOW STRESS IS CAUSED

The degree of stress experienced depends on the functioning of two protective physiological mechanisms:

"Alarm reaction". When confronted with a threat to our safety, our first response is physiological arousal: our muscles tense and breathing and heart rate become more rapid.
This serves us well when the threat is the proverbial bull in the field rushing towards us. We either fight or flee. Present day threats tend to be more psychological—for example, unjustified verbal attack by a superior at work. It is usually not socially acceptable to act by "fight or flight", and an alternative means of expressing the resultant emotional and physical energy is required. This falls in the arena of assertive communication.

"Adaptation". The second adaptive mechanism allows us to cease responding when we learn that stimuli in the environment are no longer a threat to our safety. For example, when we first spend time in a house near a railway line, our response to trains hurtling past is to be startled, as described above. Over time, our response dwindles. If this process did not function, we would eventually collapse from physical wear and tear, and mental exhaustion.

Stress is experienced when either of these mechanisms is not functioning properly or when we find it difficult to switch appropriately from one to another. This forms the basis of individual approaches to stress management (fig 2).

Figure 2 shows that it is the perception, or appraisal, of the situation that is key to whether or not it causes stress. This is the basis of the transactional model of stress,2 whereby the ability of a person to prevent or reduce stress is determined by that person's appraisal of (a) the threat within a situation (primary appraisal), and (b) the appraisal of his/her coping skills to deal with that threat (secondary appraisal). These appraisals have been shaped by past experiences of confronting stress and, in turn, influence future behaviour and appraisals. Thus, the process of appraisal, behaviour, and stress is continuous, and managing stress can result from changing the way the situation is appraised (cognitive techniques) or responded to (behavioural or cognitive techniques).

WORKPLACE FACTORS CAUSING STRESS

The workplace is an important source of both demands and pressures causing stress, and structural and social resources to counteract stress. The workplace factors that have been found to be associated with stress and health risks can be categorised as those to do with the content of work and those to do with the social and organisational context of work (fig 1). Those that are intrinsic to the job include long hours, work overload, time pressure, difficult or complex tasks, lack of breaks, lack of variety, and poor physical work conditions (for example, space, temperature, light).

Unclear work or conflicting roles and boundaries can cause stress, as can having responsibility for people. The possibilities for job development are important buffers against current stress, with under promotion, lack of training, and job insecurity being stressful. There are two other sources of stress, or buffers against stress: relationships at work, and the organisational culture. Managers who are critical, demanding, unsupportive or bullying create stress, whereas a positive social dimension of work and good team working reduces it.

An organisational culture of unpaid overtime or "presenteeism" causes stress. On the other hand, a culture of involving people in decisions, keeping them informed about what is happening in the organisation, and providing good amenities and recreation facilities reduces stress. Organisational change, especially when consultation has been inadequate, is a huge source of stress.
Such changes include mergers, relocation, restructuring or "downsizing", individual contracts, and redundancies within the organisation.

A systematic review of the evidence for work factors associated with psychological ill health and associated absenteeism3 (Michie and Williams 2001, unpublished data) found the key factors to be:
- long hours worked, work overload and pressure
- the effects of these on personal lives
- lack of control over work and lack of participation in decision making
- poor social support
- unclear management and work role and poor management style.

Three of these factors form part of the influential control-demand model of work related strain.4 According to this model, work related strain and risks to health are most likely to arise when high job demands are coupled with low decision latitude (that is, low personal control over work and limited opportunities to develop skills). On the other hand, high job demands with high decision latitude give the possibility of motivation to learn, active learning, and a sense of accomplishment. Of the two, decision latitude has been found to be more important than demand.5 Since its introduction in 1979, the model has been extended to include social support at work as a predictor of job strain.6 Karasek's model has received sufficient empirical support for it to provide a useful framework for interventions at work.

As is evident from figs 1 and 2, individuals differ in their risk of experiencing stress and in their vulnerability to the adverse effects of stress. Individuals are more likely to experience stress if they lack material resources (for example, financial security) and psychological resources (for example, coping skills, self esteem), and are more likely to be harmed by this stress if they tend to react emotionally to situations and are highly competitive and pressured (type A behaviour). The association between pressures and well being and functioning can be thought of as an inverted U, with well being and functioning being low when pressures are either high or very low (for example, in circumstances of unemployment). Different people demonstrate different shapes of this inverted U, showing their different thresholds for responses to stress. A successful strategy for preventing stress within the workplace will ensure that the job fits the person, rather than trying to make people fit jobs that they are not well suited to.

INTERACTIONS BETWEEN WORK AND HOME STRESS

Increasingly, the demands on the individual in the workplace reach out into the homes and social lives of employees. Long, uncertain or unsocial hours, working away from home, taking work home, high levels of responsibility, job insecurity, and job relocation all may adversely affect family responsibilities and leisure activities. This is likely to undermine a good and relaxing quality of life outside work, which is an important buffer against the stress caused by work. In addition, domestic pressures such as childcare responsibilities, financial worries, bereavement, and housing problems may affect a person's robustness at work. Thus, a vicious cycle is set up in which the stress caused in either area of one's life, work or home, spills over and makes coping with the other more difficult. Women are especially likely to experience these sources of stress,7 since they still carry more of the burden of childcare and domestic responsibilities than men.
In addition, women are concentrated in lower paid, lower status jobs, may often work shifts in order to accommodate domestic responsibilities, and may suffer discrimination and harassment.

INDIVIDUAL STRESS MANAGEMENT

Most interventions to reduce the risk to health associated with stress in the workplace involve both individual and organisational approaches. Individual approaches include training and one-to-one psychology services—clinical, occupational, health or counselling. They should aim to change individual skills and resources and help the individual change their situation. The techniques listed in fig 3 mirror the active coping (fight/flight) and rest phases (habituation) of the stress model presented earlier.

Training helps prevent stress through:
- becoming aware of the signs of stress
- using this to interrupt behaviour patterns when the stress reaction is just beginning. Stress usually builds up gradually; the more stress builds up, the more difficult it is to deal with
- analysing the situation and developing an active plan to minimise the stressors
- learning skills of active coping and relaxation, developing a lifestyle that creates a buffer against stress
- practising the above in low stress situations first, to maximise chances of early success and boost self confidence and motivation to continue.

A wide variety of training courses may help in developing active coping techniques—for example, assertiveness, communications skills, time management, problem solving, and effective management. However, there are many sources of stress that the individual is likely to perceive as outside his or her power to change, such as the structure, management style or culture of the organisation. It is important to note that stress management approaches that concentrate on changing the individual without changing the sources of stress are of limited effectiveness, and may be counterproductive by masking these sources. For example, breathing deeply and thinking positively about a situation causing stress may make for a temporary feeling of well being, but will allow a damaging situation to continue, causing persistent stress and, probably, stress to others. The primary aim of the individual approach should be to develop people's skills and confidence to change their situation, not to help them adapt to and accept a stressful situation.

ORGANISATIONAL STRESS MANAGEMENT

The prevention and management of workplace stress requires organisational level interventions, because it is the organisation that creates the stress. An approach that is limited to helping those already experiencing stress is analogous to administering sticking plaster on wounds, rather than dealing with the causes of the damage. An alternative analogy is trying to run up an escalator that's going down!

Organisational interventions can be of many types, ranging from structural (for example, staffing levels, work schedules, physical environment) to psychological (for example, social support, control over work, participation). The emphasis on the organisation, rather than the individual, being the problem is well illustrated by the principles used in Scandinavia, where there is an excellent record of creating healthy and safe working environments3,8 (box 3).
Box 3: Principles of preventing work stress in Scandinavia
- Working conditions are adapted to people's differing physical and mental aptitudes
- Employee is given the opportunity to participate in the design of his/her own work situation, and in the processes of change and development affecting his/her work
- Technology, work organisation, and job content are designed so that the employee is not exposed to physical or mental strains that may lead to illness or accidents. Forms of remuneration and the distribution of working hours are taken into account
- Closely controlled or restricted work is avoided or limited
- Work should provide opportunities for variety, social contact, and cooperation as well as coherence between different working operations
- Working conditions should provide opportunities for personal and vocational development, as well as for self determination and professional responsibility

Assessing the risk of stress within the workplace must take into account:
- the likelihood and the extent of ill health which could occur as a result of exposure to a particular hazard
- the extent to which an individual is exposed to the hazard
- the number of employees exposed to the hazard.

The analysis of stressful hazards at work should consider all aspects of its design and management, and its social and organisational context.9 Although the priority is prevention, protective measures can be introduced to control the risk and reduce the effects of a given hazard. A detailed account of how to assess and reduce risk associated with exposure to stressful hazards is summarised in box 4.

Box 4: A risk assessment strategy—six stages9
- Identification of hazards: Reliably identify the stressors which exist in relation to work and working conditions, for specified groups of employees, and make an assessment of the degree of exposure
- Assessment of harm: Collect evidence that exposure to such stressors is associated with impaired health in the group being assessed or of the wider organisation. This should include a wide range of health-related outcomes, including symptoms of general malaise and specific disorders, and of organisational and health related behaviours such as smoking and drinking, and sickness absence
- Identification of likely risk factors: Explore the associations between exposure to stressors and measures of harm to identify likely risk factors at the group level, and to make some estimate of their size and/or significance
- Description of underlying mechanisms: Understand and describe the possible mechanisms by which exposure to the stressors is associated with damage to the health of the assessment group or to the organisation
- Audit existing management control and employee support systems: Identify and assess all existing management systems, both in relation to the control of stressors and the experience of work stress, and in relation to the provision of support for employees experiencing problems
- Recommendations on residual risk: Taking existing management control and employee support systems into proper account, make recommendations on the residual risk associated with the likely risk factors related to work stress

Increasingly, legislation requires employers to assess and address all risks to employee health and safety, including their mental health (for example, the European Commission's framework directive on the introduction of measures to encourage improvements in the safety and health of workers at work).
Creating a safe system of work requires targeting equipment, materials, the environment and people (for example, ensuring sufficient skills for the tasks). It also requires having monitoring and review systems to assess the extent to which prevention and control strategies are effective.10

Although associations between workplace factors and psychological ill health and associated sickness absence have been well documented, evidence based interventions to reduce these problems are scarce.3 Successful interventions used training and organisational approaches to increase participation in decision making and problem solving, increase support and feedback and improve communication.11–16 These studies found that:
- those taught skills to mobilise support at work and to participate in problem solving and decision making reported more supportive feedback, feeling more able to cope, and better work team functioning and climate. Among those most at risk of leaving, those undergoing the training reported reduced depression12
- staff facing organisational change who were taught skills of stress management, how to participate in, and control, their work showed a decrease of stress hormone levels14
- staff taught verbal and non-verbal communication and empathy skills demonstrated reduced staff resignations and sick leave16
- physically inactive employees undergoing stress management training improved their perceived coping ability, and those undergoing aerobic exercise improved their feelings of well being and decreased their complaints of muscle pain, but also reported reduced job satisfaction11
- employees undergoing one of seven training programmes emphasising one or more aspects of stress management—physiological processes, coping with people or interpersonal awareness processes—showed reductions in depression, anxiety, psychological strain, and emotional exhaustion immediately after the programme. There was a further reduction in psychological strain and emotional exhaustion at 9–16 months' follow up13
- those on long term sickness absence who were referred early to the occupational health department (within two or three months' absence) reduced their sickness absence from 40 to 25 weeks before resumption of work and from 72 to 53 weeks before leaving employment for medical reasons, leading to large financial savings.15

Success in managing and preventing stress will depend on the culture in the organisation. Stress should be seen as helpful information to guide action, not as weakness in individuals. A culture of openness and understanding, rather than of blame and criticism, is essential. Building this type of culture requires active leadership and role models from the top of the organisation, the development and implementation of a stress policy throughout the organisation, and systems to identify problems early and to review and improve the strategies developed to address them. The policy and its implementation should be negotiated with the relevant trade unions and health and safety committees (for a trade union example of a model agreement for preventing stress at work, see the Manufacturing, Science and Finance Union guide17). Last, but by no means least, interventions should be evaluated, so that their effectiveness can be assessed. Ideally, the method of achieving this should include a high response rate, valid and reliable measures, and a control group.
Two measures that provide a comprehensive analysis of work stress and have been widely used are the Job Content Questionnaire, which includes measures of the predictors of job strain described earlier,18 and the Occupational Stress Indicator.19

QUESTIONS

Stress is best seen as:
- pressures and demands put on an individual
- maladaptive responses of individuals—for example, anxiety, irritability, aches and pains
- an interaction between situational demands and the resources of the individual to cope with them
- a reaction that can be positive or negative, depending on the person and the situation
- a transactional process, whereby the ability of a person to prevent stress is determined by that person's appraisal of threat and his/her appraisal of his/her coping abilities

The following are key dimensions of Karasek's model of job strain:

The following are examples of active coping:
- distracting oneself from the problem

Individual approaches to stress management aim to:
- help the individual change their situation
- change the sources of stress
- help develop people's confidence
- help people adapt to the stressful situation

Organisational approaches to stress management:
- are analogous to putting a sticking plaster on a wound
- consider stress in terms of hazards
- depend on training
- are appropriate in a culture of blame and criticism
- include interventions designed to increase participation in decision making
Poor health outcomes among lesbian, gay, bisexual, transgender, intersex, queer, and people with a diversity of sexual and gender identities (LGBTIQ+)1-4 highlight the necessity to ensure equitable access to high-quality care5. Research indicates these communities may experience myriad challenges when engaging with health systems, such as multilevel discrimination, receipt of inappropriate care, and insufficient expertise on the part of providers5,6. For LGBTIQ+ populations residing outside of major cities, the rural healthcare landscape has fewer services (specialist or general), health workforce shortages, and travel-related access burdens that can shape health and health care7-9. The precise nature and implications of this intersection of rurality and LGBTIQ+ identity are not yet well understood. Although rural LGBTIQ+ people form part of study samples, findings specific to this cohort are often not distinguished or explored, which may reflect a lower representation within the sample, as well as the possibility that rural LGBTIQ+ community members do not feel comfortable disclosing identity, including as part of research studies10,11, which in turn impedes accumulating understanding.

To date, a systematic review of the peer-reviewed literature published between January 1998 and February 2016, limited to US samples, undertaken by Rosenkrantz et al (2017)10, offers the most comprehensive reporting on this body of work. They identified the presence of mental health issues, sexual risk-taking, and substance-use concerns among rural lesbian, gay, bisexual, and transgender (LGBT) communities, for whom stigma, discrimination, insufficient provider cultural competency, and challenges associated with disclosure of identity were experienced in health service interactions. Further, features of the sociocultural context shaped these experiences, including the education and approach of providers, a number of access barriers (eg costs), and a lack of social support, combined with social stigma. Rosenkrantz et al found ambiguous and inconsistent results in the comparison of the health and health care between urban and rural LGBT populations (compounded by methodological limitations in this body of literature) and concluded that the differences observed concerning rural populations warranted further investigation of the experiences of this population.

Findings from Rosenkrantz et al (2017)10, in addition to recent research7,12,13, underscore the need to support an emerging understanding of the health and healthcare experiences of rural LGBTIQ+ communities and, with it, grow capacity to inform policy and guide practice. This review contributes to these efforts by building on and extending this foundation10 along several dimensions. First, the current review provides updated knowledge by synthesising the relevant evidence generated within the past 5 years. Second, the geographical scope of the study is expanded to Canada, Australia, New Zealand, and the UK, as well as the USA, to capture data from countries with comparable health systems that provide services to rural populations. Third, to aid comprehensiveness and incorporate those insights not represented in traditional academic outlets, grey literature is included. The present review seeks to progress understanding about rural LGBTIQ+ communities with regard to wellbeing, healthcare access and experience, and barriers and facilitators to health care.
Search strategies for the peer-reviewed literature were adopted from Rosenkrantz et al10 and revised to reflect updated terminology and the expanded focus. The strategy contained three blocks of relevant terms and keywords for identity-related terms, rural-related terms, and health and healthcare-related terms, shown in Table 1. PubMed, Academic Search Premier, CINAHL, and PsychInfo databases were searched in August 2020 to capture literature published from January 2015 to August 2020. This timeframe was decided upon to capture relatively recent literature, including literature published since the Rosenkrantz et al review. Results of the searches were imported into EndNote X9 and duplicates removed.

Grey literature was searched via Google Advanced Search for the same period. Given multiple groups of individual search terms, a customised Google Search Application Programming Interface (API) client was developed to combine terms in the aforementioned keyword blocks14. The automated algorithm generated a set of unique search queries for each possible combination of individual terms from each group, combined through the Boolean 'AND' operator. The algorithm was configured to ignore results from YouTube and Wikipedia, and to return a maximum of 50 results per query15. The returned results were aggregated and duplicates removed. In this work, three groups consisting of 5, 3, and 10 terms respectively were provided, resulting in 150 individual queries. A total of 6774 raw results were returned, of which 2785 were considered unique. Raw Google API results were parsed into a comma-separated text file, with each entry containing a numeric index, the query string from which it was returned, the results page number, page title, URL, and a summary 'snippet'. Reference lists of all included articles were screened.

Records were included if they were published in English; qualitative, quantitative or mixed-method; reported findings from the US, Canada, Australia, New Zealand, or UK; published between 2015 and 2020; and reported on the health of and/or healthcare services for LGBTIQ+ adults in rural areas. The reporting of primary findings was required for peer-reviewed records. Study selection was documented and is summarised in a flow chart compliant with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)16 (Fig 1).

For the peer-reviewed articles, one author reviewed the titles, abstracts, and full texts of articles that appeared to meet the inclusion criteria; another author followed the same procedure for the grey literature. At each step, a second author reviewed a sample of 10% of the articles to ensure consistent application of the inclusion criteria. Throughout the process, consultation was undertaken between the authors to resolve any uncertainty, and progress was reported on and discussed with all authors.

The quality of peer-reviewed studies was assessed using the Mixed Methods Appraisal Tool (MMAT)17. This tool was specifically designed to assess a range of quality dimensions in qualitative, quantitative, and mixed-methods research. The Authority, Accuracy, Coverage, Objectivity, Date, Significance (AACODS) checklist18 was used to appraise the grey literature. An independent assessment of all included literature occurred and, in each case, a second assessor examined a 10% sample for rigour.
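As an illustration of the combinatorial query-generation logic described in the methods above, the following is a minimal sketch; the term lists and output file name are hypothetical placeholders (the review's actual search terms are those in Table 1), and the Google API call itself is omitted:

```python
# Sketch of generating one query per combination of terms across keyword blocks.
import csv
import itertools

identity_terms = ["lgbtiq", "transgender", "lesbian", "gay", "bisexual"]  # 5 terms (hypothetical)
rural_terms = ["rural", "remote", "non-metropolitan"]                      # 3 terms (hypothetical)
health_terms = ["health", "healthcare", "wellbeing", "mental health",
                "primary care", "access", "barriers", "services",
                "discrimination", "stigma"]                                # 10 terms (hypothetical)

def build_queries(groups):
    """Combine one term from each group with the Boolean AND operator."""
    return [" AND ".join(combo) for combo in itertools.product(*groups)]

queries = build_queries([identity_terms, rural_terms, health_terms])
assert len(queries) == 5 * 3 * 10  # 150 individual queries, as reported in the text

# Results from each query would then be aggregated, de-duplicated, and written
# to a comma-separated file with the fields described in the text.
with open("search_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["index", "query", "page", "title", "url", "snippet"])
```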
Data extraction and synthesis strategy

A purpose-designed Microsoft Excel template was used to guide data extraction, which included citation, year, title, country of study, population, study aims, study design, recruitment strategy, sample size, data collection method, analysis method, barriers, facilitators, findings, and other similar details. These data were then grouped thematically, corresponding to the key areas of focus. Microsoft Excel software was used to manage and assist with the analysis. The reporting framework employed in Rosenkrantz et al10 offered an initial template for grouping the data, which was subsequently revised to better capture the data collected in this review. Ethics approval was not required as the systematic review included publicly available resources and data from published studies.

Searches conducted on literature published between January 2015 and July 2020 returned 296 unique peer-reviewed records and 2785 grey literature documents, with one additional peer-reviewed record included following reference list review. The full texts of 69 peer-reviewed papers and 2785 grey literature documents were assessed for eligibility. A total of 27 peer-reviewed papers and 2773 grey literature documents were excluded on the basis that metropolitan and non-metropolitan data were not distinguishable, LGBTIQ+ participants' data were not distinguishable, or there was insufficient focus on health/health care. Subsequently, 42 eligible peer-reviewed papers and 12 grey literature documents were included (Fig 1).

Characteristics of included studies

Complete details of the included peer-reviewed studies and the grey literature are shown in Tables 2 and 3, respectively. In summary, studies from the peer-reviewed literature were conducted in the USA (n=27)13,19-44, Australia (n=8)7,12,45-50, Canada (n=4)8,9,51,52, and the UK (n=2)53,54, with a single study conducted in both the USA and Canada (Table 2)55. Grey literature was produced in Australia (n=6)56-61, the USA (n=5)62-66, and Canada (n=1)67 (Table 3). Peer-reviewed studies used a range of quantitative (n=20)19-24,26-28,30,32,33,35-39,41,48,55, qualitative (n=17)7-9,12,13,25,29,34,40,43,45,46,49-52,54, and mixed-methods (n=5)31,42,44,47,53 designs. Grey literature comprised reports (n=4)56,64,66,67, articles (n=5)60-63,65, submissions (n=2)57,58, and an information sheet document59.

In the peer-reviewed literature, 36 studies reported findings from LGBTIQ+ communities, most commonly men who have sex with men (MSM) (n=7)13,28-30,33,39,53; lesbians (n=7)20-22,25,34,35,50; lesbian, gay, bisexual and transgender (n=4)36,37,41,45; and transgender (n=4)7,9,24,40. In the grey literature, the communities most commonly identified were lesbian, gay, bisexual, transgender, and intersex (n=3)56,57,59; transgender (n=2)61,63; and lesbian, gay, bisexual and transgender (n=2)60,64. Some of the literature concerned specific subgroups, namely elders (n=5)25,46,50,54,57, veterans (n=2)24,32, and young people (n=4)12,45,48,67. Healthcare provider perspectives were reported on in nine peer-reviewed studies7-9,31,38,42-45 and in five of the grey literature documents57,61,63-65. Additionally, in one study, general practices were examined via document analysis49. The majority of the peer-reviewed studies and grey literature addressed some aspect of engagement with health services. Across these bodies of literature, multiple or non-specific health services were most frequently referenced.
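The screening counts reported above can be checked for internal consistency; the following is a minimal sketch using only the figures given in the text:

```python
# Consistency check of the full-text screening counts reported in the text.
full_text_assessed = {"peer_reviewed": 69, "grey": 2785}
excluded = {"peer_reviewed": 27, "grey": 2773}

included = {k: full_text_assessed[k] - excluded[k] for k in full_text_assessed}
assert included == {"peer_reviewed": 42, "grey": 12}  # matches the reported totals
print(included)
```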
Quality of included literature

The majority of peer-reviewed studies satisfied 80% or more of the quality components corresponding with the study design (Table 4). On average, the grey literature satisfied 5.5 of the six AACODS criteria, with a range of 4/6 to 6/6 (Table 5).

Data synthesis resulted in four superordinate themes: wellbeing, healthcare access and experiences, barriers, and facilitators. The first theme examines various domains of health status, while the second theme presents the experiences of rural LGBTIQ+ communities in engaging with health care, including reporting findings of what has constituted quality care. The final two themes identify those barriers and facilitators that are interconnected with health status and experiences of engaging with care. Table 6 depicts themes and nascent subthemes identified in included studies.

Theme 1. Wellbeing: This theme concerns mental, physical and sexual wellbeing, as well as reporting on substance use.

Mental health

Depressive symptoms were reported as an issue among rural LGBTIQ+ communities26,28,32,41,46,47, as were anxiety32,47,55 and elevated psychological distress (measures often included indicators of depression and anxiety)19,48,58. Multiple studies noted the interconnections between mental health and participants' reduced comfort in disclosing sexual identity37,48, as well as deficits in community support, isolation, and loneliness26,34,37,40,46,48,58,59,61.

Physical health

Studies reported a mixture of good47 and poor27 overall health, with differences noted among subgroups (eg cisgender men reported better health relative to other groups)41. Co-morbid and chronic conditions were identified as impacting health status41,46,47, with approximately a third of participants diagnosed with chronic conditions in one study41. Weight concerns were also identified as an issue22,27,32,35. Where ageing was examined, participants expressed fears about decline in physical health and mobility, and were concerned about being forced by disability or illness to enter assisted living, where their isolation might be further exacerbated25,50,54,57. Gardiner's (2018)46 research provided insight into the intersections of ageing, rurality, and living with HIV, where rural gay men felt that the complexities of their lived experiences had given them wisdom that could be applied to the process of ageing46.

Sexual health

Rural LGBTIQ+ community members reported receiving insufficient sexual health education and prevention counselling21,29,33,65. Mixed results were found where studies compared the prevalence of rural men living with HIV to their metropolitan counterparts24,30,33,48. For MSM in rural areas, HIV-related stigma was correlated with loneliness and impacted sexual health practices30. Older gay men living with HIV in rural areas also managed co-morbidities and treatment side-effects46. It is important to contextualise this information within the broader experiences of provider interaction, as will be discussed in the following sections.

Substance use

A high prevalence of current and former tobacco use among rural LGBTIQ+ communities was indicated in several studies23,24,36,41,67, with some studies indicating differences among subgroups within these communities27,32. Whitehead et al (2016) found a high prevalence of binge drinking among rural LGBTIQ+ communities41. In other studies, rural status was not found to significantly impact alcohol or illicit drug issues24,32.
Similarly, Bukowski et al (2017) found that neither rurality nor urbanity impacted the prevalence of illicit drug use among transgender veterans24.

Theme 2. Healthcare access and experience: This theme captures service access and use, cultural competency, and variable quality of care, as well as reporting on disclosure.

Service access and use

Local availability of appropriate services was a key issue for rural LGBTIQ+ participants7,9,12,13,19,25,29,39-41,45-47,50,53,54,57-61,63-67. A lack of specialist gender services could mean long waiting lists and travel, which may be difficult to negotiate, particularly for young people without family support9,40,41,47,67. Suboptimal preventative healthcare practices were identified, including uptake in vaccination and screening for a range of general, sexual, and reproductive health concerns21,33,41. Even where care was available, the full range of services was not easily accessible for rural LGBTIQ+ populations; this would, for example, include where there was fear of discriminatory treatment from providers56, and where local providers would not prescribe pre-exposure prophylaxis (PrEP, a course of medication shown to reduce the risk of contracting HIV)13,21.

Cultural competency

Participants highlighted the importance of cultural competency (the knowledge and awareness necessary to provide appropriate care) on the part of providers7-9,12,13,20,21,25,38,40,41,44,47,50-52,54,56-58,61,63-65. Identifying knowledgeable providers may not always be easy, as demonstrated in Staunton-Smith et al (2019), where only 6 of a sample of 37 primary health practices visibly displayed signs of a culturally inclusive LGBTIQ+ environment49. Insufficient provider knowledge relative to the concerns of rural LGBTIQ+ community members was reported7-9,12,19,38,40,44,57,59, including this being 'out of their scope (p. 86)'9. This was mirrored in findings concerning healthcare providers' knowledge and preparedness31,38,41-44,64.

Poor-quality care

The majority of the included studies cited interactions between rural LGBTIQ+ community members and providers which were characterised by explicit and implicit discrimination, stigma, and degradation7,9,19,20,29,42,52,56-58,64. This included instances where participants were exposed to detrimental attitudes and judgements9,12,47,50,61,64, breaches of confidentiality51,52, provider failure to support choices made51, refusal of services9,13,33,64, and invasive questions9,65. In many of these cases, community members were subjected to heterosexism and cisgenderism, which, among other means, was enacted in language (eg misgendering)58,64.

Good-quality care

Participants valued inclusive, confidential, competent, and affirmative approaches that did not reproduce dominant and stigmatising paradigms via provider behaviour and language9,12,13,42,45,47,51,52,56,57. The benefits of a whole-of-person approach to care were expressed7-9, for example 'trans healthcare is more than just hormones and surgeries (p. 438)'7. Additionally, it was appreciated when providers engaged in advocacy and facilitated connections with support systems8,9.

Disclosure

Participants reported that they had not been asked about sexuality and gender by providers, as well as having few opportunities for disclosure20,53. Participants described navigating this process carefully, given the possible impact upon the relationship9,13,29,50-52,64,65.
Previous reactions to disclosure were reported to shape this decision9,29,57, and a mixture of affirming and negative reactions was reported in studies20,53,62. A complex picture of disclosure emerged: for some participants it was a way to screen out unsuitable providers, carers or the care agencies they represented54, while others felt disclosure was not relevant to their care53.

Theme 3. Barriers: Reported within this theme are barriers concerning negative experiences and fear about future interactions, a paucity of available, appropriate services, and financial and practical issues, as well as challenges for providers.

Negative experiences and fear about future interactions: Apprehension and fear about negative interactions with health services were cited as a barrier13,19-21,25,40,51,54,56-59,61,63-65. Future engagement with services is informed by previous negative experiences and, as a result, trust in health services requires rebuilding8,9,57.

Lack of available, appropriate services: A lack of local, appropriate services emerged as a substantial barrier with implications for the wellbeing of rural LGBTIQ+ communities7-9,12,13,21-23,25,26,29,34,39,40,43,45-47,49-51,54,56-58,60,61,63-67. A lack of local, affirming or, at a minimum, non-stigmatising providers of PrEP emerged as a critical issue in this review13,29,39; for example, lower urbanicity was strongly associated with increased odds of PrEP desert status (ie living in an area without accessible PrEP providers) for MSM39.

Financial and practical considerations: Where appropriate services are not locally available, the financial and practical considerations associated with travel (eg cost and logistics) impede access to care for LGBTIQ+ people9,12,13,25,39,41,50,57,61,63,66,67. In addition, insufficient financial coverage and/or limited financial resources formed a barrier to accessing appropriate care8,9,13,19,40,54,63,64. This included, for example, whether insurance covers specialist consultations via telehealth (delivery of care via telephone, videoconference and other internet-based platforms)63. Further, limited internet coverage in rural areas posed a barrier to accessing internet-based mental health services42,43,45,47,63.

Challenges for providers: Deficits in relevant education, training and support mean that providers are underequipped to provide quality care31,38,41-44. As an example, only 54.87% of primary healthcare providers in a US sample reported receiving education specific to LGBTIQ+ health during their professional degree program38, with a similar proportion of professionals in another sample indicating that they felt competent to provide LGBTIQ+ patient care44. Fewer appropriate local services can place a burden on providers9, who may be professionally isolated61, have long waiting lists8 and few appropriate referral options9, and risk burnout9; this increases demand on already stretched services to support LGBTIQ+ communities, particularly during high-need periods such as that created by the COVID-19 pandemic58.

Theme 4. Facilitators: Within this theme, education, training and support; the provider approach to care; resources and new models; and the role of support networks and community are identified and described as facilitators of health and health care.

Education, training, and support: Education, training, and support help providers deliver quality care to rural LGBTIQ+ communities7,9,19,21,31,38-44,50,53,57-59,65,67.
Providers in several studies welcomed opportunities to learn more about LGBTIQ+ needs7,8,31,38,42,43, and some reported educating themselves8,9. The importance of ongoing learning, support, training, and connections with other helpful providers in the community was reported19,31,42,43.

Provider approach to care: Cultural competency and providers' willingness to learn underpin quality care9,12,13,22,25,42,43,50,53,54,56,65. An explicit commitment to inclusive and affirmative care, which can take the form of visual signage as well as being enacted in the language and behaviour of providers, is also valuable9,12,19,20,30,34,41,44,51-53,57,58,62,63,65. These commitments may be especially important where community members are fearful or apprehensive about services, and could aid disclosure12,62. In addition to avoiding the reproduction of problematic heterosexist and cisgenderist assumptions and practices, participants reported the importance of holistic care, which promoted autonomy and helped community members to connect with support systems where desired8,9,13,19,20,22,42,52,59,65.

Resources and new models: Models to enhance care were suggested: embedding a specialist within primary care practices8,9, creating pathways to streamline and regulate the assessment process for access to transition-related therapies7,60, and using peer advocates as paraprofessionals31,42,43. Telehealth could play a useful role by extending the reach of services, supporting anonymity where desired, and combating isolation7,13,19,22,26,29,31,39,41,45,56-59,61,63,67. However, caution was urged to ensure that these services complement, rather than replace, face-to-face services45,56.

Role of support networks and community: Support from those closest to rural LGBTIQ+ people, as well as from social networks and the broader community, was considered important for wellbeing7-9,21,22,25,26,29,30,34,37,40,43,45-48,50,54,56,58-60,65, as poor community support can affect mental health and help-seeking29,34,48. The development of support systems, including those involving family, friends and social networks, was advocated7,9,47,54, and providers can play a role in facilitating these connections8,9,19-23,31,34,42,43,58,59,67.

We systematically reviewed recent evidence from peer-reviewed and grey literature to advance understanding of the wellbeing and healthcare experiences of rural LGBTIQ+ communities. Cumulatively, data from the USA, Australia, the UK, and Canada are reported, encompassing a range of primary care and specialist services, with representation of the experiences and views of rural LGBTIQ+ communities as well as service providers. Overall, the included literature was deemed to be of good quality. The first aim of the review was to examine the health of rural LGBTIQ+ communities. Consistent with previous findings10, rural LGBTIQ+ people face many of the health challenges experienced in the wider LGBTIQ+ community1-4. These challenges included the impact of co-morbid and chronic conditions41,46,47, challenges associated with substance use23,24,36,41, and managing HIV-related stigma and living with HIV in rural contexts30,46. Poor mental wellbeing, including experiences of depression, anxiety and elevated psychological distress, and its interconnections with comfort in disclosing sexual identity, isolation and loneliness, as well as deficits in community support, was noted26,34,37,40,48,58,59,61.
A chief concern is that in rural communities there may be a less visible presence of LGBTIQ+ networks, as well as more identity concealment48, which, in turn, can limit access to support, including that from LGBTIQ+ peers47,48. In examining healthcare experiences as the second aim of the review, suboptimal preventative healthcare access and practices were identified across general, sexual and reproductive health domains13,21,29,33,41, which presents a pressing concern for the wider LGBTIQ+ community68,69. A lack of locally available, appropriate services, including specialist services, was observed and, even where such services were present, rural LGBTIQ+ populations did not necessarily receive the full range of care needed13,21. Studies describing the experiences of LGBTIQ+ elders illustrated the interaction between rural contexts and wellbeing; elders encounter fewer LGBTIQ+-friendly provider choices and become socially isolated as physical mobility and, potentially, independence decline25,46,50,54,57. While there has been rapid growth in LGBTIQ+ ageing research, significant deficits remain70, and the development of policy and procedures to guide care, as well as aged-care provider training, are sorely needed71,72. Congruent with best practice guidelines73,74, good-quality care was represented in the review as culturally competent, inclusive, confidential and affirmative, with a whole-of-person approach. In contrast, poor-quality care included discrimination (eg refusal of services), stigma, and demeaning interactions, often enacted in language (eg misgendering) or practice (eg breaches of confidentiality). These findings suggest that experiences of poor-quality care remain significant, ongoing issues for rural LGBTIQ+ communities. Consistent with Rosenkrantz et al10, identity disclosure could negatively affect healthcare interactions; in the current findings, LGBTIQ+ people's assessments of identity relevance, and the ways disclosure may enable screening of inappropriate providers, capture the complexity and range of determinations inherent in clinical interactions. Greater attention to rural populations in the dedicated study of disclosure practices is vital75,76 to supporting the communities negotiating these interactions. Many of the well-established barriers to wellbeing and health care are also reported for rural communities. Synthesis showed the way in which a lack of local service availability in rural areas is compounded by logistical and practical challenges. In light of this, access improvement initiatives ought to account for these considerations, including any necessary travel and limited internet coverage. Findings that captured service provider perspectives underscore deficits in provider education and training, and alert us to the need to address the pressures of high demand and the risk of professional isolation faced in rural areas8,9,58,61. The provision of education, training and support that is ongoing and connected with professional networks may be a means to mitigate some aspects of this barrier, including for paraprofessionals such as those involved with peer support programs. Engaging in the types of high-quality care described previously is regarded as a facilitator of care. A visible, explicit commitment to these practices would help affirmative services be more easily recognised by communities. The need to engage with new models of care, such as those that embed expertise and streamline processes, was evident7-9,31,42,43.
An important insight concerned the potential of telehealth, especially internet-based approaches, to ease some of the burdens associated with a lack of local services. The careful integration of such services, enacted to complement in-person care and with rural LGBTIQ+ community input and needs at the heart of their development, is recommended45,56. It was not surprising, given existing understanding77-79, that the development of support systems including family, friends and social networks was viewed as key to wellbeing, and, while this may be challenging in rural contexts, providers can play a critical role in facilitating these connections8,9,19-23,31,34,43,44,58,59,67. While the abbreviation LGBTIQ+ is commonly used in the literature, the term does not capture the full diversity and range of identities and practices of the communities discussed in this review, who have nuanced and individualised experiences of health and health care. Further, this framing holds assumptions that may be unhelpful or problematic for certain groups, including people who are intersex80. With the exception of one article49, findings concerning the experiences of rural intersex community members were from the grey literature, which indicates the need for in-depth consideration of the experiences of this population, employing a more relevant and sophisticated strategy. As such, the findings should be interpreted as indicating the range of possible shared issues and experiences encountered collectively. The search was restricted to Canada, Australia, New Zealand, and the UK and included only English-language records. It is, therefore, possible that relevant studies may have been overlooked. The inclusion/exclusion criteria dictated that studies be excluded where it was not possible to distinguish between results concerning LGBTIQ+ communities and non-LGBTIQ+ communities; therefore, several studies that appeared in the previous review were excluded from the synthesis of the current review. Finally, the evidence encompasses a variety of health service contexts and, therefore, the general concerns about de-contextualisation in systematic reviews are relevant here, and findings should be interpreted accordingly. This review reinforces that many aspects of the health and healthcare experiences of LGBTIQ+ people are not unique to the USA, and it has served to provide further evidence and to extend what is understood about these experiences. These findings indicate directions for future research efforts, including advancing evidence to guide policy and practice for aged-care services in rural areas; investment in strategies to support rural providers; and the design, trialling and evaluation of tailored models of care that account for rural barriers and harness existing capacities. This research has been funded by the Department of Communities Tasmania's LGBTIQ+ Grants Program 2020.
Running Head: THE HISTORY OF SOCIAL SCIENCES

The History of Social Sciences: Senior Seminar Project
Rebecca Pottle

In order to provide a historical view of the social sciences, it is critical to include a definition of just what exactly social science is. Social science is a somewhat complex field, in that it encompasses several sub-fields, or sub-branches if you will. The simplest definition is the study of human society and of individual relationships in and to society. It can also be defined as a scientific discipline that deals with such study, generally regarded as including sociology, psychology, anthropology, economics, political science and history (1). My definition of social science, although not supported by all in the field, is an interconnectedness of specific aspects of various components in society. For example, if one were seeking answers to the current economic issues in the United States, one must first look at all the elements contributing to the poor economy, instead of narrowing it down to one area, such as a lack of employment options. There are several layers involved in the social sciences, just as there are in the example of the economic situation the United States is dealing with. A social scientist studies all aspects of society, from past events and achievements to human behavior and relationships among groups. The research is used to provide insight into the different ways individuals, institutions and groups make decisions, respond to change and exercise power. The studies and analyses drawn by social scientists provide possible solutions to social, economic, business, governmental, environmental and even personal problems. Social scientists have certainly earned the reputation of being problem solvers! Although there is controversy within the sciences as to whether or not social science is a "true" science, history has proven that social scientists are a unique, necessary breed of problem solvers. So why does one seek higher education opportunities in the social science field? For some, such as myself, it is an accidental discovery of a strong interest in the subject. A student may be well aware of an inner drive to help others but not entirely clear on how to channel that desire into a specific degree. For me, the journey into a social science degree was ignited by my first course in sociology. Most agree it is much easier to grasp the theories and information about a certain topic if the student holds an interest in the subject matter. Due to my limited knowledge of the social sciences upon my entry into college, I was unable to identify this as a degree option. Was my lack of knowledge of a degree available in the social sciences related to the controversy around social science being a "true" science? Unfortunately I cannot say for certain why I was not offered this as a career choice during my pre-entrance career guidance, but I do believe that much more information about this field needs to be disseminated to graduating seniors. Perhaps when the social sciences are deemed as important and "scientific" as the natural sciences, information about this field will be spread more widely. As stated earlier, there is a bit of controversy regarding the validity of labeling social science as a science. At the base of the disagreement lies the definition of science itself.
Some scientists, who rely on more concrete laws and measurement tools, doubt the validity of the social sciences as a whole. The two sides debate whether social science can properly be termed a science: physical scientists argue that social scientists draw analyses that can be subjective and can manipulate the research design to sway the outcome, in a way that a physical science such as biology cannot. The social sciences are often criticized as being less scientific than the natural sciences, as they are believed to be less rigorous or empirical in their methods. Typically, the main criticism of the social sciences is that they are largely observational and their explanations of cause-effect relationships are largely subjective. Interestingly, in ancient times no distinction, or splitting of the sciences, existed between mathematics, history, poetry or politics. The development of the mathematical proof resulted in a gradual rise in perceived differences between "scientific" disciplines and others. The use of field research is perhaps the strongest similarity between social science and the various other "hard" sciences. Social scientists, in an attempt to draw analyses about a certain phenomenon, do just as the words suggest: they study within the field. The field experiment can be done in a laboratory setting, but because the subject matter is behaviors and relationships, it is extremely important to keep the setting as natural as possible, so that subjects act as they would within their regular, natural setting. Social science theories did not originate entirely in the United States. In fact, Muslims made the first contributions to the social sciences in Islamic civilizations. Often termed the "first anthropologist", Al-Biruni (973-1048) developed comparative studies on the anthropology of peoples, cultures and religions in the Middle East, South Asia and the Mediterranean. His success with the anthropology of religion was made possible by deeply immersing himself in the lore of various nations. This type of field practice laid the basic guidelines for future anthropological study of various cultures: to best study and report findings, it is critical to immerse oneself in the particular culture or field of research. There are many historians who can be credited with the development of the social sciences. Probably the most influential, often termed "the father of social science", or of sociology, was the French philosopher Auguste Comte (1798-1857). He introduced the term sociology in 1838, defining it as the scientific study of society. Comte introduced the theory of positivism. Comte's positivism held that a society historically undergoes three phases in its quest for the truth, also termed "The Law of Three Stages". The three phases, in this particular order, are the theological, the metaphysical and the scientific or positive; each is an attempt at understanding the nature of life, and all societies developmentally progress through them. The theological phase is characterized by a whole-hearted belief in all areas and things as they relate to God, or God's will. This phase deals with humankind's acceptance of the doctrines of the church rather than searching for alternative explanations of humankind's existence. The restrictions put on society by the various religious organizations were accepted as facts.
The theological phase, according to Comte, predominated prior to the Enlightenment era. Man believed nature to be mythically conceived and sought the explanation of natural phenomena from supernatural beings. The metaphysical phase is derived from the Greek meta ta physika ("after the things of nature"). It is within this phase that explanations are sought for inherent or universal elements of reality that are not easily discovered or experienced in everyday life. This phase instills a belief in the importance of universal human rights: human beings are born with certain rights that must first and foremost be protected. Comte recognized the development of this stage during the rise and fall of democracies and dictators in attempts to protect the innate rights of humanity. Much of the philosophy of this stage is present in modern times; mankind still places a strong value on the rights of individuals. Comte places the beginning of this phase at around the time of the Enlightenment, in which logical rationalism was at its peak, and its continuation until after the French Revolution. This stage has also been considered the stage of investigation, in which people started reasoning and questioning, quite different from the previous theological phase, in which such questioning would not be considered. The final stage, as mentioned earlier, is the positive stage, sometimes referred to as the scientific stage. It is in this stage that the foundation of the social sciences was laid. Comte argued that a society needs scientific knowledge based on facts and evidence to solve various social problems, instead of the speculation and superstition which reigned in the previous two stages of social development. The central idea of this phase is that individual rights are more important than the rule of any one person. Comte believed that humanity is able to govern itself, which makes this stage so different from the rest. Comte believed that appreciation of the past and progression through the phases are necessary steps in the transition from the theological, through the metaphysical, to positivism. Positivism holds firmly that authentic knowledge can be received only through actual sense experience and that the affirmation of theories must come through strict scientific methods. The progression through the phases is due to the development of the human mind, the increased application of thought, and the application of reason and logic to the understanding of the world. Much of Comte's philosophy provided extreme value to the social sciences as they have come to be known today. Perhaps the most thought-provoking theory, in my opinion, was his belief in valuing historical discoveries and knowledge when seeking answers to current issues. He believed that sociology would "lead to the historical consideration of every science" and that, in his opinion, "the history of one science, including pure political history, would make no sense unless it were attached to the study of the general progress of all humanity." Much of this type of thinking can be likened to the later-developed theory of functionalism. He believed in an interconnectedness of different social elements, which is where I position my own beliefs. Although Comte had much to offer the future of the social sciences, he didn't win the minds and opinions of all he tried to influence. To some, his theory of positivism was contradicted by his own behavior.
Again, this stage is described as an evolution from the theological stage, yet Comte in his later work attempted to elevate positivism into a type of religion and in fact named himself the "Pope of Positivism". This behavior contradicted his entire theory of progression through the phases. Throughout history, and still today, attempts to fuse science and religion of any type have met consistent rejection. It was Comte's hope that the role of sociologists would involve developing a base of scientific social knowledge and that this knowledge would guide society in a positive direction. Comte consistently tried to demonstrate that each science is necessarily dependent on the previous science. He believed that each science developed by a logic applicable to that particular science, and that subsequent knowledge could only be revealed by the historical study of that science. The sciences themselves are classified on the basis of increasing complexity and decreasing generality of application, in the ascending order: mathematics, astronomy, physics, chemistry, biology, and sociology. Each science depends at least in part on the science preceding it; hence all contribute to sociology (a term that Comte himself coined). A definite social reformer, Comte felt that a sociology developed by the methods of positivism could achieve harmony and well-being for mankind. His theory was ignited by a passionate hope that a society might develop in which individuals and nations could live in harmony and comfort. He theorized that with the continued development of the science of sociology, a superior stage of civilization could exist. Perhaps Comte states it best with the following statements: "Here, then, is the great, but evidently the only, gap that has to be filled in order to finish the construction of positive philosophy. Now that the human mind has founded celestial physics, terrestrial physics (mechanical and chemical), and organic physics (vegetable and animal), it only remains to complete the system of observational sciences by the foundation of social physics [sociology]. We need a new class of properly-trained scientists, who, instead of devoting themselves to the special study of any particular branch of natural philosophy, shall employ themselves solely in the consideration of the different positive sciences in their present state." So very much of Comte's thinking can be considered a contribution to the social sciences of today. His research methods and his emphasis on a quantitative, mathematical basis for decision-making are present with us today. His cyclical method between theory and practice is utilized regularly not only in the social sciences but also within certain business practices such as total quality management. I would argue that Comte's viewpoints provided an opportunity for mankind to view the world they live in much differently than they would have prior to his contributions. His different views opened the minds of many and paved the way for the future advancement of mankind, to say the least. His thoughts opened up the imaginations and possibilities of many future scientists. Another famous figure in the history of social science, also coined a "father of sociology", was Emile Durkheim (1858-1917). Durkheim's work followed many of Comte's goals in establishing a science of society. His theory argued that in order to make a scientific study of a society, one must think about society and its parts as real, and social facts as things.
To Durkheim, society is genuinely real, not something that merely emerges from individuals interacting. This philosophy received much criticism; those who challenged his thinking, like Max Weber, saw individuals as real and society as an abstraction that explains the relations of those individuals. Durkheim termed individuals in a society social actors; when they interacted, social systems came into being that had properties that could not be reduced to the characteristics of those individuals. His focus was not on what motivated the actions of individuals but rather on studying social facts. He argued that social facts have an independent existence greater and more objective than the actions of individuals, and thus they could only be explained with the use of other social facts rather than by focusing on what motivates individual action. Durkheim founded the first European department of sociology, at the University of Bordeaux, and established L'Année Sociologique in 1896 as a means of communicating his research and that of his students. It was the work of this journal that helped establish sociology within academia, and sociology became an accepted social science. This method of sharing research findings became common practice and an expectation within the social sciences, and it remains a critical component of them. Throughout his lifetime, Durkheim gave numerous lectures and published multiple studies on subjects like crime, religion, suicide and education. It is not an easy feat to narrow down to one area in which Durkheim's influence complemented the social sciences. His theory on the purpose of education in society was certainly an interesting perspective worthy of consideration. He utilized his profession as a teacher of teachers to integrate his sociological beliefs into curricula. Durkheim believed that education in a society served multiple functions. The first function of education is to reinforce social solidarity. Sharing history, including the accomplishments of individuals who have done good things for many, leads individuals to sense their own insignificance relative to society, while the practice of pledging allegiance makes individuals feel part of a group and less likely to break social rules. The second function of education is to maintain and foster specific social roles. Durkheim viewed school as a miniature replica of society, with a similar hierarchy, rules and expectations, whose main goal is to train young people to fulfill certain roles in society. The third function of education is to maintain the division of labour. Schools sort students into various skill-set groups and encourage those students to take up employment in the fields for which their abilities best suit them. This philosophy is certainly deserving of consideration; when one views the current schooling system, it would be hard to deny that education fills, at least in part, some of the functions Durkheim pointed out. In his 1893 publication The Division of Labour in Society, Emile Durkheim attempted to describe the transition from primitive societies to advanced industrial societies. Durkheim argued that social order is maintained because people act and think collectively alike, with a common conscience that maintains that social order. Durkheim theorized that the type of social solidarity is dependent on the type of society.
In smaller or more primitive societies, mechanical solidarity prevails: cohesion is derived from the homogeneity of the society's individuals, as people feel connected through similar work, religion, lifestyles and educational opportunities. In modern industrial societies, organic solidarity arises. A type of interdependent, specialized labor develops, with forms of work that are complementary among members. Although individuals perform different tasks, and different values are placed on the various tasks, solidarity depends on society's members relying on one another to perform their specific tasks. Durkheim worried that the transition from primitive to advanced industrial society would cause disorder, anomie and crisis. He firmly believed that moral regulation was necessary, along with economic regulation, to maintain social order. According to Durkheim, crime is a normal part of all societies and is present in societies of every type. He further argues that it is a societal necessity that allows members of a society, through the chastising of those who violate the law, to reaffirm their social values; this process develops the collective conscience and strengthens social solidarity. He believed that crime was necessary for a society to evolve and maintain itself, and he described a society without crime as being in a state of anomie, or normlessness. Although Durkheim believed society causes crime, he held that societies should not try to eradicate crime, as he considered crime to be as important as conformity. This view of crime has historically earned much criticism but is certainly worthy of consideration. One could go on and on providing brief descriptions of the various contributors to the social sciences, but at this time I feel it is important to shift the focus of this research to the United States. The first course in sociology, entitled "Elements of Sociology", was taught by Frank Blackmar at the University of Kansas in 1890; it is the oldest continuing sociology course in the United States. The first department of sociology in the United States was formed at the University of Chicago in 1892. This department was founded by Albion Woodbury Small, who was born and raised in Maine. Prior to founding the sociology department at the University of Chicago, he taught at Colby College in Waterville, Maine. In 1895, Albion Woodbury Small established the American Journal of Sociology, the oldest scholarly journal dealing with sociology in the United States. Another Maine-born sociologist was Edward Cary Hayes (1868-1928), who joined, and became president of, the American Sociological Association. Hayes originally enrolled in the University of Chicago to study philosophy, but his interest was piqued by the topic of sociology. Hayes received instruction from Albion Small and is often viewed as one of the original pioneers who promoted the assimilation of sociology into the American educational system. Now that the social sciences have proven their academic worthiness, what does the future hold for social science? Comte's original hope that all sciences would acknowledge an evolution toward the social sciences has yet to come to complete fruition. Social science is divided into sub-branches, and thus the term social science is considered very broad, to say the least.
The use of scientific-method inquiries has provided some credibility to the social sciences, but the sub-branches such as anthropology, economics, linguistics and political science are often identified as the science rather than the broader term social science itself. The data provided by various scientific research methods, as they pertain to people, have been critical in solving various social problems and inquiries. The battle for the validity of the claim that society is an object of an organized body of knowledge, capable of being standardized and taught objectively using rules and methodology, is a continuous battle worth the social sciences' persistence. Social scientists have conformed or rearranged their methods to adjust to each era. Shortly after World War I, scientists were pushed to apply statistics and mathematical measurements to validate what was previously studied by observation alone. Perhaps one of the greatest assets of the social science field is the diverse open-mindedness that encompasses these professionals, which seems to be a recurring philosophy throughout history and is vital to the field's continuance. For many, the impact social scientists have on our society goes unnoticed. It would be hard for anyone to argue that certain human behaviors do not impact individuals at some level or another. I believe that when people are given certain news stories or research findings, they perhaps don't think of the professionals behind the findings. Some individuals, when they hear the word science, picture the stereotypical scientist in the white lab coat. So much of our lives is influenced by the work of social scientists of the past and present, from our educational system, to parenting, to health care issues and our spending habits. Historically, the findings of social scientists have transformed our world, and they will continue to complement our knowledge base as we move into the future. So you may ask: where would we be today if historians like Comte and Durkheim hadn't believed that there was a need for the social sciences? To answer this question, it is important to look at what is considered measurable contributions. If you are a female reading this article, have you ever thought about how women were treated throughout history in comparison to their role in society today? Without the efforts of social organizations like NOW and various other organizations aimed at providing equal opportunities for women, the feasibility of a female president would never exist. Social scientists, especially those in women's studies, are often considered activists. Not all social scientists are activists in the complete sense of the word, as feminist social scientists are thought to be, but they all have one common thread: to observe a situation and provide insight into its impact on individuals and society as a whole. Much of the current work of social scientists focuses on analyzing the ripple effect human behavior has on nearly everything, from economics, to educational pursuits, to the environment. Recent technological advances have created significant changes in some aspects of the work social scientists undertake. Social scientists developed the process of peer review. The purpose of peer review is for the scientist to share findings with other professionals and receive feedback. The feedback the scientist receives often helps clarify any confusing content and has often led to future research topics.
With the invention of the World Wide Web, social scientists have more access to knowledge than they would have ever dreamed of a little over fifty years ago. As our society has evolved into an interactive, online type of society, so too have the social scientists. In fact, new types of social science have specialized in the social study of technology. Interestingly, the internet itself has been termed a type of society, or virtual society if you will. The internet has created new forms of social interaction and social organization, largely because of its widespread usability and access. Much regular social interaction is now reserved for online social networking. Certainly, internet domains like Facebook and Myspace have created a new method for regularly connecting and interacting with others. Social science has maintained, and perhaps proven increased, importance with the explosion of the internet era. Social scientists are key players in understanding the impact the internet has on our society. In fact, specific fields have developed as a result of the invention of the internet and its impact on our society. A fairly new sub-field, Science and Technology Studies, has developed in response to the need to understand technology and society. The focus of this field of study is how political, social and cultural values affect scientific research and technological innovation, and how these in turn affect society, politics and culture. Scholars in this field often gravitate toward this subject out of a firm belief that science and technology are socially embedded constructs. Would Comte be surprised by our era's technological breakthroughs? One cannot say for certain, but in my opinion he would argue that those behind the invention of the computer and the explosion of the internet, like those in many other sciences, could find value in consulting social science scholars to understand the human or social impact of this innovation. To conclude, just as so many of the other sciences have had to evolve to meet current societal needs, so too have the social sciences. With newfound diseases such as AIDS, doctors have had to increase their knowledge of the disease to meet the current large numbers of AIDS-infected patients. What would life be like without doctors to help these critically ill patients? One would not want to even begin to think about the poor quality of life we would have without current medical breakthroughs. One would also not like to dream of what the world would be like without social scientists. Just as doctors are vital to the biological sciences, so too are scientists in the social science fields. Human beings are so much more than a physical composition, and social scientists are a necessary component of understanding the social complexities of human beings. It is nearly impossible to list all the contributions social scientists have made to mankind. If you are still doubting the value a social science education can provide to society, think about the following questions. Police are educated in how to solve specific crimes, but they can thank the social scientist for understanding what might be contributing to a higher level of crime. Accountants are educated in acceptable accounting methods, but they can thank social scientists for helping them understand what is driving a certain cost up.
Physicians are educated in how to treat sexually transmitted diseases, yet they too can thank the social scientists for providing insight into the causes of higher levels of unprotected sex. The point I am trying to make is that in nearly every profession known to man, the social component of that profession has come to rely on the knowledge social scientists have gathered in their field to complement the other profession. Each field has its own specific strengths, and nearly all aspects of our lives can benefit from the use of social science research and findings.

References
1. Retrieved on January 28, 2009: http://www.wpunj.edu/cohss/philosophy/LOVERS/19th.htm
2. Retrieved on February 13, 2009: http://www.sociosite.net/topics/culture.php#WEBCULTURE
3. Retrieved on February 13, 2009: http://www.mdx.ac.uk/WWW/STUDY/lecSHE.htm
4. Retrieved on February 18, 2009: http://en.wikipedia.org/wiki/Social_sciences
5. Retrieved on February 18, 2009: http://www.answers.com/topic/social-sciences
Dr. Khomdon Lisam

Maharaja of Manipur was invited to Shillong: Mr. Prakasa, Governor of Assam, invited the Maharaja of Manipur, Bodhchandra, to Shillong in September 1949 for talks, as per the wishes of the Maharaja. The Maharaja, having full trust in the relationship with India, arrived in Shillong on September 17, 1949, accompanied by his ADC, his Private Secretary and a few household staff members, along with some bodyguards.

Forced attempts to get the Merger Agreement signed by Maharaja Bodhchandra: On the first day of the meeting, on 18 September 1949, the Assam Governor straight away placed before the Maharaja an already prepared 'Merger Agreement' whereby Manipur would be 'merged' with India, and asked him to sign it. The Maharaja had given in writing to the Governor of Assam: "I am merely a Constitutional Head of a fully responsible Government under the Constitution Act, 1947 approved by the Government of India (British India) and the voice of the Majority is my voice and it shall be constitutionally and legally binding on me not otherwise." Knowing the Maharaja's firm stand, Prakasa did not pursue the matter further that day. The Maharaja, on returning to the Redlands residence where he was staying, found Indian Army personnel surrounding the compound of his premises. The house arrest had begun, as pre-planned. While under house arrest, the Maharaja was not allowed any communication with the outside world, let alone with Manipur. When Prakasa ventured to suggest to Sardar Patel that the Maharaja might not agree to sign the merger document, Patel, who was by then seriously ill, demanded, "No Brigadier in Shillong?" Thus Sardar Patel, India's 'Iron Man', had given the green signal to use force should it become necessary, in this land of the non-violence of Mahatma Gandhi. Prakasa was firm in his insistence that the Maharaja sign the 'agreement' before going back to Manipur. Thus, after resisting for three restless days and sleepless nights, the Maharaja could see no escape. Ultimately, he signed the treacherous 'Merger Agreement' in a state of helplessness, while still under house arrest, on 21 September 1949. Under the terms of the 'agreement', Manipur came under Indian rule from 15 October 1949. The Government of India thus overthrew the Maharaja and occupied Manipur, which became part of India. The signing of the Merger Agreement on 21 September 1949 was therefore accomplished by deceit and forceful tactics, contrary to international law. Even after signing the Instrument of Accession, Manipur did not lose her sovereignty, as the Union Government was to look after only defence, external affairs and communications. The signing of the Manipur Merger Agreement was therefore between a sovereign state called Manipur and the Government of India; it should therefore have been free from coercion, force or undue pressure.

Post-Merger Political Status of Manipur

Manipur State Assembly rejected the Merger Agreement: The 4th sitting of the 3rd session of the Manipur State Assembly, held at the Johnston School on 28th September 1949 at 2.30 p.m., rejected the Merger Agreement signed on 21st September 1949 and declared it invalid, as the powers and authorities of the Maharaja had been vested in the Manipur State Assembly. The excerpt of the Assembly proceedings was published in the Manipur State Gazette, Part IV, dated 14 October 1949. Mr. T.C. Tiankham, Speaker, Mr. M.
K. Priyobarta Singh, Chief Minister, and 6 other Ministers and 43 Hon'ble members were present and adopted the resolution. Copies of the declaration, signed by P.B. Singh, Chief Minister, T.C. Tiankham, Speaker, and Arambam Ibungotomcha Singh, Minister of Finance and Foreign Affairs, were sent to the Government of India. There has been no reply from the Government of India on this issue in the last 66 years. It is said that the Kuki Chiefs were greatly disheartened to hear the news and sent 250 armed warriors to protect the king from any possible attack.

The dissolution of the Manipur Assembly was in violation of the Independence Act, 1947: The Indian Independence Act, 1947, para 9(5) states that "No order shall be made under this section, by the Governor of any Province, after the appointed day, or, by the Governor-General, after the thirty-first day of March, nineteen hundred and forty-eight (31 March, 1948), or such earlier date as may be determined, in the case of either Dominion, by any law of the Legislature of that Dominion." However, violating the provisions of the Indian Independence Act, 1947, para 9(5), Shri C. Rajagopalachari, Governor-General of India, issued an order on 15-10-1949 declaring that 'the Ministers' in Manipur State shall cease to function and 'the Legislature' of the State shall stand 'dissolved', citing Sections 3 and 4 of the Extra-Provincial Jurisdiction Act, 1947 (Act XLVII of 1947). This was in violation of the Independence Act, 1947.

The Manipur Constitution Act, 1947 was never repealed: Once Manipur became part of India, the Government of India dissolved the State's Constitution Assembly in October 1949 without repealing the Manipur Constitution Act, 1947. In another blunder, the Government of India placed Manipur as a Part C state. This was considered a disgrace to the state and the people of Manipur. Further, it was degraded to the status of a Union Territory from 1956 onwards. In 1972, Manipur was elevated to the status of a state after a long and protracted nonviolent, peaceful struggle.

Imposition of the Indian Constitution without representation: No Manipuri was included in the Constituent Assembly formed in 1946; Tripura cannot represent Manipur. No Manipuri participated in the deliberations of the Constituent Assembly on 26 November 1948. No referendum was conducted in Manipur regarding the introduction of the Indian Constitution in Manipur or the merger of Manipur with the Indian dominion. Rather, the Constitution of India was imposed on Manipur with the forced annexation of Manipur.

Nagaland was raised from a village republic to statehood in 1963: Nagaland was raised from a village republic to statehood on 1 December 1963 as part of the appeasement policy of the Government of India towards the Naga underground movement and its violent struggle. Manipuris took it as a gross insult to the state and the people of Manipur, perpetrated by the Government of India.
States Merger (Chief Commissioners' Provinces) Order, 1950 was "ultra vires" and "null and void": As per the Notification issued by the Government of India, Ministry of Law, dated the 22nd January 1950, paras 1(1), (2) and 2(1), the States Merger (Chief Commissioners' Provinces) Order, 1950 was to come into force only with effect from the 23rd January 1950, and therefore the State of Manipur should have come under the administration of a Chief Commissioner only from that date onwards. As such, the orders issued by the Ministry of States, New Delhi, dated the 15th October 1949, hurriedly merging Manipur with the Dominion of India, and the order issued on 15th October 1949 by the Chief Commissioner, Major General Rawal Amar Singh, who was appointed on the same day, ceasing the functioning of the Ministers of the State and dissolving the Manipur Legislative Assembly, were clearly "ultra vires" and "null and void", ie they were illegal and invalid orders issued prior to having the authority to do so.

The forced merger of Manipur was in violation of international law: Maharaja Bodhchandra was made to sign under threat, as evidenced by history. Is it not a violation of international law to extract an agreement between two sovereign countries under threat?

India's silent war of population invasion against Manipur: In 1901, during the British period, Manipur devised a very effective system of controlling the entry of foreigners (non-Manipuris), called the Permit or Passport system. Indians coming from other parts of India were called foreigners in the terminology of the Manipur Administration. This Permit system was brought under the Foreigners Department on 1 November 1931. If foreigners wished to visit Manipur, they were required to take permission from the then Durbar and to pay a certain amount of taxes. This Permit or Passport system served two important purposes: (1) it controlled and regulated the influx of non-Manipuris, and (2) it formed an important item of revenue for the state. This Permit system was abolished by Mr. Himmat Singh, the then Chief Commissioner, on 18 November 1950. The said permit system had prohibited any foreigner from acquiring or purchasing land in Manipur. Although Manipur is 90% hills, the question of implementing the Bengal Eastern Frontier Regulation, 1873 did not arise, since Manipur was a princely state. The abolition of the permit/passport system in Manipur has caused immense damage to Manipuri society. Illegal migration from across the borders has continued unabated for more than 60 years. Today, we have more than 51,000 illegal Bangladeshis scattered all over the state, mainly in Borobekra, Serous and hundreds of other villages. We have foreigners like Bangladeshis, Burmese and Nepalese, as well as non-indigenous people from other states, in all major and small towns and most of the villages in the valley and hill districts, occupying our lands and buildings, snatching away our jobs and eroding our economy, affecting our day-to-day life and peaceful existence.
This has caused a slow transformation of our mongoloid features, identity and culture. According to the 2001 Census, the population of outside migrants in Manipur was 707,488, as against the tribal population of 670,782 (UCM, 2005). In Tripura, the indigenous population was 93% of the entire population in 1947; by 2001, it had been reduced to 22%. A similar trend is likely in Manipur within a short time. The possibility of silent demographic invasion will grow with the arrival of the railways and the Tipaimukh Dam, more recruitment of Nepalese and South Indians into the Manipur Rifles, IRB and Home Guards, and a large influx of military and paramilitary organisations. Even today, outsiders are found in all small towns and villages, including in the hill districts. The Khwairamband Bazar is practically controlled by outsiders. In addition, the Government of India is also encouraging foreigners like Bangladeshis and Myanmarese to migrate and settle in Manipur. Manipur's indigenous population is hardly 0.20% of India's population. What will happen to the indigenous population of Manipur if more than two lakh people from mainland India start migrating to Manipur every year? This is what has happened from 1951 till date. Naturally, the indigenous Manipuris will become a minority in our own country within the next 20-30 years.

Emergence of insurgency in Manipur: The emergence of insurgency in Manipur is solely due to the wrong policies and blunders of the Government of India. Manipur is now the most violent theatre of conflict in the North East region of India. The series of blunders committed by the Government of India is the root cause of conflict between the people of Manipur and the Government of India. The emergence of insurgency in Manipur is formally traced to the founding of the United National Liberation Front (UNLF) on 24 November 1964, followed by the People's Liberation Army (PLA), founded on September 25, 1978, the People's Revolutionary Party of Kangleipak (PREPAK), set up on October 9, 1977, and the Kangleipak Communist Party (KCP), which came into being in April 1980. A report of the State Home Department in May 2005 indicated that 'as many as 12,650 cadres of different insurgent outfits with 8830 weapons are actively operating in the State'. The people feel that the birth of insurgency is the making of the Government of India.

India's racist attack on the people of the North East: The Government of India never recognizes Manipuris as being at par with mainland Indians. We face racial discrimination at the hands of the Government of India as well as at the hands of mainland Indians. Our North East Indian history has never been treated as the history of India; for many decades, various educational systems in India have forced us to learn mainland Indian history, such as that of Akbar and Aurangzeb. The Government is hell-bent on dividing Manipur along the lines of political parties, religion, ethnicity and race. Racial hatred has been stirred by mainland Indians. In the past, racial discrimination was mainly because of different castes and religions. Dalits were subjected to unimaginable discrimination, but no one ever questioned their "Indian-ness". This is not the case for north easterners. The racism faced by the north east at the hands of mainland India is of a different order. It is much more "in your face", because of their different racial appearance, different hair styles and different ethnicity (skin colour and looks, language, cultural differences and difficulty in pronouncing names).
The north easterners practically had to beg to be accepted as "Indians". The north easterners have come to realize that they are not Indians. Acts of racism and violence against north east Indians are not new. North East Indians are targeted everywhere, irrespective of their work or their location; they are targeted in every city, every town and every colony, be it Delhi, Chennai, Bhopal, Mumbai or Bangalore. The main driver of this insane mentality is that they look different from others and have distinct cultures and traditions. Looking at their physical features, mainland Indians, including political leaders, bureaucrats and landowners, presume that their IQ must be very low. They are deprived of jobs, promotions, social recognition, respect and human dignity. This is not surprising, considering the fact that mainland Indians themselves have indulged in the caste system, honour killings, sati, dowry deaths, female infanticide, etc. It is also known that India ranks among the least racially tolerant nations in various international surveys. We have to realize that we are not Indians.

Discriminatory provisions of Article 371C of the Indian Constitution: Article 371C of the Indian Constitution divides Manipur into Hill and Valley, although there are many hillocks in the valley and many valley areas in the hills; Manipur is more than 90% hills. It divides the same Manipuri people, descended from a common ancestor, into valley people and hill people, into Nagas and Meiteis, Kukis and Meiteis. It creates a false sense of superiority of the non-tribal over the tribal. There is no equality of people in Manipur, leading to hatred and conflict among the ethnic groups. It divides the same people into Scheduled Tribes and General, Scheduled Tribes and OBC. It has also led to the enactment of a discriminatory law: the Manipur Land Revenue and Land Reforms Act, 1960 came into force on 1st June 1961 vide Order No. 140/1/60(vi) dated 31 May 1961. The Act was passed by the Indian Parliament vide Bill No. 95-F of 1959 (Act No. 33 of 1960) to make a special provision for the protection of Scheduled Tribes under Section 180. This creates a sense of discrimination against the non-tribal, leading to disunity and distrust among the ethnic groups in Manipur. This Act encouraged the violent minority to violate the human rights of the sleeping majority. This Act also violates Article 19(e) of the Indian Constitution. The small land area in the valley and Moreh is always open to sale to outsiders, threatening the very existence of the Manipuri people. With the arrival of the railways and the Look East Policy, the future of the Meiteis and the hill people of Manipur is in jeopardy.

Declaration of Disturbed Area: Manipur was declared a 'disturbed area' in its entirety in 1980, and the Armed Forces Special Powers Act (AFSPA), 1958 was imposed in the State on 8 September 1980.

Armed Forces Special Powers Act (AFSPA), 1958: The British imperialists enacted the inhuman Armed Forces Special Powers Ordinance in 1942 to crush the Indian freedom movement. Democratic India enacted a much more heinous version, called the Armed Forces Special Powers Act (AFSPA), on 22 May 1958.
According to the Armed Forces Special Powers Act (AFSPA), in an area that is proclaimed as "disturbed", an officer of the armed forces has the power:

- to fire upon or use other kinds of force, even if it causes death, against any person acting in contravention of law or order in the disturbed area, for the maintenance of public order;
- to arrest without a warrant anyone who has committed a cognizable offence, or is reasonably suspected of having done so, and to use force if needed for the arrest;
- to destroy any arms dump, hide-out, prepared or fortified position or shelter, or training camp from which armed attacks are made by armed volunteers, armed gangs or absconders wanted for any offence;
- to enter and search any premises in order to make such arrests, or to recover any person wrongfully restrained or any arms, ammunition or explosive substances, and to seize them;
- to stop and search any vehicle or vessel reasonably suspected of carrying such persons or weapons.

Any person arrested and taken into custody under this Act shall be handed over to the officer in charge of the nearest police station with the least possible delay, together with a report of the circumstances occasioning the arrest. Army officers have legal immunity for their actions: persons acting in good faith under this Act are protected from prosecution, suit or other legal proceedings, except with the sanction of the Central Government. Nor is the government's judgment on why an area is found to be disturbed subject to judicial review.

The AFSPA is the mother of all black laws in India. When the Act was legislated in 1958 to "combat" insurgency, there was only the Naga movement in the north-east. Over the 57 years the Act has been in force, it has only contributed to the rise of more and more insurgent outfits in the region. While parliamentary democracy requires the army to be kept away from the tasks of internal policing and administration, the AFSPA virtually introduces military rule in a democratic garb. Manipur alone has witnessed a series of massacres and "disappearances". The Army itself says that till date it has had to punish 66 of its men in the north-east who were found guilty of excesses, even though it asserts that only 25 of the 451 complaints received were found valid in its internal scrutiny (Manipur Burning – Armed Forces (Special Powers) Act and Centre's Apathy – Krishna Singh T).

People are asking a simple question of the Hon'ble Prime Minister and the Hon'ble Home Minister of India: why do they not impose AFSPA-1958 in the Maoist/Naxalite-affected states of Bihar, Jharkhand, Orissa, West Bengal, Madhya Pradesh and Andhra Pradesh? In 2006, Prime Minister Manmohan Singh called the Naxalites the "single biggest internal security challenge ever faced by our country". In June 2011, he said, "Development is the master remedy to win over people", adding that the government was "strengthening the development works in the 60 Maoist-affected districts". Do they think that the lives of Manipuris are worthless? Immediately after the Malom massacre, a lady named Irom Sharmila began fasting, kept alive by nasal feeding for the last fifteen years, demanding the repeal of this draconian Act, the Armed Forces Special Powers Act-1958 (AFSPA), in this land of Gandhiji. The longest fast of Mahatma Gandhiji was 21 days.
Why was the whole of India shaken when Anna Hazare fasted for only four or five days, while there is a deafening silence in India when Irom Sharmila has fasted for 15 years? Is this the Indian style of democracy? The Government of India must be thinking that we Manipuris are expendable.

AFSPA and Unsolved Massacres in Manipur

One of the earlier massacres by the forces of the Indian Union was the Heirangoithong massacre on the outskirts of Imphal on 14 March 1984, at the Heirangoithong Volleyball Ground, following indiscriminate firing by CRPF personnel stationed nearby, resulting in the death of thirteen persons and injuring thirty-one.

The tragic incident that attracted international condemnation was "Operation Bluebird", a months-long protracted counterinsurgency operation in 30 villages in the northern Senapati District of Manipur between July and October 1987, conducted by the Assam Rifles. Fifteen villagers were killed and women were raped. The rebels had killed nine soldiers and escaped with 150 guns and 125,000 rounds of ammunition.

Tera Bazar Massacre, 25 March 1993: Unidentified youths shot at CRPF personnel at Tera Keithel, Imphal, killing 2 CRPF men. Thereafter, the CRPF personnel rushed out and fired indiscriminately. Five civilians were killed and many others received bullet injuries. However, no enquiry has been instituted to date.

On the morning of 7 January 1995, in another incident, nine civilians were extrajudicially executed by the CRPF in Imphal City within the campus of the Regional Medical College (RMC), now renamed the Regional Institute of Medical Sciences (RIMS). As soon as the firing took place, the CRPF personnel reportedly shouted "hamara admi ko mara, sab Manipuri ko maro".

Malom Massacre, 2 November 2000: An Assam Rifles convoy was attacked near Malom, Manipur by insurgents. In retaliation, the troops shot at civilians at a nearby bus stop, leaving 10 civilians dead, including a 60-year-old woman and a boy who had received a bravery award from former Prime Minister Rajiv Gandhi. A brutal combing operation followed. Irom Sharmila's fast-to-death began in the aftermath of this incident.

The massacre at Tabanglong in western Tamenglong District on 28 December 2000 resulted in the death of eight villagers when they were attacked by soldiers belonging to the 15th Jat Regiment, a military regiment of the Indian Army.

Several other massacres are the Oinam Leikai Massacre of 21 November 1980, the Ukhrul Massacre of 9 May 1995, the Bashikhong Massacre of 19 February 1995, the Churachandpur Massacre of 21 July 1999, the Nungleiban Massacre of 15 October 1997, the Tabokpikhong Massacre of 12 August 1997 and the Tonsen Lamkhai Massacre of 3 September 2000. These few examples of military massacres only serve to illustrate the fact that, for many years in Manipur, the armed forces of the Indian Union have been killing local civilians arbitrarily, using force inconsistent with the principles of absolute necessity and proportion, in violation of human rights.

Manipur's Representation in Indian Parliament

The number of Lok Sabha MPs for Manipur is not viable: the number of Lok Sabha seats for Manipur is only two, whereas for UP it is 80. No MP from Manipur will ever be able to become Prime Minister or President, even if he is Lord Krishna or Jesus Christ or the Prophet Muhammad. We find that for the small Anglo-Indian population of hardly two lakhs, Nehru had put the number of Lok Sabha MPs at two. Under the Indian Constitution as it stands, can any Manipuri become the Prime Minister of India in the next 500 years?
The number of Lok Sabha MPs for Manipur should be at least eight. There is also no balanced representation of states in the Rajya Sabha (Council of States). The number of Rajya Sabha seats for Manipur is only one, whereas for UP it is 31. The Rajya Sabha is the Council of States; there should be balanced and equal representation of states, big or small, as is done in many countries of the world, including the USA. The number of Rajya Sabha MPs for Manipur should be at least seven.

Possible Common Future and Peaceful Co-existence for All Ethnic Groups in Manipur

While in India, we run the risk of being reduced to a microscopic minority. While in India, we run the risk of losing our identity, our land, our language, our culture, our traditions, our resources. By following the Indian Constitution, no Manipuri will be able to become the Prime Minister of India in the next 500 years. Considering the proud history of Manipur as a sovereign kingdom for more than 2000 years, recognizing the series of blunders the Government of India has committed against Manipur during the last 66 years, recognizing the need for communal harmony and peaceful coexistence, and recognizing the various demands of the ethnic groups, the people of Manipur should assert that the Government of India grant "RESTORATION OF PRE-MERGER STATUS OF MANIPUR" without further delay. All ethnic groups should be united as one and assert for "Restoration of the pre-merger status of Manipur". We should have a common vision, common goal, common language and common strategy. When there is no vision, the people perish.

The PRE-MERGER POLITICAL STATUS OF MANIPUR may consist of the following components:

1. Separate Constitution
2. Separate Parliament
3. Separate Flag
4. Separate Prime Minister
5. Separate Supreme Court
6. Separate Administrative System
7. Separate Legal System
8. Separate Passport
9. Separate Currency
10. Joint Defense System
11. Joint External Affairs
12. Permanent Representative to the UN
13. Right to run Manipur embassies in countries of our choice
14. Special Central Budget
15. Right to conclude commercial business and trading agreements with foreign countries

Manipur may remain a part of India with shared sovereignty. The final shape of the pre-merger political status of Manipur may be decided after wide consultation among the various ethnic groups of Manipur and negotiation with the Government of India. Without restoration of the PRE-MERGER STATUS OF MANIPUR, fulfillment of any other demand will be meaningless. We will have our own country, with all the ethnic groups living in harmony and peace together, and compete with the best brains of the world. We can sit together and frame our own constitution as per the wishes of all ethnic groups. This will be possible only when we stand united. This will be possible only when all political parties, all civil society organizations, all pressure groups and all insurgent groups have a common vision, common language, common goal and common strategy, and initiate a people's mass movement with passion, commitment, dedication and sacrifice. This will be possible only when the young people lead the people's mass movement. All revolutions belong to the young. On you depends tomorrow. This will be possible only when we fight for it in a do-or-die situation. The time for action is TODAY; tomorrow may be too late.
<urn:uuid:de80e918-7f13-4132-b967-f34f33a1ca0d>
CC-MAIN-2022-33
https://manipurtimes.com/common-future-peaceful-co-existence-and-restoration-of-pre-merger-political-status-of-manipur-part-iii/?replytocom=5
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00095.warc.gz
en
0.956129
5,865
2.671875
3
Diagnosed as a medium-sized frog with an elongated body; back red; flanks and extremities black, occasionally with red spots. A plump frog with a blunt snout and a prominent sacral region. Body elongated and depressed. A very long neck, which allows it to move the head from side to side. Males measure 37–47.3 mm, females 41–62.4 mm. Distance snout tip–eye shorter than interorbital distance. Tympanum often indistinct, reaching 0.5–0.8 of the eye diameter. Males have a single subgular vocal sac. Comparatively short hind legs without webbing. Enlarged inner metatarsal tubercle of highly variable size, reaching 0.5–1.1 of the shortest toe length. Tips of fingers and toes enlarged to form round or triangular discs. A smooth glandular skin. A 44 mm female weighed 3.6 g; another, with 56 mm SVL, weighed 16.7 g. Males, 38–44 mm SVL, weighed 3.5–6.3 g. The size of one-year-old frogs ranged from 25 to 32 mm (weight: 1.4–3.5 g). Zug (1987) reports on a gravid female measuring 57 mm (SVL). Barbault gives average sizes of 45 mm (1974a) and 44 mm (1974b). According to Hermann (1989), males reach 45 mm, females 60 mm.

The red color of the back extends from the snout tip to the vent. It is often interrupted by a more or less sharply marked black vertebral line. The head in particular may be densely spotted with black. Some isolated red spots may be present on the otherwise black arms and legs. A black lateral band stretches from the snout to the vent. In the groin area this band diverges a short way towards the middle of the back. The throat in males is deep black, while in females it is dark brown to dark gray. The breast is usually dark with large white spots; the latter are also found in the groin area, where they are less distinct. The rest of the venter is white to drab gray. The red color is reported to turn silvery gray in dry or warm weather (Hermann 1989). My own observations have shown that this happens in other stress situations, too. For example, the frogs also changed their color when ants were present. Young animals are more slender than adult frogs. The back of metamorphosed froglets is brown, and a black vertebral line is always present. In alcohol the red color turns light gray to dark brown. The black parts turn dark brown.

Voice: The amplitude-modulated advertisement call of P. microps lasts 1.1 to 2.5 sec. All pulses last 0.02–0.03 sec. Their frequency ranges from 0.68 to 6.41 kHz. Three to four distinct harmonics are distinguishable. Grafe (1999) gives the dominant frequency as 1.26 kHz. The low melodic trill figured by Schiøtz (1964) is structurally similar. The calls of P. microps and P. bifasciatus, which were formerly considered synonymous, are compared in Van den Elzen & Kreulen (1979).

The white to yellowish eggs are deposited in small to very large floating egg masses. The diameter of the egg, incl. jelly, ranges from 5.6 to 9.0 mm, depending on the respective position of an egg within the mass. Eggs younger than stage 13 (Gosner 1960) measure 1.6–3.0 mm. A darker pole is exclusively found in recently deposited eggs which have not yet swollen. An egg mass consists of 30–1400 eggs, arranged in tangled strings. Different, distinct clutch sizes may be distinguished: large ones comprise 800–1400 eggs (N = 20); medium-sized ones 200–500 (N = 16); small ones 100–160 (N = 18); and tiny ones consist of approx. 60 eggs (N = 8). These egg masses either float free at the water surface or are attached to partly submerged vegetation.
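Because the reported clutch-size classes are discrete and non-overlapping, an observed egg count can be assigned to a class with a simple lookup. The short Python sketch below is purely illustrative and not part of the original account: the class boundaries are the counts reported above, and the tolerance around the "tiny" class of approx. 60 eggs is an assumption; counts falling into the gaps between classes are reported as unclassified.

```python
# Clutch-size classes for P. microps egg masses, taken from the counts
# reported in the text. The 50-70 range for "tiny" (approx. 60 eggs) is
# an assumed tolerance, not a figure from the source.
CLUTCH_CLASSES = [
    ("tiny", 50, 70),      # approx. 60 eggs (N = 8)
    ("small", 100, 160),   # N = 18
    ("medium", 200, 500),  # N = 16
    ("large", 800, 1400),  # N = 20
]

def classify_clutch(egg_count: int) -> str:
    """Return the reported size class for an egg mass, or 'unclassified'."""
    for name, low, high in CLUTCH_CLASSES:
        if low <= egg_count <= high:
            return name
    return "unclassified"

if __name__ == "__main__":
    for count in (60, 130, 350, 1200, 650):
        # 650 falls between the medium and large classes -> unclassified
        print(count, "->", classify_clutch(count))
```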
Smaller egg masses are more likely to be attached to vegetation in shallow water, whereas large ones are usually deposited in deeper water which lacks vegetation. Zug (1987) found pigmented eggs in a dissected female. Duellman & Trueb (1986) also report on Phrynomantis species laying pigmented eggs. The P. bifasciatus spawn figured in Wager (1986, p. 63, mistakenly in the Bufo rosei chapter) looks like that of P. microps. The Phrynobatrachus egg mass figured in Passmore & Carruthers (1995, p. 233) is more likely to belong to Phrynomantis, too. In South and East Africa Phrynomantis eggs are deposited in fairly large rounded clumps just under the surface, attached to vegetation. The eggs are light green but turn gray a day or two later (Pickersgill pers. comm.).

The tadpole of P. microps was first described by Lamotte (1964) from Lamto, Ivory Coast. Tadpoles hatch within 1.5–2 days. They measure approx. 5 mm TL and are almost white. They have external gills, a large yolk sac and two ventral adhesive discs. With these organs, the tadpoles stick to the jelly after hatching. They still lack an oral disc at this stage. Two vaguely defined black lines stretch from the eyes to the tail base. Four days later, all tadpoles swim in mid-water, showing the typical habitus of pelagic microhylid larvae. They are transparent and lack horny beaks and teeth. The density of the black and golden spots on the body varies from pond to pond, most likely depending on how muddy the water is. The tail fin may be transparent or have a black border. The tip of the tail is usually filamentous. The tadpoles found in muddy water are always rather light colored, and the color morphs described by Lamotte (1964) are more likely to result from different habitats than to represent geographic variations. Oval, air-filled bubbles are visible from dorsal view, beside the base of the tail. It is not known if these bubbles are specific structures or simply air-filled parts of the intestines. The unpaired spiracle opens at a ventromedian position, approximately on a level with the base of the tail. Tadpoles of stages 25–30 had the following size range (TL, mm) and weight (g): 17/0.08 – 23/0.16. Development of the hind legs is complete at a TL of approx. 30 mm. The forelegs emerge when the tadpoles measure 11 mm BL (TL: 21–30 mm). The froglets leave the water within 40 days, measuring approx. 10 mm. The tadpoles of P. bifasciatus and P. annectens described by Power (1926b) and Gradwell (1974) cannot be distinguished from those of P. microps. P. annectens is a suspension feeder, but unlike P. microps, it swims with its head pointing downward (Gradwell 1974).

Distribution and Habitat

Country distribution from AmphibiaWeb's database: Benin, Burkina Faso, Cameroon, Central African Republic, Chad, Democratic Republic of the Congo, Cote d'Ivoire, Gambia, Ghana, Guinea, Guinea-Bissau, Mali, Niger, Nigeria, Senegal, Sierra Leone, South Sudan, Togo.

Schiøtz (1967) assumes that P. microps is restricted to grassland and open savannas. In Ghana and Nigeria, he never found this species in dense tree savannas (Schiøtz 1964). Lamotte (1966) characterizes this frog as a savanna-dweller which locally penetrates into the rainforest belt. In Togo and Benin, its range touches the coast in the "Dahomey Gap" (Lamotte 1967). I found this species near Ananda, Ivory Coast, in a region still covered with rainforest in the mid-20th century. Nowadays the rainforest has mainly been replaced by farmland.
Apart from Hughes (1988), who also mentions coastal scrub thickets, P. microps is generally quoted as a savanna species occurring both in the Guinea savanna (Lamotte 1967, Walker 1968, Barbault 1972, 1974, Zug 1987) and in the Sudan savanna (Schiøtz 1967, Schätti 1986, Hughes 1988). It is not restricted to arid savannas, as reported by Lamotte & Xavier (1981). P. microps is found in flooded meadows, in ponds and swamps (Schiøtz 1967), under rotten wood (Schätti 1986), buried in savanna soils (Lamotte 1967), but also in arboreal habitats (e.g., in the buds and tree-tops of palms), where it occurs together with several Hyperolius and Afrixalus species (Lamotte 1967, Barbault 1972, 1974a, b). Poynton (1964a) notes that Phrynomantis species like to climb; however, he did not classify them as arboreal. Lamotte (1967) describes this frog as a more or less xerophilous species. Most often I found P. microps in subterranean cavities near open water, under rotten tree trunks in open savanna, on the edges of gallery forests and in forest islands.

Range: P. microps inhabits large parts of the West African and Central African savannas (Frost 1985). Schiøtz (1967) describes this frog as a West African species whose range extends to Cameroon. Further east, it is replaced by P. bifasciatus. According to Lamotte (1966), the range stretches from Senegal to Nigeria. In particular, P. microps has been recorded from the following countries: Senegal, Sierra Leone, Ivory Coast, Ghana, Burkina Faso, Mali, Benin, Togo, Nigeria, Cameroon, Central African Republic (Peters 1875, Loveridge 1930, Schiøtz 1963, 1964a, c, 1967, Lamotte 1966, 1967, Barbault 1967, 1972, 1974, Vuattoux 1968, Walker 1968, Euzet et al. 1969, Amiet 1973a, Miles et al. 1978, Schätti 1986, Zug 1987, Hughes 1988, Joger 1990, Rödel 1996, 1998b, Joger & Lambert 1997, Rödel & Linsenmair 1997). The Tanzanian P. microps mentioned in Loveridge (1930) has been confused with P. bifasciatus. The range of P. bifasciatus stretches from Somalia to Zaire and South Africa. The range of P. affinis extends from Zaire to northern Namibia and Zambia. P. annectens is reported to occur in Angola and north-western Cape Province, South Africa. Largen (1998) records P. somalicus (Scortecci 1941) from Ethiopia.

Life History, Abundance, Activity, and Special Behaviors

According to my personal observations, P. microps calls from cavities, termite hills and tufts of grass (compare Schiøtz 1964). After heavy rains the frogs begin to call at dusk. Amplectant pairs usually arrive at the breeding sites after midnight. Spawning may therefore last until the early morning. Males have no fixed calling site; they vocalize while migrating towards the ponds. In this respect, P. microps clearly differs from P. annectens, which is territorial. In the latter species the males even wrestle for calling territories (Channing 1974, 1975, 1976). Reproduction is triggered both by rainfall and by seasonality. The frogs start spawning in late April to early May and continue till June/July. A second spawning period starts in August/September. In the Comoé National Park, tadpoles are found shortly after the first heavy rains, i.e., in March/April, and until September. In drier years, at the beginning of the rainy season, P. microps only spawns when the amount of rainfall exceeds 18 mm. In moist years, however, they spawn after nearly every rain (Rödel 1993, unpub. data).
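Read as a decision rule, the spawning pattern just described can be summarized in a few lines of code. The following Python sketch is illustrative only and not from the original study: the 18 mm threshold is the figure reported above, while the treatment of moist years (spawning after nearly any measurable rain) and the dry-year flag are simplifying assumptions.

```python
# Sketch of the rainfall rule for P. microps spawning at the start of the
# rainy season. Only the 18 mm dry-year threshold comes from the text;
# the moist-year cutoff of "any measurable rain" is an assumption.
def spawning_expected(rainfall_mm: float, is_dry_year: bool) -> bool:
    """Return True if spawning would be expected after a given rain event."""
    if is_dry_year:
        return rainfall_mm > 18.0   # reported threshold in dry years
    return rainfall_mm > 0.0        # moist years: nearly every rain

print(spawning_expected(15.0, is_dry_year=True))   # False
print(spawning_expected(15.0, is_dry_year=False))  # True
```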
The restriction in dry years seems to be an adaptation against the risk that a pond might dry out before metamorphosis is completed. With more than 18 mm of rain, most ponds are likely to last long enough for metamorphosis to be completed.

A pair observed spawning deposited several egg batches. Egg laying occurred while the female was submerged, the male partly above the water surface. The vent of the female was directed towards vegetation. The eggs were wound around the plant. The female raised her vent above the surface, slightly bent her back and laid several egg batches. The first small egg mass was deposited near the edge of the pond. The couple then swam to a site approx. 10 meters away, where another 100 eggs were laid. During the following 25 sec they swam 2 meters and laid 80 eggs. Another 15 sec and a meter away, they laid 110 eggs. Subsequently, i.e., 12 sec later and 2.5 m apart, 180 eggs were deposited. One minute later, the couple went ashore, with the male still sitting on the back of the female. Thus it seems spawn may be deposited in small batches. Batching is most likely a strategy to minimize the risk of predation and/or desiccation. However, I do not know whether the spawn is distributed over a single night or over several consecutive nights, within a single pond or in several ponds. Two dissected females had heavily meandered oviducts. The number of eggs forming the tiny egg batches (compare "spawn") corresponded to the number found in one sling of a single oviduct; small clumps correspond to the contents of several slings; a medium-sized egg mass contained approximately as many eggs as a complete oviduct; and large ones comprise the contents of two oviducts.

In Comoé National Park, I found P. microps larvae almost throughout the rainy season. Contrary to my observations, Lamotte (1967) only recorded tadpoles in June and July at Lamto, where the rainy season lasts from March to November (Barbault & Trefaut Rodriguez 1978). In Lamto the tadpoles are restricted to large, deep savanna ponds harboring little or no vegetation. Amiet (1973a) wrote that populations in northern Cameroon are explosive breeders, reproducing within a rather short period. However, this impression possibly results from a misinterpretation of the evidence. First of all, the author observed the frogs only over a relatively short period; and secondly, P. microps calls at most for 1–3 days after heavy rainfalls. A lot of other frog species continue calling, at least sporadically. It is probable that he simply missed further spawning activities which took place later in the rainy season (see above). Savanna ponds of highly variable size, i.e., one to several hundreds of square meters, are acceptable spawning sites. These may be situated in the savanna as well as in gallery forests; the latter, however, are only rarely accepted. At the start of the rainy season, spawning occurs only in larger ponds. As the rainy season advances, and desiccation becomes less and less likely, smaller pools are accepted as well.

The larvae filter directly below the water surface with their heads pointing upward. They usually prefer to swim in deeper parts of the pond where there is less vegetation. They often form large swarms. The larger the swarm, the more likely they are to swim in open water. Swarm formation, however, only occurs when predators are present. The latter are detected both by visual and olfactory cues (Rödel & Linsenmair 1997).
The tadpoles react by swarming when they detect body fluids released from injuries inflicted by predators on conspecific larvae, as well as on larvae of other species, e.g., Kassina (Rödel & Linsenmair 1997). Sixty percent of the swarms comprise fewer than 50 specimens, but the largest consisted of more than 4000 tadpoles. The tadpoles' swimming direction lacks coordination, except when the wind is blowing, when they turn towards the wind. Vibrations of the pond bottom or ripples in the water never cause the tadpoles to flee, but they immediately react to shadows by seeking refuge in deeper water zones.

Unlike P. annectens, which is encountered exclusively on the ground or in subterranean cavities (Channing 1976, Loveridge 1976), P. microps was frequently found on palms, together with different hyperoliid species. Vuattoux (1968) reports this species in the buds of savanna palms (in 4.9%); more rarely, they live in dead palms. However, this frog is capable of burying itself in loose soil. As it needs at least humid soil, it usually selects sites with grass or the buds of palms. Whereas I observed climbing P. bifasciatus in Kenya (Rödel 1990), P. microps from Comoé National Park were found exclusively on the ground, in cavities or under rotten wood. Mahsberg (pers. comm.) observed a frog calling from a hollow tree, about 1.5 m above the ground. It is uncertain whether P. microps at Lamto and at the Comoé National Park really select different refuges during the dry season. The Rônier palms, the main refuge in Lamto, are very rare in the Comoé National Park. Some refuges are possibly used over a longer period. For example, I found two P. microps under the same tree trunk in 1992 and 1993; however, it was not clear whether they were the same individuals.

According to Vuattoux (1968), Barbault (1974a, b) and Lamotte (1983), P. microps feeds exclusively on ants. The first author found dozens to hundreds of ants (of several genera) in the stomachs of 12 dissected animals. Vuattoux (1968) determined mainly Crematogaster. In contrast, the stomach of a frog which I dissected contained 126 termites of the genus Trinervitermes. Ants were absent, although the frog had been found in the midst of an ant colony. Captive P. microps are known to accept other arthropods, e.g., crickets and millipedes (Hermann 1989, Mahsberg, pers. comm.).

P. microps is not only protected by its coloration, which Schiøtz (1964) and Lamotte (1983) interpret as a camouflage pattern, but this frog also shows a specific defensive behavior (Hermann 1989, Rödel & Braun 1999). The hind legs are bent forward, and the raised vent is presented to the predator. The head is directed towards the ground and bordered by the forelegs. The black groin bands may simulate eyes. P. bifasciatus, several Pleurodema species and Physalaemus nattereri are known to display a similar behavior (Duellman & Trueb 1986).

P. microps is frequently found in association with different scorpions, particularly with Pandinus imperator (Mahsberg, pers. comm., pers. obs.) but also with Hottentotta hottentotta and other Buthidae. This might possibly be a defensive strategy. However, our observations on P. microps' association with highly aggressive ants point in another direction. P. microps seems to spend the dry season in subterranean refuges, but cannot burrow into the hard soil. As places that offer enough humidity are normally already occupied by ants or other arthropods, the frogs must reach an "agreement" with these dangerous neighbors.
We showed that the tolerant behavior of the ants towards the frogs is due to chemical components of the frogs' skin (Rödel and Braun 1999). The protective function is not necessarily a result of the toxicity of these components. Ants, like other social insects, will kill an intruder even if some of the insects have to pay with their lives. P. microps has obviously developed some kind of "defensive coat" that includes a stinging inhibitor, enabling these frogs to survive in the midst of the ants. Kassina fusca, and possibly K. senegalensis, have also developed comparable strategies. Other frogs, which normally do not associate with these ants, are killed within a few seconds (Rödel and Braun 1999). P. microps is known to have a very toxic skin (Jaeger 1971, Hedges 1983).

This account was taken from Rödel, M.-O. (2000), Herpetofauna of West Africa, Vol. I: Amphibians of the West African Savanna, with kind permission from Edition Chimaira publishers, Frankfurt am Main.

Rödel, M.-O. (2000). Herpetofauna of West Africa, Vol. I. Amphibians of the West African Savanna. Edition Chimaira, Frankfurt, Germany.

Originally submitted by: Marc-Oliver Rödel (first posted 2001-05-10)
Edited by: Kellie Whittaker (2011-12-21)

Species Account Citation: AmphibiaWeb 2011 Phrynomantis microps <https://amphibiaweb.org/species/2085> University of California, Berkeley, CA, USA. Accessed Aug 15, 2022.

Citation: AmphibiaWeb. 2022. <https://amphibiaweb.org> University of California, Berkeley, CA, USA. Accessed 15 Aug 2022.
<urn:uuid:d1e187c6-0dcd-4724-a722-ee1a3c10b503>
CC-MAIN-2022-33
https://amphibiaweb.org/cgi/amphib_query?where-genus=Phrynomantis&where-species=microps
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00696.warc.gz
en
0.929405
4,628
3.1875
3
(Frances Theresa Densmore)
Born May 21, 1867 in Red Wing, Minnesota
Died June 5, 1957 in Red Wing, Minnesota
US-American anthropologist and ethnomusicologist; helped preserve the music and culture of many American Indian tribes
155th birthday on May 21, 2022
65th death anniversary on June 5, 2022

I heard an Indian drum…. I have heard it in strange places, in the dawn and at midnight, with its mysterious throb.

Frances Densmore devoted her entire life to recording and preserving American Indians' music and customs. Inspired from childhood by the sound of Indians singing and drumming, by the end of her life she had accumulated thousands of recordings and transcriptions of songs, and over twenty monographs and reports for public and professional journals. And she did it, in large part, on her own. By all accounts a very determined woman with a strong work ethic, she stated late in her life, "I have no special philosophy, but nothing downs me."

Densmore was a woman of contradictions. At a time when most accepted roles for women were housekeeping and child rearing, she nevertheless achieved success in a traditionally male profession. During her long career – from the late 19th to the mid 20th century – she made her mark as an independent anthropologist when, by contrast, most others who did so were supported by institutions such as universities and museums. As a woman aspiring to be a professional anthropologist, she succeeded at a time when most of the sciences were less than welcoming to women. Although anthropology was more open to women than some other sciences, women who entered the field were likely to do so as helpmates or research assistants for males. Densmore's work was informed by the belief, widely held during most of her lifetime, in the "vanishing Indian" – the view that Indian peoples and their cultures would soon disappear as they assimilated to the dominant Euro-American culture; yet many elements of those cultures are in fact still alive and well in Indian communities. Some today question her work, considering it an appropriation of material that they think should have remained with Indians, while others greatly value the vast scope and detail of her achievement.

Frances was born in 1867 to a middle-class family in Red Wing, Minnesota, a small town about 50 miles southeast of Minneapolis. Her father was a civil engineer; her mother was active in humanitarian causes. Frances had a younger sister, Margaret. From their house overlooking the Mississippi River, they could see campfires and hear drums from an encampment of Dakota Indians on Prairie Island. Frances later recalled that even when very young, she often fell asleep to the sound of an Indian drum – a sound that would inspire her in her professional work all her life. One of the Dakota people there, Maka Waste' Win (Good Earth Woman, also known as Susan Windgrow), later provided information to Densmore about cultural aspects of her tribe.

Frances grew up in a musical family. She obtained a thorough musical education; from 1884 to 1887 she attended the Oberlin Conservatory of Music in Ohio, where she studied piano, organ, and harmony. There, for the first time, she encountered people from other cultures. After she graduated, she returned to Minnesota, where she gave piano lessons and served as a church organist. She later furthered her studies with well-known musicians in Boston and at Harvard University.
Densmore's strong musical background and her fascination with Indian music came together when she began to read widely about the latter. In particular, she studied works by the American anthropologist Alice Cunningham Fletcher. Nineteen years Densmore's senior, Fletcher had visited many tribes and written extensively on their culture and music, focusing in particular on the Omaha people in Nebraska and Iowa. To record their music, Fletcher used a phonograph – a new device invented by Thomas A. Edison in 1877, with which sound waves could be recorded as impressions on rotating wax (originally tin foil) cylinders, and then played back. Fletcher and her co-worker, Ponca ethnologist Francis La Flesche, published A Study of Omaha Music in 1893.

That same year, Densmore attended the Chicago World's Columbian Exposition, where she was further inspired. She later recalled,

"I heard Indians sing, saw them dance and heard them yell, and was scared almost to death. However, I read what Miss Alice Cunningham Fletcher was writing at the time about Omaha music, and became acquainted with John Comfort Fillmore who transcribed her phonograph records. For the next ten years I soaked my receptive mind in what army officers wrote about Indians, and what historians wrote about Indians, with some of the publications of the Bureau of American Ethnology, with which I was later to be connected. All this was preparation for my life work." (She Heard an Indian Drum, Hofmann, p. 21)

Eventually Densmore and Fletcher met, and they began to correspond. For ten years Densmore gave public lectures about Indian music, in Minnesota as well as Chicago and New York, drawing on Fletcher's work, with her permission. At the time, lecturing was one of the few acceptable ways for women to be active in public.

Recording Indians' Music

In 1905, Densmore made her first field trip to visit an Indian tribe and study their music. Accompanied by her sister Margaret, she traveled to two Chippewa (Ojibwe) villages in Minnesota on the north shore of Lake Superior, Grand Marais and Grand Portage. In Grand Marais, they hired an Indian guide named Caribou, who took them to meet the Grand Medicine man, Shingibis. The Densmores were able to attend a religious ceremony; Frances was thrilled, although she did not take notes – it would, she later said, have seemed "a sacrilege." (Archabal, p. 99)

In the summer of 1906, Densmore visited the White Earth Chippewa community in western Minnesota. One of the Indians agreed to sing and record songs for her. She hastily borrowed a phonograph from a local music store and recorded many of his songs. After visiting two other Chippewa communities in the state, she returned to Red Wing and wrote to the head of the Smithsonian Institution's Bureau of American Ethnology (BAE) in Washington DC, requesting funds for future work. In reply, he sent her a grant for $150, which she used to buy her own Edison machine to use on subsequent trips. She replaced it the following year with an (evidently superior) Columbia gramophone, which she used on her field trips until 1940.

Densmore followed distinct steps when she wanted to record a tribe's music. She began by approaching the "top" man, in a formal, businesslike but friendly way, to ask permission to proceed. Since she did not know any of the tribes' languages, she carefully identified an interpreter known to be competent in the tribe's language and in English.
Then, after setting up the equipment, singers, and interpreter in an appropriate place – usually a building located some distance, but not too far, from the center of the tribe's activities – she recorded the song or songs. Next she transcribed them, using (usually) Western musical notation. Finally, she analyzed the song's musical characteristics and wrote descriptions of them and related cultural material before submitting her study, in most cases to the BAE for publication as BAE bulletins.

On the basis of her recordings at the Chippewa communities, Densmore published two books – Chippewa Music I and Chippewa Music II, in 1910 and 1913. Over the next few years she ventured farther west and made recordings at several Sioux locations in the Dakotas, resulting in her lengthy book Teton Sioux Music, published in 1918. These early books are the most substantial of her many works on Indian music; Teton Sioux Music has been described as "monumental." They include a great number of the songs she recorded, providing lists of them by name and type (such as love songs and songs for games and dances), transcriptions, and translations of words, as well as detailed descriptions and analyses of the music in tables and charts, and in some cases graphs, "so the eye can get an impression that the ear does not receive when listening to the song." (Hofmann, p. 96) In these and later works, she described such aspects of songs as structure, tonality, melodic progression, rhythm, the rhythm of the drum, and other characteristics, often comparing the music of the many tribes she visited with each other and with Western music. She frequently cited individual singers by name and provided detailed information about them. Her books also included many photographs of singers and their surroundings.

In addition, Densmore – a prolific writer – wrote at length in these and later works about the meanings of Indians' songs in their cultural context. Besides their music, she also wanted to learn as much as she could about other aspects of their culture – their dwellings, clothing, food, uses of plants, birth customs, burial customs, and more. She collected such material during her trips to Chippewa and other villages, and eventually published it in two books: How Indians Use Wild Plants for Food, Medicine and Crafts (1928) and Chippewa Customs (1929). She also collected Indian crafts, artifacts and musical instruments, many of which are now housed in the National Museum of the American Indian in Washington DC and in the Minnesota Historical Society in St. Paul.

Densmore was sensitive, too, to the spiritual significance of many Indian songs. Of particular interest to her were dream songs. In many Indian cultures, she learned, dreams are understood in relation to the supernatural. In such cultures, dreams are received from the spirit realm when the mind is in a receptive state – for instance, after a period of fasting or sleeplessness. As Densmore put it, "The Indian waited and listened for the mysterious power of nature to come to him in song." (The Belief of the Indian in a Connection Between Song and the Supernatural, Hofmann, p. 78) A song becomes the special property of the dreamer. Densmore also appreciated that songs "had a purpose. Songs in the old days were considered to come from a supernatural source and [the] singing was connected with the exercise of supernatural power." (The Study of American Indian Music, Hofmann, p. 108) Medicine or healing songs would be used to aid in treating someone who was ill.
Densmore realized that such songs were viewed as sacred, and usually not appropriate for recording, although in the early years of her recording she did record some songs of the Chippewa Midéwiwin (Medicine) Society.

Over the course of her long career, Densmore visited and recorded music of thirty-five Indian tribes, often with Margaret as her companion and helper. They traveled by car, train, and occasionally by boat. Especially on trips to distant tribes, traveling became arduous as they had to carry heavy equipment – phonograph, cameras, tripods, notebooks, and supplies – so they sometimes had to hire help. With a few exceptions, all her work was done under the auspices of the Bureau of American Ethnology. Researcher Nina Marchetti Archabal has summarized the trips that resulted in significant studies:

Densmore's efforts yielded major published studies on the music of the Chippewa, Teton Sioux, Northern Ute, Mandan and Hidatsa, Tule Indians of Panama [during their visit to Washington DC], Papago, Pawnee, Menominee, Yuman and Yaqui, Cheyenne and Arapaho, Santo Domingo Pueblo, Nootka and Quileute, Choctaw, the Indians of British Columbia, Seminole and Acoma, Isleta, Cochiti, and Zuni Pueblos. (Archabal, p. 113)

Having to Make Do

During her recording visits, Densmore often found conditions less than conducive to recording music. For instance, when she first began recording among the Chippewa, she found that many of their songs were intended to be accompanied by an instrument, such as a drum or rattle. But these tended to overpower singing voices as they were recorded. Always resourceful, she came up with a solution:

"Therefore it was necessary to find, by experiment, some form of accompaniment that would satisfy the Indian singer and also record the rhythm of the drum or rattle. Pounding on a pan was too noisy, but…songs were often recorded in an Indian schoolroom during vacation, and an empty chalk box was found an excellent substitute for a drum. I put a crumpled paper that touched the sides of the box but did not fill it. The box was closed and struck sharply with the end of a short stick, producing a sound that was heard clearly on the record…[making] possible the transcribing of the rhythm of the Native accompaniment." (from "Songs of the Chippewa," notes accompanying the album released in 1950)

Recording Sioux Music

In 1911 Densmore began to extend her travels westward, to record singers at several Sioux sites in North and South Dakota. Accompanied by Margaret, she first visited Fort Yates, North Dakota, the location of the Standing Rock Sioux tribe's headquarters. Her experience recording music there was very positive; she returned for the next three summers to continue her work. She made such a good impression on Chief Red Fox that he wanted to adopt her into the tribe, in part because she resembled his own deceased daughter. Densmore described the ceremony marking her adoption:

"In 1911 when I was at Fort Yates, North Dakota, studying the Sun Dance ceremony, a very prominent Chief, Red Fox, announced to an assembly of chiefs and leaders that he intended to adopt me as his daughter! This was not so unusual, because everybody was aware that Red Fox had the right to adopt someone in place of his daughter that had died some years before. The assembly approved his intention, although you can imagine what a surprise it was for me! The deceased daughter's name had been Ptesan 'non' pawin (which means Two White Buffalo Woman), and this was the name I received from Red Fox.
He explained to me that I need never hesitate to use it, wherever I might be. He had a right to give it to his daughter because he had twice been selected to kill a white buffalo when his tribe was hunting. This was an honor when so chosen because an albino animal was only occasionally seen in a herd. A thousand Indians gathered at Grand River, South Dakota, on the Fourth of July, 1912, when I was present, and my adoption was ratified by Red Fox's band. Songs were sung in my honor. Old praise songs and some new songs contained my name." ("A memorandum from Frances Densmore," 1941, Hofmann, p. 33)

Recording Under Austere Conditions

Unlike her mentor Alice Fletcher, Densmore did not enjoy "roughing it" during field trips to remote Indian communities. She usually stayed in the homes of White government officials. But at least one experience was a bit more challenging:

"I remember with queer affection an office at Fort Yates, that had been part of the kitchen of the old fort. Subsequently it had been used as a coal shed, and it had neither door nor windows when I took over. The agent let a prisoner from the guardhouse help me fix it up and he suggested boring holes in the floor to let the water run through, when the floor was cleaned. He made steps, rehung the door, and nailed window sash over the openings, and I pasted paper over the broken plaster and used packing boxes as tables. For many weeks I used that office and the Indians felt at home there, which is important. I stayed until the weather was bitter cold and the snow was piled high around the door. A little stove kept the place warm and I nailed a blanket over the door after entering, in order to keep out the bitter wind that blew down the Missouri River. One trial was that the mice did not move with the soldiers and their descendants had populated the building. They frisked around the floor and hid behind the paper on the wall. Once I found one under my typewriter when I came back at noon." (Study of American Indian Music: Work in the Field, 1941, Hofmann, p. 105)

Recording Ute Music

Following her work with the Sioux tribes, in 1914 Densmore traveled farther west to the mountainous area of the Northern Ute tribes. There she encountered Indians who, in contrast to the Sioux, were not very welcoming. The Utes had recently gone through a particularly painful time, enduring extensive land loss and pressures to assimilate to the dominant culture. Chief Red Cap initially refused to let Densmore record any singers at all, but he changed his mind when she played her "trump card" – she told him that she was the Sioux Chief Red Fox's adopted daughter. Then, after Chief Red Cap had recorded some songs, he also recorded his own speech criticizing the government agent's treatment of the Utes. It is uncertain whether any officials at the BAE actually listened to his complaint.

Densmore made a return trip to record more music of the Utes in 1916, but this time she developed breathing problems due to the high altitude. Questioned about her work, she replied, "I had to remember to breathe. If I did not keep my mind on it, I often stopped breathing for a long time." (Jensen and Patterson, p. 107) Her doctor in Red Wing diagnosed heart problems and advised her not to visit any more sites in mountainous terrain.

Recording in the Northwest: the Makah

Over the nearly two decades that followed, Densmore visited many other tribes. In 1923 she traveled to the Pacific northwest – to the Makah at Neah Bay on the northwestern tip of the Olympic Peninsula.
She and Margaret traveled by train to Seattle, then by boat to the Makah community, which was then accessible only by sea; and after recording there, to British Columbia. Three years later she returned to make more recordings of songs among the Makah as well as communities in British Columbia and on the southwest coast of Vancouver Island.

As Densmore had found in other tribes, the Makah had songs for just about every aspect of their lives. Biographer Joan M. Jensen describes this trait in their culture:

As soon as a woman indicated she was expecting a child, her family gave her a feast, and old women came to sing songs to the child while still in the womb. After the child's birth, the old women returned again to sing the child into its new home. Fathers gave their children feasts to mark their birthdays at which old women sang children's songs. Fathers sang to their children and "danced" them. And at about five years of age, children had naming feasts at which they again heard songs…. Young men formed groups to sing around the village in the evenings. At marriage feasts, the young women were sung into their new homes. They took family songs with them into their new homes and passed them on to their children. "As adults both men and women received songs in dreams …. They dreamed healing songs. Women sang songs to call other women to collect berries or shellfish…. Men sang war songs, songs for contests of strength, and whaling songs. Those who stayed home had songs for protection and songs to calm the waters. The wealthier sponsored potlatches, feasts where hosts welcomed guests with song and gave them gifts. Guests sang their family songs as well…. When a person passed on, the family head sang the songs of the deceased. At the end of a mourning potlatch, kin sang their songs…. As one song went: 'Let your song last as long as your wealth.'" (Jensen and Patterson, pp. 133-134)

Recording in the Southeast: the Seminoles

In 1931 and 1932 Densmore made several trips to Florida, where she recorded songs of the Seminole people. Because of their history of persecution and threatened removal by the US government, and their strong resistance to both, many had moved deep into the Everglades and were difficult to reach. However, some Seminoles had set up "exhibition villages" – recreations of their actual villages, some along the famed Tamiami Trail – where they performed dances and songs for tourists and sold garments they made out of their colorful cloth, known as Seminole patchwork. These locations were more out in the open and easier to visit.

During a subsequent trip to Florida, Densmore recorded a dozen songs by Josie Billie, who became her principal singer. But after he had recorded over sixty songs, he suddenly stopped and refused to sing any more, explaining that the tribe's medicine men had objected. Billie himself refused to sing his own medicine song (intended to facilitate healing), fearing that doing so might cause the medicine to lose its power. As described above, Densmore often met with reluctance on the part of Indians to record songs that had, for them, possible adverse consequences for healing or were deemed too sacred to share.

Losing BAE Funding, 1933

In 1933, well into the Depression, Densmore lost her financial support from the BAE. This made it difficult to continue her field trips. Margaret, who had been a school teacher, had quit her job in 1912 to become Frances's full-time assistant. They continued to live in their childhood home in Red Wing.
Frances found various jobs to keep them going, such as lecturing and writing for pay. For two years she was unable to do any fieldwork. But in 1935 she received a grant to work as a consultant to the Southwest Museum in Los Angeles, whose director had formerly been at the BAE and had supported her there. Finally in 1936 she obtained funding from the Works Progress Administration and became supervisor of Indian handcrafts in Minneapolis. Further funding from a private donor enabled her to travel to the Southwest Museum, visit some tribes in the area and record their music.

Creating an Archive

During the 1940s Densmore spent much time organizing the fruits of her life's labor. Besides the wax cylinders holding her recordings, her archive would include her letters, notes, bulletins, articles, and so on (but nothing pertaining to her personal life, which she explicitly designated to be destroyed). In 1940 her recordings were moved from the Smithsonian to the National Archives. Between 1941 and 1943, as a consultant at the Archives, she worked on organizing the Smithsonian-Densmore Collection of sound recordings of American Indian music. In 1948 the recordings were further transferred to the Library of Congress, where they now form part of the Archive of American Folk Song. The Library copied many of her recordings from cylinders onto sixteen-inch disks.

As part of the effort to archive her recordings, Densmore began to work on making some of them available to the general public. She spent several years selecting songs from numerous tribes for a series of albums that would be issued on long-playing vinyl disks by the Library of Congress. Although her original plan was to create ten albums, she was able to complete only seven. The first one, "Songs of the Chippewa," was issued in 1950. The others are "Songs of the Sioux," "Songs of the Yuma, Cocopa, and Yaqui," "Songs of the Pawnee and Northern Ute," "Songs of the Papago," "Songs of the Nootka and Quileute," and "Songs of the Menominee, Mandan, and Hidatsa." Some can be heard today on YouTube.

In 1947, Densmore suffered a severe blow – her sister Margaret died of heart failure. She and Frances had been very close. Margaret was very helpful to Frances over the years – especially after she left her teaching job – taking care of their house in Red Wing, cooking, typing, driving. But, as Frances declared, "Nothing downs me." She sold the family home and moved into a nearby rooming house, and continued on her own to record and analyze Indian music.

Honoring Frances Densmore

Densmore received numerous honors for her work. Oberlin College awarded her an honorary master's degree in 1924. In 1941 she was given an award from the National Association of Composers and Conductors. Macalester College in St. Paul, Minnesota, awarded her an honorary doctorate in 1950. And the Minnesota Historical Society presented her with its first "citation for distinguished service in the field of Minnesota History" in 1954. She was also made a member of several professional societies.

Her Final Years, and After

In 1954, when she was 87, Densmore made one more trip to visit the Seminole Indians in Florida, where she visited reservations in the Everglades and conducted seminars at the University of Florida. Back in Red Wing, she celebrated her 90th birthday in May 1957. Two weeks later, she died of pneumonia and heart failure.
In the decades since her death, some critics – with the growth of Native sovereignty and increasing recognition of Native rights, and with the passage of the Native American Graves Protection and Repatriation Act (NAGPRA) in 1990 – have questioned Densmore's practice of "salvage anthropology," the imperative for non-Natives to capture elements of Indian culture before they would presumably disappear, and to preserve them in non-Native facilities. Some have alleged that she pressured Indians to share sacred songs and teachings that tribal members thought should be kept secret. For instance, a play by White Earth writer Marcie R. Rendon, "SongCatcher: A Native Perspective of Frances Densmore," published in 2003, makes such charges.

But others, including Indian educators and scholars, have valued Densmore's work highly, consulting her recordings and books to learn about forgotten aspects of their cultures that were hidden for generations because of fears of government retribution. And the Federal Cylinder Project, started in 1979, includes the repatriation of Densmore's recordings to their tribes of origin, where they can be edited and re-recorded by tribal members and used in educating tribes about their history. For instance, Chippewa teacher Larry Aitken has used her work in his teaching at the Leech Lake Tribal College. ("Speaking About Frances Densmore")

Drawn from her early years to Indian peoples, Densmore wanted to learn more about them and ultimately to support them. Studying and preserving their music was a way she could do so. She hoped to help prevent their cultures from being lost, as she – in keeping with the times – feared they might be. Perhaps her insistence on maintaining a strictly professional, although friendly, distance when interacting with Indians was a way of keeping her genuine sympathy for them from distracting her from her work.

During her many travels and studies, Frances Densmore always remained focused on her goal. In an address in 1941 (in Hofmann, The Study of Indian Music, 1941, p. 114), she stated it clearly:

Throughout this study the objective has been to record the structure of the Indian songs under observation, with my interpretation. Other students, scanning the material, may reach other conclusions. My work has been to preserve the past, record observations in the present, and open the way for the work of others in the future.

Author: Dorian Brooks

Literature & Sources

Larry Aitken, "Speaking About Frances Densmore," https://www.youtube.com/watch?v=upHEeJpfQwQ

August H. Andresen, Congressional Tribute, "Dr. Frances Densmore," address given on April 25, 1952 to the Minnesota House of Representatives (Hofmann, pp. 115-119)

Nina Marchetti Archabal, "Frances Densmore: Pioneer in the Study of American Indian Music," Chapter 6 in Barbara Stuhler and Gretchen Kreuter, eds., Women of Minnesota: Selected Biographical Essays (St. Paul, MN: Minnesota Historical Society Press, 1998)

Frances Densmore, The American Indians and Their Music (New York: The Womans Press, 1936). A 1926 version is online at https://babel.hathitrust.org/cgi/pt?id=mdp.39015007933891&view=1up&seq=7

_____________, How Indians Use Wild Plants for Food, Medicine & Crafts (New York City: Dover Publications, Inc., 1974, reprint of "Uses of Plants by the Chippewa Indians," pp.
275-397 of the Forty-fourth Annual Report of the Bureau of American Ethnology to the Secretary of the Smithsonian Institution, 1926-1927, 1928) _____________, Chippewa Customs, with an Introduction by Nina Marchetti Archabal (St. Paul, Minnesota: Minnesota Historical Society Press, 1979, reprint of 1929 edition) Ute D. Gacs et al, eds., “Frances Theresa Densmore,” in Women Anthropologists: Selected Biographies (Champaign, Illinois: University of Illinois Press, 1989) Charles Hofmann, comp., ed., Frances Densmore and American Indian Music (Museum of the American Indian Heye Foundation, Contributors, vol. 23 – New York, 1968). Hofmann was Densmore’s only student. This volume includes a chronology of Densmore’s life and numerous articles by and about her, selected by Hofmann (some of which are referred to in the present article), as well as numerous notes on annotated reports of the Bureau of American Ethnology from 1907-1946. Online at https://archive.org/details/francesdensmorea00hofm/page/n11/mode/2up Joan M. Jensen and Michelle Wick Patterson, eds., Travels with Frances Densmore: Her Life, Work, and Legacy in Native American Studies (Lincoln, Nebraska and London: University of Nebraska Press, 2015) Joan T. Mark, A Stranger in Her Native Land: Alice Fletcher and the American Indians (Lincoln, Nebraska: University of Nebraska Press, 1989) Marcie R. Rendon, “SongCatcher: A Native Interpretation of the Story of Frances Densmore,” in Jaye T. Darby and Stephanie Fitzgerald, eds., Keepers of the Morning Star: An Anthology of Native Women’s Theater (Los Angeles, CA: American Indian Studies Center, University of California, 2003) Margaret W. Rossiter, Women Scientists in America: Struggles and Strategies to 1940 (Baltimore and London: The Johns Hopkins University Press, 1982) Stephen Smith, producer, writer, narrator, “Song Catcher: Frances Densmore of Red Wing,” A radio biography/web documentary of Densmore. St. Paul, MN.: Minnesota Public Radio, 1994, 1997. Online at http://news.minnesota.publicradio.org/features/199702/01_smiths_densmore/docs/authorintro.shtml Rebecca S. West, “Points West Online: Recording the Spirit of a Culture,” January 25, 2016. Online at (originally published in 2000) Wikipedia, “Frances Densmore” If you hold the rights to one or more of the images on this page and object to its/their appearance here, please contact Fembio.
History is the record of human events, the sum total of the entire human experience. But it is much more than a catalogue of the achievements and failures of people and nations. The study of history demands an objective analysis of the historical record. Historical records, however, are not always objective, and in many instances they can be one-sided. When doing research on the labor movement, immigrant life, and reforms in New Haven during the late nineteenth century, I encountered difficulty in uncovering evidence of the existence of the poor, slums, tenements, and problems of the laboring class. Most books written about New Haven by the people of the time saw the city as the best of all possible worlds. It was a place of middle and upper class living, of balls, parties, walks in the city's beautiful parks, of shopping in the fine downtown stores. But New Haven was a typical city of the period; there was another side to life, a side which many people chose to ignore or mention only haphazardly in passing. This unit is an attempt to show the other side of late nineteenth century New Haven.

The purpose of this unit is to help the student understand the interrelationship of economic and social factors, and to help him or her understand how these factors shaped the lives of the American people. This will be done by an analysis of some of the attitudes, values and beliefs of the people. It will also show how these attitudes, values and beliefs were put into practice, either as the rejection of the poor or the establishment of reform movements, which were intended to remove existing inequalities.

Rapid and momentous were the changes in America after the Civil War. This change could not have taken place without forcing a profound readjustment in American society. The reconstruction of the American economy affected the whole nation. The North was transformed into one complex industrial area. As industries moved into high gear, the output of the factories increased and the newly expanded rail system spread their goods to every part of the United States. The telegraph and telephone established direct contact between businessmen throughout the nation. High pressure salesmen assured ongoing expansion by educating the public to the new "necessities" of life. With this type of stimulation, every branch of industry expanded rapidly. The new American industries created 12 million new jobs between 1865 and the early 1900s.

Along with this development in the North came a greater concentration of wealth and population. The biggest impact came in those areas where industry was already in existence. A prime example is the city of New Haven, which by the middle of the 19th century was one of the most important manufacturing centers in the United States.

New Haven was a typical city of the period (MAP I*). It was a "walking city," so called because most people walked to work, shops and recreation. The business district grew up near the homes of wealthy merchants. Near the shipyards and factories were the homes of the workers. The town poor lived near the outskirts of the city or in the alleyways and cellars of the central city. With industrial expansion, the need for more space grew rapidly. The larger cities met the need for space by improving the means of transportation, which allowed the population to spread out.

As industrial America changed over to large scale methods of production, the gulf between workers and owners widened. Wages, hours and working conditions were fixed by the company.
Earnings were greatly affected by an increase in the number of underpaid women and children in the factories and mills. In New Haven in the 1860s over 40% of the workers were women and children. To this figure was added the ever increasing number of immigrants willing to work for next to nothing.

Along with the rise of large scale industry came the increase in the power of labor organizations. The objectives of many of these organizations were basically the same: the establishment of the 8-hour day; fair wages; industrial arbitration; the abolition of child labor; weekly payments; and factory inspections.

Workmen in the large cities were glad to receive $2.00 a day. Their 10-hour day went from 7:00 a.m. to 6:00 p.m., with an hour at noon for lunch. In some businesses the hours were shorter, and in others a good deal longer. Women in industry were concentrated at the lowest level of work and pay, usually receiving about 1/2 to 1/3 the wages of men doing the same job. A woman's salary averaged about $4.00 per week.

*Maps, readings, charts and pictures mentioned in this unit refer to a teacher's packet, New Haven and the Nation, which is available from the Teachers Institute office.

Although unions fought for many of the same objectives, there were differences in their methods. Samuel Gompers, leader of the American Federation of Labor, opposed the philosophy of the Knights of Labor. The Knights hoped to organize all workers in the United States by grouping everyone in an industry into a single industrial union. By being united they could demand political action to bring about the changes they sought. Samuel Gompers was president of Local 144 of the Cigar Makers Union when he fought for the establishment of trade or craft unions (1877). He wanted unions to be comprised of men sharing a special skill. Gompers believed that unions should use the strike weapon, as well as boycotts, to make employers increase wages, shorten hours and develop safer working conditions. But Gompers did not believe that unions should have political goals or try to change the economic system.

Workers in post-war Connecticut, like the rest of the nation, became interested in national labor organizations. Between 1850 and 1877 about fifty local unions were active in the state, but they usually did not last long and were ineffective. Lack of unity among the workers in their aims and methods; the opposition and power of the employer; public opposition to unions; strikes; and loss of membership due to a series of depressions weakened organized labor in its early years.

The workers of New Haven participated in the development of organized labor and labor law reform. The five hundred members of the local coach makers union, under the leadership of Talbot H. Harrison, petitioned the legislature to establish an 8-hour day for their industry. In January 1877, a local newspaper noted that "the workingmen of this city are preparing to present to the legislature a request for the enactment of a law preventing children under fourteen years of age from obtaining employment in the State."

One organization served as a channel to combine the working class on a nationwide basis. This was the Knights of Labor, founded in 1869. The Knights, like many other such groups, existed in secret for many years, for fear of reprisals by employers. The Knights of Labor came to Connecticut in the winter of 1878. The national leader, Terence V. Powderly, established locals in Hartford, Middletown and New Haven.
Organized workers throughout labor history have resorted to the strike as a principal weapon. The expansion of membership in labor organizations drew in many socialists and radicals; nationwide, unions became embroiled in an increasing number of strikes, boycotts and other disturbances.

The Panic of 1873, like every other panic in United States history, was accompanied by problems in the labor field. The growth of unemployment, the drop in wages, and the hopelessness of the workers led to a period of violent revolts. Labor unrest in New England reflected this trend. One strike after another occurred; many were put down, some with great violence. A strike of the textile workers in New England attracted wide attention in 1875. The owners of the mills brought in French-Canadians to take the place of the striking workers. The situation in Connecticut can be illustrated by the strike at the Ponemah Cotton Mills in Taftville, the largest mill of its kind in the state.

Labor unrest and strikes occurred frequently during this period in New Haven, sometimes over local problems, as in the case of the bricklayers employed in the construction of an addition to the Candee Rubber Company in 1878. The workers launched a strike for an increase in wages. Occasionally unrest would be the result of sympathy for workmen elsewhere. For example, in 1877, in sympathy for railway workers whose wages had been reduced by the Baltimore and Ohio Railroad, city laborers held a mass demonstration resulting in a near riot on the Green. The city police force was called upon to quiet the demonstrators.

In the 1880s and 1890s there was an upsurge of strikes in Connecticut. The state listed twenty-five strikes in 1881 and one hundred and forty-four in 1886. In the spring of that year, the carriage industry in New Haven was idled for months by a strike as workers held firm to their demands for more pay and shorter working hours. Italian immigrants in New Haven were often used by factory owners as strike breakers. They were at the bottom of the economic ladder and had no bargaining position with employers.

With both labor and capital mobilized for defense and aggression, the stage was set for a trial of strength. In spite of setbacks, blacklists and legal prosecution, the labor movement had one important effect—it fostered a new spirit of unity among the workers. By 1881 trade unions in New Haven had developed considerable strength. In that year the Council of Trade and Labor Unions was formed, later changing its name to the Trade Council of New Haven. The following unions joined together to form the Council: Bakers and Confectioners; Brewery Workmen; Bricklayers and Plasterers; Building Laborers; Stereotypers; Typographers; Stonemasons; Carpenters and Joiners; Cigar Makers; Iron Moulders; Journeymen Plumbers; Gas-fitters; Steam-fitters; and Pressmen.

On March 9, 1887, the Connecticut Federation of Labor was organized in Hartford by workers from New Haven, Meriden, Danbury, Waterbury and Hartford. By 1900 the state membership reached 14,000.

One consequence of the depression that followed the periods of panic in the 1870s and 1890s was mass unemployment. There emerged throughout the country what was called the "tramp evil." In some eastern sections of the Berkshires in Massachusetts, the "tramps" formed organized bands. Young men wandering in search of employment associated with professional criminals and beggars, living in the woods, stealing, drinking and fighting.
From many towns and cities came reports of thefts, fires, rapes and murders committed by vagrants. In some New England towns after the Panic of '73, people on the outskirts were forced to abandon their homes out of fear of these "tramps." New Haven had its share of the problem: in 1874, 215 arrests were made related to the "tramp evil." The problem continued throughout the 1880s. This was due in great part to the custom of furnishing, at all police stations in the city, free lodging to all applicants. This custom encouraged "tramps" to come into the New Haven area. In the late 1880s the city fathers changed the law, bringing an end to this problem.

In answer to the heavy demands of industry on the labor market and the opportunities offered by American life, the number of immigrants rose rapidly after the Civil War. For the first time, American manufacturers sent agents to Europe to stimulate immigration. The steamship companies offered low steerage rates, enabling agents of American firms to prepay the passage of laborers who would agree to work for the company for wages that seemed fantastically high to Europeans, but which were very low by American standards. The demand for labor can be considered one of the main factors responsible for the great influx of immigrants into Connecticut. With the exception of Rhode Island and Massachusetts, Connecticut, in 1900, had a larger proportion of foreigners in its population than any state in the union.

The new immigrants fell into established patterns of employment. Most had to put off their dreams of owning a farm. Their first thoughts had to be of earning a livelihood. They found jobs in construction camps, mines, and in the factories of mill towns and great Eastern manufacturing centers. American cities in the nineteenth century developed simultaneously as centers of industry and as areas with high concentrations of immigrants. It is here that immigrants developed communities that attempted to reinforce and sustain the culture of the homeland.

The majority of the European immigrants during the nineteenth century were not skilled or experienced in industrial work. Those workers who came with some skills tended to abandon their old world trades, largely because they found little demand for their skills. Conditions were the same for nonskilled immigrant families—the husband could not earn enough to support his family; the wife and children also had to work to make ends meet. An abundance of labor was readily available at the lowest possible wages. Yet none of the immigrant entry ports offered any kind of public assistance to either immigrants seeking jobs or employers looking for workers. Unless the immigrant had sufficient money, a specific destination, or a relative or friend to lean on, he faced considerable problems.

The new immigrants did not fit quickly into the American way of life. Because of their difficulties with the language and ignorance of the social system they were often taken advantage of. A New Haven resident in July 1873 requested that an investigation be made into the living conditions of the Italian boys on Oak Street. A local newspaper of July 18, 1873 reported:

"These boys wander from street to street earning money from the charitable for their lazy taskmasters by playing upon dilapidated musical instruments, by blacking boots, or by any pursuit that will bring a daily specified sum of money to the idle and vicious men who have bought them for a term of years."

Loneliness, bitterness and confusion frequently overtook the new immigrants.
This was especially true of those who came from rural areas of the old country to highly urbanized cities in America. Both the urban working class and the newly arrived immigrants suffered from a lack of adequate housing. Poverty compelled them to live where rents were cheap. They were forced to live in overcrowded, run down tenements and wooden cottages without sufficient air, light or sanitation facilities. These living conditions fostered the spread of disease, fire and crime. In many cities groups of reformers worked to improve the housing situation. The reformers' main concern was bad housing because they believed that poor living conditions breed poor citizens.

The rise in the Italian population of Connecticut was phenomenal. From only 61 in 1860, their number increased to 19,105 at the close of the century. The United States Census does not report any Italians in Connecticut at all until 1880, because many hid their identity for fear of persecution. The steady employment given to a number of Italians by the Candee Rubber Company and the Sargent factory in New Haven encouraged the workers to send for their families. In a few years a complete Italian community was formed in New Haven. This type of community was not only characteristic of Italians, but of all immigrant groups.

In New Haven the Italian community was located in the Wooster Square area. Originally it was a wealthy Irish neighborhood. In the late 1860s industry moved into the area and the Irish moved out to the edge of the city. They often commuted to work in the downtown area by the newly installed streetcars. As the wealthy moved out, they left the area to the factories and tenements. The once beautiful mansions were turned into apartment houses, overcrowded and neglected. It was into this area that the Italian immigrant moved in search of work. Workers preferred to walk to their place of employment to save the expense and time required for commuting.

The immigrants chose places where they could associate with people like themselves in language, habits and religion. The newest arrivals knew the addresses before they stepped off the boat. They worshipped in churches and synagogues like those in the homeland, and they formed fraternal and benevolent societies to assist their own. The Sons of Italy, B'rith Abraham, the Order of Vasa (Swedish), the Sons of Norway, the Knights of Columbus (Irish), the Wreath of the Eagle (Slovak), and the Sons of Herman (German) were some of the mutual aid societies with lodges in New Haven.

Since colonial times, people from other lands had been welcome in America. But by the 1880s a less than cordial attitude was developing. Anti-foreignism or "nativism" once again flared up. Because of the tension the new wave of immigrants created, the legislature in Connecticut passed an amendment to the state constitution regarding voting privileges. This 1898 amendment provided that "every person shall be able to read in the English language any part of the Constitution or any section of the statutes of the state before being admitted as an elector." New Haven voted its approval. The Democratic platform of 1892 echoed the feelings of many New Haven residents: "We heartily approve all legitimate efforts to prevent the United States from being used as the dumping ground for the known criminals and professional paupers of Europe."

The increase in immigration caused great problems in the field of education.
In Connecticut, the school system was encountering problems with immigrant children because of their inability to read and speak English. Most of these children were in the larger cities and towns of the state. In many schools a large percent of the children were foreign born or of foreign born parents. To meet the needs of these children New Haven established the ungraded or unclassified school. The purpose of this type of school was to accommodate the children who could not speak English and who therefore had difficulty in the regular classroom. New Haven had three such ungraded schools in 1881: the Whiting School, the Hamilton School and the Skinner School, with a total of 150 pupils. Truant officers finding a child wandering on the streets, dirty and in rags, would bring the child to the ungraded school. Two other schoolrooms were also established, one in the Fair Street School and the other in the Dixwell Avenue School, for those who were habitually truant or incorrigible in the regular classroom.

By 1896 the New Haven School System had taken a hostile attitude toward the immigrant. In the Annual Report of the Board of Education they referred to New Haven's "increasing and promiscuous population—a population containing a large foreign element," mainly Italian and Russian. Innumerable problems had been created for the school system, the Board reported, "by the influx into the city of people who were ignorant of our institutions, laws and language, people who had not been accustomed to send their children to school in the country from whence they came and who seemed to care but little for their education, especially in the English language." The Board took a strong stand on Anglo conformity and had little apparent sensitivity to the immigrant's cultural background.

The New Haven School System did attempt to meet the needs of the younger immigrants. Kindergartens were opened for what they called "the large class of neglected children now growing up in this city, whose parents are obliged to leave them alone in squalid homes or let them run in the streets."

The immigrants had certain attitudes of their own about formal American educational institutions. They perceived correctly that the schools were hostile to the family. Many did not see any value in the education provided by the American high school. In school the immigrant children were advised to train for manual, working class occupations. Many educators judged that they lacked the mental ability necessary for other professions.

The cities of the nineteenth century were very crowded. The poor lived in crowded tenements, wooden shacks or cellars. The streets were filthy; there were few sewers or indoor toilets; and the citizens often drank water that was polluted. Pressure groups were formed to force city leaders into doing something to improve conditions. Streets were paved and sidewalks were installed. Improved street lights and larger police forces helped to reduce crime. Many cities, under pressure from the federal government, adopted codes to regulate the sale of food and drugs. Women were prominent in the drive to improve the quality of life. Women's clubs were in the forefront of campaigns for clean streets and air, and were active in working with children and working women. By the end of the nineteenth century, most cities had housing codes requiring landlords to install fire escapes, plumbing and adequate lighting in their buildings. The slums had become a firmly-rooted American institution.
So few were the sanitary precautions throughout the slums that death ran almost unchecked. Typhoid, smallpox, scarlet fever and cholera were the leading causes of death.

New Haven was the first city in the state to possess a Department of Health. Organized as an independent branch of the city government, it came into existence in 1872. The most important progress in public health was made by local groups such as this, not by the State Board of Health. As a community, New Haven was especially liable to contagious diseases. New Haven was a center for travel and thus in daily communication with areas where, for example, smallpox was prevalent. In 1882 reports from Baltimore indicated that there were several thousand citizens down with smallpox. The disease was also prevalent in New York City and Paterson, New Jersey. The only known protection from it was general vaccination, yet there was widespread indifference to this easy, safe method of protection. New Haven's remarkable exemption from smallpox in 1882 can, in part, be attributed to the care taken by the Board of Health in promptly isolating every case that occurred and in vaccinating all who had been exposed.

For many years before the Civil War, most states and local governments maintained almshouses, orphanages, homes for inebriates, insane asylums and institutions for the deaf and dumb. Most cities possessed, in addition, a number of private benevolent societies. With the development of industrial society, the need for such agencies grew constantly greater. Official provisions for the poor in urban and rural areas were crude, unsystematic, and poorly administered. When a family fell into hopeless pauperism, its members were packed off to the poorhouse. We have all too clear a picture of the degradation and suffering which these institutions represented.

During the last half of the nineteenth century, the institutions already established made rapid progress, especially in the area of specialized care for children. Connecticut followed the national trend in the reform movement. One of the most important acts established the State Board of Charities (1873). The Board was authorized to inspect almshouses, homes for neglected or dependent children, asylums, hospitals, and all institutions for the care and support of criminals. The Board not only supervised all public and private relief institutions, but suggested means of improvement.

The homeless and penniless who lived in or came to New Haven could be assured of shelter and food, either through public relief or private charities. Private charities in New Haven were divided into three categories: (1) institutions, (2) societies providing for home care of the poor and (3) churches.

The United Workers Society, established in 1877, carried out its work in five areas: (1) almshouse visits; (2) an employment bureau, which gave work to poor women; (3) the boys club (operating during the winter months in the basement of the old State House), which offered a place of recreation and reading to many newsboys and bootblacks who would otherwise roam the streets; (4) a committee for the relief of the sick poor; and (5) a sewing school for girls (open from September until July). The Mothers Aid Society was established in 1883 to supply work for poor women. The society established the Leila Day Nursery to take care of their children during the day. The New Haven Dispensary was opened in 1872 as a private society for the care of the sick poor.
It was located on York Street, next to the Yale Medical School. Here the needy could be treated free of charge. The Organized Charities Association on Church Street was an organization where, for a reasonable amount of work in the woodyard or laundry, the poor and needy could secure lodging and meals. The Grand Army of the Republic cared for old soldiers, and the Ladies Seamen's Friend Society was organized to help sailors.

Churches showed a great interest in urban social problems. The Young Men's and Young Women's Christian Associations provided social and physical activities for city youth. The Salvation Army sought to help the poor of the city. The Army brought religion to the tenements and distributed food and clothing to the needy.

The orphans were provided for by three church-run asylums. The New Haven Orphan Asylum was organized in 1883 under Protestant management. The asylum took up the entire city block from Elm Street to Edgewood Avenue and from Platt Street to Beers Street. This is the block where the Troup School is located today. The St. Francis Orphan Asylum, organized in 1865, was under the management of the Catholic Church. Located along Highland Street between Prospect Street and Whitney Avenue, its name has been changed to Highland Heights. The third asylum was operated by the Jews. It was located in York Square and called the Hebrew Ladies Orphan Society.

In 1882 there were 369 children receiving aid: "300 whose fathers were common drunkards; 19 whose mothers were profligate; 50 who had shiftless and idle fathers and mothers," according to a report of the Mothers Aid Society for that year.

Many churches had organizations to care for the sick and poor of the parish; the First Church and Trinity Church had homes for aged and indigent women; the St. Vincent de Paul Society had branches in all Catholic Churches to help the sick and poor; three Jewish benevolent societies took charge of their sick and poor.

In the late 1800s public relief in New Haven was given in four different ways: (1) direct money aid; (2) support of insane asylums; (3) support of the almshouse; and (4) "pauper labor," given to persons who could not obtain full wages for their services. The almshouse was built in 1861 on the town farm. It was located far out in the northwestern part of the city on Marlin Street, now the corner of Edgewood Avenue and Brownell Street. By 1897, the City Department of Charities and Corrections had been created, where "the honest poor may find a home, the unfortunate a temporary asylum, the truant a place of wholesome correction and the illiterate a school for usefulness."

The cities became places of contrast. Many rich Americans lived in cities, but so did the poor. In some areas one could find the mansions of the rich at a short distance from the slums of the urban poor. These were the parts of the city where few traveled unless they lived there or had business dealings in the area. Many Americans lived their entire lives without ever seeing a slum. To them this other side of life was remote and unreal. The social activities of the wealthy were chronicled in the newspapers and read by all. The slum dwellers had no "social" life that was newsworthy, except in the form of the annual Police Reports.
Anxiety is an adaptive response that promotes harm avoidance, but at the same time excessive anxiety constitutes the most common psychiatric complaint. Moreover, current treatments for anxiety—both psychological and pharmacological—hover at around 50% recovery rates. Improving treatment outcomes is nevertheless difficult, in part because contemporary interventions were developed without an understanding of the underlying neurobiological mechanisms that they modulate. Recent advances in experimental models of anxiety in humans, such as threat of unpredictable shock, have, however, enabled us to start translating the wealth of mechanistic animal work on defensive behaviour into humans. In this article, we discuss the distinction between fear and anxiety, before reviewing translational research on the neural circuitry of anxiety in animal models and how it relates to human neuroimaging studies across both healthy and clinical populations. We highlight the roles of subcortical regions (and their subunits) such as the bed nucleus of the stria terminalis, the amygdala, and the hippocampus, as well as their connectivity to cortical regions such as dorsal medial and lateral prefrontal/cingulate cortex and insula in maintaining anxiety responding. We discuss how this circuitry might be modulated by current treatments before finally highlighting areas for future research that might ultimately improve treatment outcomes for this common and debilitating transdiagnostic symptom.

Fear and anxiety are adaptive, defensive reactions to threat across species. However, excessive fear or anxiety can interfere with quality of life. Indeed, anxiety disorders are the most prevalent psychiatric disorders, and excessive anxiety is implicated in most psychiatric disorders, as well as a number of other medical and neurological conditions. Anxiety is thus accompanied by a high financial cost.1 Compounding this, response rates to first-line pharmacological and psychological treatments are less than 50%2: most patients fail to respond to the first treatment that targets their anxiety. Other review papers focus on the neural circuitry of fear3–5; this review focuses on our understanding of the underlying neurobiology of anxiety, as a construct not only more closely related to anxiety disorders such as generalised anxiety disorder (GAD) but also highly relevant transdiagnostically to other psychiatric and neurological disorders. In this review, which is intended as a broad narrative introduction for those new to the field, we argue that understanding this neurobiology, as well as the features that differentiate adaptive from pathological anxiety, is key to identifying pathological mechanisms and treatment targets.

Fear and anxiety

While fear and anxiety share many subjective and physiological symptoms, they can be differentiated based on behavioural profiles determined by certainty,6 7 which can be further subdivided into the contingency, temporal precision and spatial precision of the threat:

- In fear, the danger is imminent and unambiguous, and it mobilises the organism to take immediate action. Fear is above all a rapid behavioural response that leads to active avoidance (e.g. fight-or-flight) or other automatic responses, such as freezing in prey animals or piloerection (goosebumps). Pathological fear is seen in specific phobias, which are characterised by a marked fear of specific objects.
- In anxiety, the threat is more diffuse and uncertain. Anxiety is a lasting state of apprehension of potential future threats, accompanied by negative affect, autonomic symptoms, worry, increased vigilance and passive avoidance.8 Excessive anxiety symptoms can be found in GAD and panic disorder (PD) (though panic attacks themselves may be better characterised by models of acute fear, PD as a whole, including apprehension of subsequent attacks, is considered to be better modelled by anxiety).8

There is a body of research indicating that fear and anxiety are tractable constructs associated with distinct pathologies.6 7 9 Early research into fear and anxiety found a double dissociation between neural structures relating to threats that were phasic (fear) and sustained (anxiety),6 which led to a theoretical model in which anxiety and fear are putatively separate processes. Although the true neurobiological picture is likely to be more nuanced than this,10 11 this distinction is also reflected in the Research Domain Criteria (RDoC) matrix, which distinguishes between acute threat (fear) and potential threat (anxiety),12 13 as well as in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), where specific phobias are defined as 'cued by the presence or anticipation of specific objects or situations', while GAD, by contrast, is defined as 'Excessive anxiety and worry (apprehensive expectation) … about a number of events or activities'.14

The widest array of research to date involves Pavlovian cued fear conditioning in rodents.15 Unfortunately, while fear conditioning is a useful model of fear, it is insensitive to drugs that are anxiolytic in humans16–19 and is thus a poor model of anxiety disorders such as GAD.20 21 Comparatively little is known about anxiety, especially its many human-specific cognitive-affective features. Indeed, many animal models of anxiety, such as the elevated plus-maze, have few analogues in humans22 (although see Biederman et al 23 and Bach et al 24), and of course the impact of psychological therapies cannot be studied in animals. However, innovative approaches to study anxiety experimentally in humans have recently been developed. This article reviews this emerging literature and suggests a model of its neural underpinnings.

Phenomenology of pathological anxiety

Following Freud, who distinguished chronic anxiety from the anxiety (panic) attack, clinicians have long recognised that anxiety is not a unitary phenomenon.8 This non-unitary view of anxiety is reflected in the DSM-5, which identifies several anxiety disorders characterised by shared 'features of excessive fear and anxiety', including PD, GAD, social anxiety disorder and simple phobia. Other disorders within the DSM also have anxiety as a core symptom, such as obsessive-compulsive disorders and addiction disorders. Additionally, a number of neurological disorders feature elevated anxiety, including variants of dementia such as frontotemporal dementia, vascular dementia and Alzheimer's disease,25 along with Parkinson's disease26 and traumatic brain injury.27 28 However, despite symptom heterogeneity, we do not at present have clear objective markers that can differentiate between disorders which feature anxiety, and there is, moreover, strong symptom overlap.
All share common enduring behavioural, cognitive and physiological characteristics, potentially arising from impairments in transdiagnostic features, as seen in the RDoC12 domain of negative valence systems,13 which highlights exaggerated or problematic responses to 'potential threat (anxiety)'. In this article, based on the assumption that similar phenomenological presentations of 'anxiety' reflect true underlying neurobiological similarities, we discuss the shared neural circuitry which may underlie sustained anxiety symptoms.

Translational neuroscience of anxiety

Early work on fear conditioning in animal models highlighted the key roles of two amygdala nuclei, the basolateral amygdala (BLA) and the central nucleus of the amygdala (CeA), in anxiety. The BLA integrates sensory information from the environment and, via its projections, excites the CeA. The amygdala subsequently triggers defensive responses via efferent projections to regions such as the stria terminalis, the hippocampus, the ventral striatum, the orbitofrontal cortex, the periaqueductal gray and the hypothalamus.29

While the amygdala is important for fear conditioning, its direct role in maintaining sustained anxiety symptoms has been more difficult to establish. Lesions of the amygdala do not reduce defensive responses in models of anxiety such as the elevated plus-maze,30 and the anxiolytic effects of benzodiazepines do not appear to be mediated by the amygdala.31 This contrasts with another structure tightly coupled with the CeA, the bed nucleus of the stria terminalis (BNST), which does appear to be preferentially involved in maintaining sustained anxiety.6 7 32 The BNST is a part of the 'extended amygdala',33 which is well-situated to regulate defensive responses such as anxiety via its GABA (gamma-aminobutyric acid)-ergic projections to various limbic, hindbrain and cortical structures.6 7 32 Consideration of the conditions that determine the involvement of the BNST in defensive responses emphasises the role of the temporal unpredictability of the threat34 and the sustained duration of the response.7 35

Early evidence of a differentiation between fear and anxiety in the BNST came from studies using the startle reflex. 'Fear-potentiated startle' refers to the increased startle reflex amplitude in the presence of a short-duration threat, whereas 'anxiety-potentiated startle' refers to the increased startle amplitude during a long-duration unpredictable threat. A series of studies by Davis and collaborators established a double dissociation between the CeA and BNST: lesions of the CeA abolish fear-potentiated startle, but not anxiety-potentiated startle, while lesions of the BNST suppress anxiety-potentiated startle, but not fear-potentiated startle.6 7 According to Davis' group, anxiety is thus maintained by activation of corticotrophin-releasing factor receptors in the BNST.7

More recent studies have, however, provided evidence that there is more nuance to the role of the BNST in anxiety. This is perhaps unsurprising given that the BNST is small but heterogeneous, with up to 18 functionally distinct subregions.36 Some BNST lesions can up-regulate anxiety,37 while optogenetic stimulation of discrete BNST subregions can down-regulate anxiety.38 39 Different efferent connections of the BNST also control different features of anxiety,38 39 suggesting that distinct BNST subregions dynamically control different aspects of defensive behaviour38–44 (see table 1).
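To make the difference-score logic behind these startle measures concrete, consider the following minimal Python sketch. It is illustrative only: the function name and data are hypothetical, and published startle studies typically standardise raw eyeblink EMG magnitudes within subject (e.g. as T-scores) before computing potentiation.

import numpy as np

def potentiation_scores(startle, condition):
    # Fear- and anxiety-potentiated startle as within-subject differences
    # in mean startle magnitude: threat condition minus neutral baseline.
    # 'predictable' = short-duration cued threat (fear);
    # 'unpredictable' = long-duration/uncertain threat (anxiety).
    startle = np.asarray(startle, dtype=float)
    condition = np.asarray(condition)
    mean = {c: startle[condition == c].mean()
            for c in ('neutral', 'predictable', 'unpredictable')}
    return {'fear_potentiated': mean['predictable'] - mean['neutral'],
            'anxiety_potentiated': mean['unpredictable'] - mean['neutral']}

# Toy data for one simulated subject (20 trials per condition).
rng = np.random.default_rng(0)
trials = rng.normal([50, 65, 80], 5, size=(20, 3))  # neutral, pred., unpred.
labels = np.tile(['neutral', 'predictable', 'unpredictable'], 20)
print(potentiation_scores(trials.ravel(), labels))

On this toy data, anxiety-potentiated startle exceeds fear-potentiated startle; it is exactly this kind of dissociation that the CeA and BNST lesion work above pulls apart.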
Of note, the story is almost certainly the same with regards to the role of the amygdala in fear, which is gradually being broken down into component microcircuits.45 46

Moving into the cortex, the medial prefrontal cortex (mPFC) is implicated in attention and affective information processing47 and hence is also involved in responses to threat. The rodent mPFC is composed of several subregions including the infralimbic (IL) and prelimbic (PL) cortex, which are densely interconnected with the amygdala48 and play opposing roles in defensive behaviours,49 with the IL inhibiting the amygdala (and defensive behaviours) and the PL exciting the amygdala. Empirical evidence of a role for the PL in sustained anxiety has been established using a wide range of paradigms including the elevated plus-maze and the open field test,50 context conditioning51 and fear conditioning with long-duration conditioned stimuli (CS).52 Electrophysiological recordings in rodents show that PL neurons maintain persistent firing that correlates with freezing throughout the duration of sustained threat51 52 and that PL firing persists during trace fear conditioning,53 a paradigm in which the unconditioned stimulus (US) is delivered after an empty interval following the end of the CS. These results suggest PL involvement in both defensive behaviours and the neural representation of threat (ie, the US).

The hippocampus also conveys contextual information about environmental threat54 to the PL.55 Specifically, brain structures communicate via synchronised activity both in local networks56 and over longer distances.57 This can be observed in theta oscillations, which have been linked to anxiety in rodents and humans58 and display synchronised activity between the ventral hippocampus and PL in anxiogenic contexts.54 Additionally, research using operant conflict tasks has suggested that the hippocampus has a key role in decision-making in situations where there is a conflict between approaching rewards and avoiding punishments, a scenario which induces anxiety across species.59

Interoceptive information from visceral changes may also be conveyed, via the anterior insula, to the PL.60 Specifically, robust visceral changes (heart rate, gastrointestinal, blood pressure) caused by anxiety may generate a feedback loop between the PL and anterior insula that contributes to the maintenance of anxiety. Threat representation in the PL could then influence anxiety responses via directs1 or indirect input to the BNST through the BLA.60

Finally, prefrontal cortex (PFC) regions are also involved in anxiety via their role in working memory (WM). In rodents, WM deficits are associated with increased anxiety.s2 Primates have larger prefrontal cortices, with additional medial and lateral dissociations, than rodents. In monkeys, as in humans, WM relies in part on the dorsolateral PFC (dlPFC),s3 a structure that is also implicated in anxiety and stress.
Maternal separation stress in young monkeys activates the right dlPFC, but deactivates the left dlPFC.s4 In young monkeys, as in children, a heightened reaction to novelty and potential threat characterises an anxious temperament disposition, which is a well-established risk for the development of anxiety.s5 Critically, such an anxious disposition in monkeys is associated with dlPFC malfunction.s6

A possible neural model of anxiety that emerges from these studies in animals is that environmental signals from the ventral hippocampus and interoceptive signals from the anterior insula help maintain threat representation in the PL, which is then used to guide defensive behaviours via the BNST and the amygdala and may be under the control of the dlPFC (see figure 1).

Neuroscience of human anxiety and its pathology

In recent years, innovative translational experimental models that attempt to mimic and quantify sustained anxiety in humans have emerged. These include darkness,s7 low-concentration (7.5%) CO2 inhalation,s8 and vigilant threat monitorings9 (see table 2). However, one widely used anxiety model in humans relies on long-duration threat of aversive stimuli such as shock (i.e. threat of shock or context conditionings10), where subjects are informed that shocks may be delivered unpredictably. Other paradigms investigating fear (eg, cued fear conditionings11) may seem at first sight to be investigating similar constructs. However, these paradigms are generally far more precise about the temporal (and spatial) association between cue and shock (notably, further research is needed to ascertain the precise temporal/spatial conditions under which anxiety-related—rather than fear-related—circuitry is activated).

One key advantage of the startle probe methodology, discussed in the animal section above, is that it can also be employed in humans. Healthy human participants display heightened startle sensitivity during unpredictable threat relative to baseline, but, importantly, this startle sensitivity is further elevated in those with disorders that feature anxiety.7,s12 Specifically, multiple studies have now shown exaggerated anxiety-potentiated (but not fear-potentiated) startle during unpredictable threat in PDs13–s16 and in post-traumatic stress disorder (PTSD).s17–s19 GAD does not follow quite the same pattern—it is associated with increased startle overall during a threatening experimental environment. Exaggerated anxiety-potentiated startle may, therefore, constitute a risk factor or early biomarker for developing disorders including heightened anxiety.s18,s19

Findings from unpredictable threat of shock studies in humans have implicated many of the same behavioural and neural responses that underpin (adaptive) anxiety in animals. In turn, these responses have also been implicated in pathological anxiety in humans. For instance, early functional MRI (fMRI) studies demonstrated amygdala involvement in processing both predictables20 and unpredictable threat.9 Activity in the amygdala was also shown to be elevated in those with social phobia relative to controls,s21,s22 and it was argued that this elevated response was critical for the development of the excessive anxiety.
Subsequent work has, however, tempered these conclusions by demonstrating that the amygdala is sensitive to appetitive as well as aversive stimuli.s23 Moreover, consistent with animal data, many studies fail to report selective amygdala activation during sustained anxiety symptoms.s24,s25 It has therefore been argued that perhaps a key role of the amygdala is in goal-directed cognitive processing and/or behaviour toward relevant/salient information.s26 Given the clear asymmetries between appetitive and aversive stimulus values (i.e. the consequences are generally worse if one misses a threat than a reward), it may be that prior work associating the amygdala with anxiety reflects a more fundamental role of the amygdala that happens to be correlated with symptoms of anxiety (e.g. elevated harm avoidance in anxiety promoting the relevance/salience of threats) rather than any selective role in anxiety.

Animal models also highlight the role of the PFC in anxiety. In humans, elevated fMRI activity in dorsal regions of the PFC (dorsomedial PFC (dmPFC) and dorsal anterior cingulate (dACC)) has been associated with both unpredictable threat processing and pathological anxiety.s27 It has been argued, moreover, that it is specifically the rostral part of the dmPFC that drives conscious threat appraisal and worry.s28 In healthy humans, dACC/dmPFC activation has been associated with a reduced ability to extinguish fear responding,s29 and patients with PTSD show increased activation in the dACC.s30 As with the amygdala, it should again be noted that the dACC/dmPFC is responsible for a wide range of functions including goal-directed behaviour, vestibular function, social responding and interoception.s31 Critically, it is hyperactive in most psychiatric disorderss32 and, as with the amygdala, this may be due to a more generic role in anticipating emotional stimulis33 and hence a key role in appraising and expressing behavioural responses to the level of environmental threat.s34 In other words, hyperactivity in psychiatric disorders probably reflects this region's more fundamental role—such as directing cognitive processing and/or behaviour toward relevant/salient informations32—that is important for harm avoidance.

Perhaps unsurprisingly, given the links between both amygdala and dACC/dmPFC activity and anxiety-relevant symptoms, connectivity between these regions has also been implicated in the pathophysiology of anxiety. Specifically, functional imaging studies show that connectivity between the amygdala and the mPFC increases when healthy individuals are exposed to threat of unpredictable shock, and that the strength of this connectivity is greater in individuals with higher dispositional anxiety.s35–s37 This extends to the pathological state, such that in those with social anxiety or GAD, this same circuitry displays greater connectivity without overt anxiety induction.s38 As such, the same circuitry that drives heightened attention toward threats under adaptive anxiety is also critical in pathological anxiety. To this end, it has been argued that this circuitry represents a human (functional) homologue of the rodent PL-amygdala/BNST circuit highlighted above.s30,s34–s37 If so, the key role that such circuitry plays in behavioural responses to salient aversive information could underlie the harm-avoidant negative bias toward threats in pathological anxiety.
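The review does not tie these connectivity findings to a particular analysis method, but task-modulated coupling of this kind is commonly estimated with a psychophysiological interaction (PPI) style regression. Below is a toy sketch on simulated data (hypothetical variable names; a real fMRI PPI would additionally handle haemodynamic (de)convolution, nuisance regressors and group-level inference).

import numpy as np

def ppi_interaction_beta(target, seed, task):
    # Regress the target timecourse (e.g. dmPFC) on the seed timecourse
    # (e.g. amygdala), the task regressor (threat = 1, safe = 0) and their
    # interaction. A positive interaction beta indicates stronger
    # seed-target coupling under threat than under safety.
    seed = (seed - seed.mean()) / seed.std()
    task = task - task.mean()
    X = np.column_stack([np.ones_like(seed), seed, task, seed * task])
    betas, *_ = np.linalg.lstsq(X, target, rcond=None)
    return betas[3]

# Simulate coupling that is present only during the threat block.
rng = np.random.default_rng(1)
task = np.repeat([0.0, 1.0], 100)               # safe volumes, then threat
seed = rng.normal(size=200)
target = 0.8 * seed * task + rng.normal(scale=0.5, size=200)
print(ppi_interaction_beta(target, seed, task))  # recovers roughly 0.8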
As also highlighted in the animal work above, the hippocampus is involved in anxiety, due, perhaps, to its key role in contextual learning/memorys39 and prospections40 or, alternatively, a role in avoidance.24,s41,s42 Consistent with functional differentiation along the longitudinal axis of the rodent hippocampus,s43 theta activity (2–8 Hz) from the anterior hippocampus (ventral in rodents) correlates with anxiety level in humans,58 while theta from the posterior hippocampus correlates with spatial memory performance in a simulated human version of the rodent Morris water maze task.s44 Notably, this theta-based coupling between hippocampus and mPFC scales up with increased threat probability,s45 and patients with PTSD exhibit aberrant activity and connectivity involving the hippocampus in the resting state.s46 This theta coupling evidence comes from magnetoencephalography (MEG): the evidence is more mixed in fMRI, with both hypoactivation and hyperactivation of the hippocampus reported in patients.s47 Finally, threat of shock also modulates memory performance, improving contextual learning under threatening conditions,s48,s49 which may be one mechanism by which the hippocampus maintains traumatic memories in disorders which feature elevated anxiety, such as PTSD.s50

Avoidance, on the other hand, can be studied using approach-avoidance conflict tasks (a variant on the operant conflict tasks outlined above). Such tasks have been translated from non-human primate works51,s52 into humans.24,s41,s42,s53 This paradigm incorporates a range of tasks that set up a conflict between the inherent bias to avoid (learnt or prepotent) negative outcomes and an approach response. This is thought either to be anxiogenic in itself24,s41,s42 or, at the very least, to elicit avoidance responses,s54 which are a core feature of anxiety disorders. Critically, such tasks implicate the hippocampus,24,s41,s42 and this conflict has been shown to be exacerbated, leading to increased avoidance, in humans during induced anxietys55 and in pathological anxiety.s56

The insula also plays a complex role in anxiety in humans. Together with the ACC and mPFC, the insula is part of a network that detects, interprets and reacts to internal bodily signals.s57 Interoceptive signals are thought to be integrated in the insula following a posterior-mid-anterior pattern, with processing in the anterior insula producing conscious awareness of the information.s58 The anterior insula has been suggested to make a key contribution to the anticipation and emotional experience of aversive stimuli and, via the ACC, to the allocation of attention and initiation of appropriate action.s57 Experimental psychopathology studies show that the anterior insula is broadly involved in anticipation of aversive events, but more specifically during anticipation of unpredictable compared with predictable threats59 and sustained versus transient anticipation.9 Thus, the anterior insula is one of the structures that reliably maintains sustained activation during experimentally induced anxiety.9 It is also hyperactive in pathological anxiety, including PDs60 and GAD,s61 during sustained threat.
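As an illustration of the band-limited coupling measure that this MEG work relies on, here is a minimal Python sketch using SciPy's magnitude-squared coherence. The inputs are assumptions: it takes two already-extracted source timecourses and simulates a shared 6 Hz rhythm; real analyses would add source reconstruction and statistical testing.

import numpy as np
from scipy.signal import coherence

def theta_coherence(x, y, fs, band=(2.0, 8.0)):
    # Mean magnitude-squared coherence between two signals within the
    # 2-8 Hz theta band reported in the anxiety literature.
    freqs, coh = coherence(x, y, fs=fs, nperseg=int(4 * fs))  # 4 s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return coh[mask].mean()

# Toy example: hippocampal and mPFC timecourses sharing a noisy 6 Hz rhythm.
fs = 250.0                                   # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)                 # 60 s of data
rng = np.random.default_rng(2)
shared = np.sin(2 * np.pi * 6 * t)
hpc = shared + rng.normal(scale=1.0, size=t.size)
mpfc = shared + rng.normal(scale=1.0, size=t.size)
print(theta_coherence(hpc, mpfc, fs))        # high theta-band coherence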
In this instance, pathophysiology is not associated with chronic activation of the anterior insula but rather with a heightened response during anxiety induction, possibly reflecting a feeling of lack of controls62 as well as autonomic and emotional distress during threat.s23

While animal studies have implicated the BNST in anxiety for some time now, it is only recently that progress in the spatial resolution of MRI has enabled comprehensive exploration of this small structure in humans.s63,s64 Older studies found activity in regions overlapping the BNST during induced anxiety—that is, unpredictable threats9,s9,s24—as distinct from amygdala activity during predictable threats (ie, fear).9,s65 In addition, BNST activation during unpredictable shock correlates positively with the magnitude of autonomic arousal.s62 Consistent with findings from startle studies, heightened sensitivity to unpredictable threat in PTSD and PD is associated with elevated sustained BNST activation, making this structure a promising biomarker for disease and for treatment targeting.s60,s66 These studies also identified several structures that are coactivated with the BNST, including the dmPFC, ventrolateral PFC, dlPFC and anterior insula, attesting to the complexity of a putative 'anxiety network'. These results suggest an important role for the BNST in mediating the hyperarousal and hypervigilance symptoms of pathological anxiety. Additionally, high-resolution imaging using resting-state fMRI has revealed connectivity of the BNST in humans to many of the other regions discussed above.s64,s67 Critically, anxiety induced by threat of shock reveals reduced BNST connectivity with the ventromedial prefrontal cortex and ACC.s63 These are early findings, and research exploring clinical anxiety with high-resolution imaging is still in its infancy, but given the highlighted role of the BNST in the animal literature, clinical research targeting the BNST, its microcircuits and its functional connectivity offers great promise for the development of novel therapeutic interventions to treat pathological anxiety.32

Finally, as in non-human primates, the dlPFC is involved in anxiety regulation and dispositional anxiety in humans,s68 potentially because of its role in emotion regulation and attention control. In healthy subjects, the dlPFC is activated during anxiety induction procedures and the strength of this activation is negatively correlated with anxiety, suggesting (speculatively) that weak dlPFC activation is associated with less downregulation of anxiety.s69 Moreover, the dlPFC plays a role in explicit emotion regulation (for a review across all types of regulation strategies, see s70). When subjects perform a cognitive task that engages the dlPFC (e.g. a WM task—a 'distraction' form of emotional regulation), dlPFC activation concomitantly reduces anxiety induced by unpredictable shock, via top-down control exerted on the dACC and ventrolateral PFC.s71 The dlPFC is, therefore, a key structure for healthy functioning: when it is activated to keep task goals in mind, its engagement also suppresses emotional interference and alleviates anxiety. It is now well established that the dlPFC is hypoactivated in those with psychiatric disorders featuring anxiety during cognitive tasks, emotion regulation studies and anxiety induction procedures.s32,s72

To summarise, research shows overlapping neural circuitry in response to anxiety between humans and animals.
Specifically, connectivity between the hippocampus, BNST, amygdala and medial prefrontal/cingulate cortex may contribute to a putative 'anxiety network' that may also be (down)regulated by the dlPFC. Notably, this network of regions bears some similarity to the 'salience network', which incorporates regions such as the dACC and insula [s73]. Nevertheless, many of the regions (especially the PFC) lack clear translational homologues across species and, perhaps more importantly, human work highlights that many of these patterns of activity are not unique to anxiety and may instead reflect more fundamental cognitive processes, such as salience processing (in keeping with the membership of many of these nodes in the salience network), which happen to constitute a key facet of anxiety.

Role of treatment on anxiety circuitry

Characterising the behavioural effects of current treatments for pathological anxiety, as well as their underlying neural mechanisms, is crucial for the development of new treatments and the improvement of existing ones. Cornwell et al (2017) recently reported MEG evidence that hypervigilant responding under threat of unpredictable shock [s74] is reduced by the benzodiazepine alprazolam [s75], and that this reduction is driven by increased feedback signalling from ventrolateral prefrontal to sensory cortices. Although these data do not directly implicate any of the structures reviewed above, they point to possible alternative avenues for examining the efficacy of novel anxiolytic treatments. When given chronically to healthy individuals, the selective serotonin reuptake inhibitor (SSRI) citalopram selectively reduces anxiety-potentiated startle (but not fear-potentiated startle) to unpredictable threat [s76]. Interestingly, acute citalopram increases anxiety-potentiated startle [s77], an effect consistent with clinical observations of transiently increased anxiety during initial SSRI treatment [s78]. Both the anxiolytic effect of chronic SSRI administration and the anxiogenic effect of acute SSRI administration have been replicated in rodents [s79], and the current view is that these effects involve the BNST [7] and result from interactions between serotonin and corticotropin-releasing factor (CRF) [7, s79]. Acute tryptophan depletion (ATD) is a dietary manipulation that temporarily reduces serotonin levels. ATD is associated with increased engagement of the dmPFC-amygdala circuit that is hyperengaged by both threat of shock [s35] and pathological anxiety [s38] in humans. SSRIs may therefore work, at least in part, by elevating synaptic serotonin availability and hence reducing engagement of this dorsal prefrontal-amygdala circuit [s80]. Indeed, it has been argued that the magnitude of SSRI response is predicted by greater pretreatment reactivity to threats in the pregenual ACC and lesser reactivity in the amygdala [s81]. In anxious patients, SSRIs have also been shown to attenuate connectivity between the BNST and limbic and paralimbic structures [s82]. One hypothesis, therefore, is that SSRIs work by attenuating putative 'anxiety network' hyperactivity [s83], but this remains to be tested. Another broad class of treatment for anxiety disorders is psychological intervention.
The goal of the most common psychological intervention, cognitive-behavioural therapy (CBT), is to attenuate negative mood states through cognitive reappraisal and emotion regulation strategies [s84]. Medial prefrontal and amygdala activity have been argued to predict treatment response to CBT in anxiety [s85], perhaps through modulation of circuitry between the mPFC and amygdala. Indeed, a recent meta-analysis suggested that the most robust correlates of response to therapy were significant post-therapy decreases in anterior cingulate/paracingulate gyrus, inferior frontal gyrus and insula activity [s86]. One problem with cross-sectional observational studies, however, is that it is unclear whether a change in neural circuitry is specifically due to the intervention or instead reflects a generic change in symptoms (ie, symptom change rather than mechanistic change). One way to address this is to explicitly modulate the basic processes targeted by CBT in the absence of symptom change. To this end, in healthy individuals, simple attentional instruction can alter the engagement of the dmPFC-amygdala circuitry induced by threat of shock [s37]. Specifically, asking individuals to reappraise emotional stimuli as neutral dampens activity in the circuitry thought to drive heightened responses to threats in anxiety. Ongoing work is exploring whether this is a mechanism by which CBT for anxiety works. Exposure is another important psychological intervention, with efficacy similar to that of CBT [s87]. Exposure primarily targets fear responding (eg, phobias), but can also be used to reduce anxiety. A more detailed review of findings from extinction paradigms in both humans and animals (extinction is thought to be the mechanism by which exposure therapy works) can be found in Milad and Quirk [s88]; such studies have implicated some of the same circuitry: following extinction, activity was reduced in the amygdala, insula and anterior cingulate, but increased in the dlPFC [s89]. In sum, preliminary work suggests that current treatments for anxiety may act through modulation of the translational circuitry outlined above. This work is in its infancy, but a mechanistic understanding of treatment response may eventually enable improvement of existing treatments, better matching of treatments to the patients who will respond, and targets for the design of novel treatments, thereby increasing recovery rates for anxiety disorders.

One key problem with the current categorical disease classifications for anxiety disorders is heterogeneity within diagnostic categories and overlapping symptoms across disorders. Anxiety seems an obvious candidate for an approach based on the Research Domain Criteria (RDoC), with the RDoC matrix highlighting several constructs (acute threat, potential threat (anxiety) and sustained threat) that map onto the distinction made in this article between anxiety and fear. This distinction may correspond to different types of vulnerability: one perhaps associated with predictable threat (ie, fear) and the other with unpredictable threat (ie, anxiety).
Of relevance, this distinction appears in line with data from factor analytic studies showing a separation between 'fear' and 'anxious-misery' disorders within the anxiety disorders [s90]. Large-scale studies will be necessary to obtain sufficient power to determine the extent to which anxiety circuitry is compromised (in similar and in different ways) across psychiatric and neurological disorders, which could improve classification and treatment planning and pave the way for precision medicine approaches. Anxious patients with heightened sensitivity to unpredictable threat (eg, those with PDs [13]), for example, may particularly benefit from treatments that downregulate the anxiety circuit (eg, SSRIs [s76]). This approach can be applied more broadly: those with elevated sensitivity to unpredictable threat, whatever their psychiatric or neurological diagnosis, may benefit from the same treatments.

The BNST has long been overlooked. It is a small but functionally complex structure comprising at least 18 subregions [36] involved, among other functions, in opposing anxiolytic and anxiogenic circuits. Basic research in rodents on the intrinsic circuits of the BNST, and on how these circuits are affected by stress hormones and neurotransmitters, will be crucial for increasing our knowledge of normal and pathological anxiety and for developing treatments [s91]. For example, the actions of the neuropeptide CRF on the BNST in relation to sustained anxiety provided a strong rationale for the therapeutic development of CRF1 antagonists to treat mood and anxiety disorders. However, these have failed in human models [s92] and in clinical trials [s93]. More studies of the local and more distal connectivity of the BNST will be necessary to understand why CRF antagonists are anxiolytic in animal models but not in humans. On this note, a question remains about how best to translate our understanding of such subregions into humans. Improved, higher-resolution fMRI techniques will undoubtedly help, but the fMRI signal is still several steps removed from the underlying neural activity. Work with specific positron emission tomography ligands, and even direct electrical recording from those with clinical implants, may be needed to ultimately corroborate translational patterns. Relatedly, direct translation between humans and animals is difficult. Animal models are essential for advancing a mechanistic understanding of behaviour, but their limitations must be acknowledged. Pathological anxiety is, above all, a disorder of feelings supported by conscious and unconscious experiences, whereas animal models rely on overt behaviour and physiological measures rather than cognitions or subjective experiences. Additionally, many processes (for example, the psychological symptoms targeted by CBT) cannot be directly studied in animal models, and many regions, especially cortical areas, are likely not direct functional homologues across animal models (if they exist at all in animals with smaller cortices). This is of particular concern because many animal models of treatment have failed to translate to humans. To overcome this, future work may seek to use models of anxiety in healthy humans alongside the exact same models in animals as putative anxiolytic drug screens to ensure successful translation. Furthermore, cross-fertilisation may be important where roadblocks in treatment development occur: theories and biological insights from one type of research may produce advances in the other. Beyond pharmacology, neurostimulation research is starting to bear exciting results.
Deep brain stimulation of the BNST reduces anxiety in a rodent model and in humans [s94]. Repetitive transcranial magnetic stimulation (rTMS) is a less invasive neurostimulation technique. Because low-frequency stimulation of cortical networks decreases cortical excitability [s92], targeting dmPFC- or dlPFC-amygdala coupling with low-frequency (1 Hz) rTMS could downregulate activity in this circuit and ultimately reduce negative affect. rTMS of the dlPFC is approved by the Food and Drug Administration as a second-line treatment for depression, and research is currently under way to examine its effectiveness in anxiety [s95]. Future research should also focus on the emergence of pathological anxiety, ideally taking a developmental perspective. There are a number of difficulties inherent in developmental work on anxiety, the most important being the ethical issues of exposing children and adolescents to aversive events such as shock [s96]. However, researchers have recently succeeded in collecting fear-potentiated startle data in adolescents using alternative aversive stimuli such as cold air blasts or aversive screams [s96, s97]. These are promising steps towards paradigms that can elucidate the developmental basis of anxiety.

Our understanding of anxiety circuitry has grown considerably thanks to experimental psychopathology models exploring the impact of unpredictable threats. Consistent with animal models, fear and anxiety in humans may be mediated by dissociable, although partly overlapping, neural mechanisms. Structures implicated in anxiety but not fear include the BNST, hippocampus, dmPFC, insula and dlPFC; fear-related and anxiety-related defensive responses are mediated by the CeA and BNST, respectively. From a clinical and treatment perspective, there is overlap between the circuitry implicated in disorders featuring anxiety and the mechanisms of action of reference anxiolytics. Pathophysiological mechanisms are found in the same neural structures that respond adaptively to anxiety induction procedures in healthy humans. Neural dysfunction can take two forms: chronic activation (inappropriate activation in the absence of an anxiety induction challenge, for example, heightened amygdala-dmPFC connectivity) or exaggerated activation in response to an unpredictable threat (eg, in the insula). Despite this progress, much remains to be learnt. With advances in the technologies of basic research (optogenetics, molecular biology, transgenic and knockout mice) and clinical research (high spatial resolution fMRI, better statistical tools), the focus should be on improving knowledge of local microcircuits and of the neural oscillations among distant regions that support behaviour. Clinically, research on fear and anxiety circuitry might be used to create an evidence-based nosology [s18]. Ultimately, improved, personalised and novel treatment strategies will be difficult to develop without a better understanding of the underlying mechanisms of anxiety; the work reviewed here constitutes a step forward, but a precise mechanistic understanding is still far off. Additional references can be found in the supplementary file.

Contributors: OJR, ACP, BC and CG are all responsible for drafting and editing this manuscript.

Funding: This work was supported by personal fellowships MR/K024280/1 and MR/R020817/1 from the Medical Research Council to OJR, and by the Intramural Research Program of the National Institute of Mental Health, project number ZIAMH002798 (clinical protocols 02-M-0321 (NCT00047853) and 01-M-0185 (NCT00026559)), to CG.
Competing interests: OJR has completed consultancy work for Ieso Digital Health and Brainbow and is running an Investigator Initiated Trial with Lundbeck. He holds an MRC Industrial Collaboration Award with Cambridge Cognition.

Patient consent for publication: Not required.

Provenance and peer review: Commissioned; internally peer reviewed.
Total area: 9,826,675 km2 (3,794,100 sq mi)
Coastline: 19,920 km (12,380 mi)
Land borders: Canada 8,864 km (5,508 mi); Mexico 3,327 km (2,067 mi)
Highest point: Denali, 6,190.5 m (20,310 ft)
Lowest point: Badwater Basin, −85 m (−279 ft)
Longest river: Missouri River, 3,767 km (2,341 mi)
Largest lake: Lake Superior, 58,000 km2 (22,394 sq mi)
Climate: diverse, ranging from temperate in the north to tropical in the far south; West: mostly semi-arid to desert; mountains: alpine; Northeast: humid continental; Southeast: humid subtropical; coast of California: Mediterranean; Pacific Northwest: cool temperate oceanic; Alaska: mostly subarctic; Hawaii, South Florida and the territories: tropical
Terrain: vast central plain; Interior Highlands and low mountains in the Midwest; mountains and valleys in the mid-south; coastal flatland near the Gulf and Atlantic coasts, complete with mangrove forests and temperate, subtropical and tropical laurel forest and jungle; canyons, basins, plateaus and mountains in the west; hills and low mountains in the east; intermittent hilly and mountainous regions in the Great Plains, with occasional badland topography; rugged mountains and broad river valleys in Alaska; rugged, volcanic topography in Hawaii and the territories
Natural resources: coal, copper, lead, molybdenum, phosphates, rare earth elements, uranium, bauxite, gold, iron, mercury, nickel, potash, silver, tungsten, zinc, petroleum, natural gas, timber, arable land
Natural hazards: tsunamis; volcanoes; earthquake activity around the Pacific Basin; hurricanes along the Atlantic and Gulf of Mexico coasts; tornadoes in the Midwest and Southeast; mudslides in California; forest fires in the west; flooding; permafrost in northern Alaska
Environmental issues: severe water shortages; air pollution resulting in acid rain in both the US and Canada
Exclusive economic zone: 11,351,000 km2 (4,383,000 sq mi)

The term "United States", when used in the geographical sense, refers to the contiguous United States, the state of Alaska, the island state of Hawaii, the five insular territories of Puerto Rico, the Northern Mariana Islands, the U.S. Virgin Islands, Guam and American Samoa, and minor outlying possessions. The United States shares land borders with Canada and Mexico, and maritime borders with Russia, Cuba, The Bahamas and other countries in addition to Canada and Mexico. The northern border with Canada is the world's longest binational land border.

From 1989 through 1996, the total area of the US was listed as 9,372,610 km2 (3,618,780 sq mi) (land and inland water only). The listed total area changed to 9,629,091 km2 (3,717,813 sq mi) in 1997 (Great Lakes area and coastal waters added), to 9,631,418 km2 (3,718,711 sq mi) in 2004, to 9,631,420 km2 (3,718,710 sq mi) in 2006, and to 9,826,630 km2 (3,794,080 sq mi) in 2007 (territorial waters added). Currently, the CIA World Factbook gives 9,826,675 km2 (3,794,100 sq mi), the United Nations Statistics Division gives 9,629,091 km2 (3,717,813 sq mi), and the Encyclopedia Britannica gives 9,522,055 km2 (3,676,486 sq mi) (Great Lakes area included but not coastal waters). These sources consider only the 50 states and the federal district, and exclude overseas territories. The US has the world's second largest exclusive economic zone, at 11,351,000 km2 (4,383,000 sq mi). By total area (water as well as land), the United States is either slightly larger or smaller than the People's Republic of China, making it the world's third or fourth largest country.
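Because that ranking depends entirely on which figure is adopted, a short worked comparison makes the point concrete. The sketch below converts each published US figure to square miles and compares it with a commonly cited figure for China's total area (9,596,960 km2, assumed here from the CIA World Factbook; treat it as an assumption rather than a settled value).

KM2_TO_SQMI = 0.386102  # 1 km^2 in square miles

# US total-area figures quoted above, by source (km^2).
us_area = {
    "CIA World Factbook": 9_826_675,
    "UN Statistics Division": 9_629_091,
    "Encyclopedia Britannica": 9_522_055,
}
china_area = 9_596_960  # km^2; assumed CIA Factbook figure for China

for source, km2 in us_area.items():
    larger = "larger" if km2 > china_area else "smaller"
    print(f"{source}: {km2:,} km2 "
          f"({km2 * KM2_TO_SQMI:,.0f} sq mi) -> {larger} than China")

Run as written, the CIA and UN figures make the US larger than China, while the Britannica figure makes it smaller, which is exactly the ambiguity described above.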
China and the United States are smaller than Russia and Canada in total area, but both are larger than Brazil. By land area only (exclusive of waters), the United States is the world's third largest country, after Russia and China, with Canada fourth. Whether the US or China is the third largest country by total area depends on two factors: (1) the validity of China's claim on Aksai Chin and the Trans-Karakoram Tract (both territories are also claimed by India, so they are not counted); and (2) how the US calculates its own surface area. Since the initial publication of the World Factbook, the CIA has updated the total area of the United States a number of times.

The United States shares land borders with Canada (to the north) and Mexico (to the south), a territorial water border with Russia in the northwest, and two territorial water borders in the southeast, between Florida and Cuba and between Florida and the Bahamas. The contiguous forty-eight states are otherwise bounded by the Pacific Ocean on the west, the Atlantic Ocean on the east, and the Gulf of Mexico to the southeast. Alaska borders the Pacific Ocean to the south and southwest, the Bering Strait to the west, and the Arctic Ocean to the north, while Hawaii lies far to the southwest of the mainland in the Pacific Ocean. Forty-eight of the states are in the single region between Canada and Mexico; this group is referred to, with varying precision and formality, as the contiguous United States or the Lower 48. Alaska, which is included in the term continental United States, is located at the northwestern end of North America. The capital city, Washington, District of Columbia, is a federal district located on land donated by the state of Maryland. (Virginia had also donated land, but it was returned in 1849.) The United States also has overseas territories (insular areas) with varying levels of autonomy and organization: in the Caribbean, the territories of Puerto Rico and the U.S. Virgin Islands (formerly the Danish Virgin Islands, purchased by the US in 1917, during the First World War), and in the Pacific, the inhabited territories of American Samoa, Guam and the Northern Mariana Islands, along with a number of uninhabited island territories. Some of these territories were acquired as part of United States imperial expansion or to gain access to trade with the east. Nearly all of the United States is in the northern hemisphere; the exceptions are American Samoa and Jarvis Island, which are in the southern hemisphere.

The eastern United States has a varied topography. A broad, flat coastal plain lines the Atlantic and Gulf shores from the Texas-Mexico border to New York City, and includes the Florida peninsula. This coastal plain and its barrier islands make up the widest and longest beaches in the United States, much of them composed of soft, white sands. The Florida Keys are a string of coral islands that reach the southernmost city on the United States mainland (Key West). Areas further inland feature rolling hills, mountains, and a diverse collection of temperate and subtropical moist and wet forests. Parts of interior Florida and South Carolina are also home to sand-hill communities. The Appalachian Mountains form a line of low mountains separating the eastern seaboard from the Great Lakes and the Mississippi Basin. New England features rocky seacoasts and rugged mountains, with peaks up to 6,200 feet and valleys dotted with rivers and streams.
Offshore islands dot the Atlantic and Gulf coasts. A recent global remote sensing analysis suggested that there are 6,622 km2 of tidal flats in the United States, making it the fourth-ranked country in terms of tidal flat area. The five Great Lakes are located in the north-central portion of the country; four of them form part of the border with Canada, and only Lake Michigan lies entirely within the United States. The southeast United States, generally stretching from the Ohio River south, includes a variety of warm temperate and subtropical moist and wet forests, as well as warm temperate and subtropical dry forests nearer the Great Plains in the west of the region. West of the Appalachians lies the lush Mississippi River basin and two large eastern tributaries, the Ohio River and the Tennessee River. The Ohio and Tennessee Valleys and the Midwest consist largely of rolling hills, interior highlands and small mountains, marsh and swampland near the Ohio River, and productive farmland stretching south to the Gulf Coast. The Midwest also has a vast number of cave systems.

The Great Plains lie west of the Mississippi River and east of the Rocky Mountains, and a large portion of the country's agricultural products are grown there. Before their general conversion to farmland, the Great Plains were noted for their extensive grasslands, from tallgrass prairie in the eastern plains to shortgrass steppe in the western High Plains. Elevation rises gradually from less than a few hundred feet near the Mississippi River to more than a mile high in the High Plains. The generally low relief of the plains is broken in several places, most notably in the Ozark and Ouachita Mountains, which form the U.S. Interior Highlands, the only major mountainous region between the Rocky Mountains and the Appalachians.

The Great Plains come to an abrupt end at the Rocky Mountains, which form a large portion of the western U.S., entering from Canada and stretching nearly to Mexico. The Rocky Mountain region is the highest region of the United States by average elevation. The Rockies generally contain fairly mild slopes and wider peaks compared with some of the other great mountain ranges, with a few exceptions (such as the Teton Range in Wyoming and the Sawatch Range in Colorado). The highest peaks of the Rockies are found in Colorado, the tallest being Mount Elbert at 14,440 ft (4,400 m). In addition, instead of being one generally continuous and solid mountain range, the Rockies are broken up into a number of smaller, intermittent ranges, forming a large series of basins and valleys.

West of the Rocky Mountains lie the Intermontane Plateaus (also known as the Intermountain West), a large, arid region between the Rockies and the Cascade and Sierra Nevada ranges. The large southern portion, known as the Great Basin, consists of salt flats, drainage basins, and many small north-south mountain ranges. The Southwest is predominantly a low-lying desert region. A portion known as the Colorado Plateau, centered on the Four Corners region, is considered to have some of the most spectacular scenery in the world. It is accentuated in such national parks as Grand Canyon, Arches, Mesa Verde and Bryce Canyon, among others. Other smaller intermontane areas include the Columbia Plateau, covering eastern Washington, western Idaho and northeast Oregon, and the Snake River Plain in southern Idaho.
The Intermontane Plateaus come to an end at the Cascade Range and the Sierra Nevada. The Cascades consist of largely intermittent volcanic mountains, many rising prominently from the surrounding landscape. The Sierra Nevada, further south, is a high, rugged and dense mountain range. It contains the highest point in the contiguous 48 states, Mount Whitney (14,505 ft or 4,421 m), located at the boundary between California's Inyo and Tulare counties, just 84.6 mi (136.2 km) west-northwest of the lowest point in North America, Badwater Basin in Death Valley National Park, at 279 ft (85 m) below sea level. These areas contain spectacular scenery as well, as evidenced by such national parks as Yosemite and Mount Rainier. West of the Cascades and Sierra Nevada is a series of valleys, such as the Central Valley in California and the Willamette Valley in Oregon. Along the coast is a series of low mountain ranges known as the Pacific Coast Ranges.

Alaska contains some of the most dramatic scenery in the country: tall, prominent mountain ranges rise sharply from broad, flat tundra plains, and on the islands off the south and southwest coasts are many volcanoes. Hawaii, far to the south of Alaska in the Pacific Ocean, is a chain of tropical, volcanic islands, popular as a tourist destination for many from East Asia and the mainland United States. The territories of Puerto Rico and the U.S. Virgin Islands encompass a number of tropical isles in the northeastern Caribbean Sea. In the Pacific Ocean, the territories of Guam and the Northern Mariana Islands occupy the limestone and volcanic isles of the Mariana archipelago, and American Samoa (the only populated US territory in the southern hemisphere) encompasses volcanic peaks and coral atolls in the eastern part of the Samoan Islands chain.

The geography of the United States varies across its immense area. Within the continental U.S., eight distinct physiographic divisions exist, though each is composed of several smaller physiographic subdivisions.

The Atlantic coast of the United States is low, with minor exceptions. The Appalachian Highland owes its oblique northeast-southwest trend to crustal deformations which, in very early geological time, gave a beginning to what later became the Appalachian mountain system. This system had its climax of deformation so long ago (probably in Permian time) that it has since been very generally reduced to moderate or low relief. It owes its present-day altitude either to renewed elevation along the earlier lines or to the survival of the most resistant rocks as residual mountains. The oblique trend of this coast would be even more pronounced but for a comparatively modern crustal movement: a depression in the northeast resulted in an encroachment of the sea upon the land, while the southeastern section underwent an elevation resulting in the advance of the land upon the sea. While the Atlantic coast is relatively low, the Pacific coast is, with few exceptions, hilly or mountainous. This coast has been shaped chiefly by geologically recent crustal deformations, and hence still preserves greater relief than the Atlantic coast. The low Atlantic coast and the hilly or mountainous Pacific coast foreshadow the leading features in the distribution of mountains within the United States.
The east coast Appalachian system, originally forest covered, is relatively low and narrow and is bordered on the southeast and south by an important coastal plain. The Cordilleran system on the western side of the continent is lofty, broad and complicated, having two branches: the Rocky Mountain System and the Pacific Mountain System. Between these mountain systems lie the Intermontane Plateaus. Both the Columbia River and the Colorado River rise far inland near the easternmost members of the Cordilleran system and flow through plateaus and intermontane basins to the ocean. Heavy forests cover the northwest coast, but elsewhere trees are found only on the higher ranges below the alpine region. The intermontane valleys, plateaus and basins range from treeless to desert, with the most arid region being in the southwest. The Laurentian Highlands, the Interior Plains and the Interior Highlands lie between the two coasts, stretching from the Gulf of Mexico northward, far beyond the national boundary, to the Arctic Ocean. The central plains are divided by a hardly perceptible height of land into a Canadian and a United States portion, and it is from the United States side that the great Mississippi system discharges southward to the Gulf of Mexico. The upper Mississippi basin and some of the Ohio basin form the semi-arid prairie region, with trees originally found only along the watercourses. The uplands towards the Appalachians were included in the great eastern forested area, while the western part of the plains has so dry a climate that its native plant life is scanty, and in the south it is practically barren.

Due to its large size and wide range of geographic features, the United States contains examples of nearly every global climate. The climate is subtropical in the Southern United States, continental in the north, tropical in Hawaii and southern Florida, polar in Alaska, semiarid in the Great Plains west of the 100th meridian, Mediterranean in coastal California, and arid in the Great Basin and the Southwest. Its comparatively favorable agricultural climate contributed (in part) to the country's rise as a world power, with infrequent severe drought in the major agricultural regions, a general lack of widespread flooding, and a mainly temperate climate that receives adequate precipitation.

The main influence on U.S. weather is the polar jet stream, which migrates northward into Canada in the summer months and southward into the US in the winter months. The jet stream brings in large low-pressure systems from the northern Pacific Ocean that enter the US mainland over the Pacific Northwest. The Cascade Range, Sierra Nevada and Rocky Mountains pick up most of the moisture from these systems as they move eastward; by the time they reach the High Plains, they are greatly diminished, much of their moisture having been sapped by the orographic effect as they were forced over several mountain ranges. Once a system moves over the Great Plains, the uninterrupted flat land allows it to reorganize, which can lead to major clashes of air masses. In addition, moisture from the Gulf of Mexico is often drawn northward. Combined with a powerful jet stream, this can lead to violent thunderstorms, especially during spring and summer. Sometimes during winter these storms can combine with another low-pressure system as they move up the East Coast and into the Atlantic Ocean, where they intensify rapidly.
These storms are known as nor'easters and often bring widespread, heavy rain, wind and snowfall to New England. The uninterrupted grasslands of the Great Plains also lead to some of the most extreme climate swings in the world: temperatures can rise or drop rapidly, winds can be extreme, and heat waves or Arctic air masses often advance uninterrupted through the plains.

The Great Basin and Columbia Plateau (the Intermontane Plateaus) are arid or semiarid regions that lie in the rain shadow of the Cascades and Sierra Nevada, with precipitation averaging less than 15 inches (38 cm). The Southwest is a hot desert, with temperatures exceeding 100 °F (37.8 °C) for several weeks at a time in summer. The Southwest and the Great Basin are also affected by the monsoon from the Gulf of California from July to September, which brings localized but often severe thunderstorms to the region. Much of California has a Mediterranean climate, with sometimes excessive rainfall from October to April and nearly no rain the rest of the year. In the Pacific Northwest rain falls year-round, but is much heavier during winter and spring. The mountains of the west receive abundant precipitation and very heavy snowfall. The Cascades are one of the snowiest places in the world, with some locations averaging over 600 inches (1,524 cm) of snow annually, although the lower elevations closer to the coast receive very little snow. Florida has a subtropical climate in the northern part of the state and a tropical climate in the southern part; summers are wet and winters are dry. Annually, much of Florida, as well as the deep southern states, is frost-free. The mild winters of Florida allow a massive tropical fruit industry to thrive in the central part of the state, making the US second only to Brazil in world citrus production.

Another significant (but localized) weather effect is lake-effect snow, which falls south and east of the Great Lakes, especially in the hilly portions of the Upper Peninsula of Michigan and on the Tug Hill Plateau in New York. The lake effect dumped well over 5 feet (1.52 m) of snow in the area of Buffalo, New York, during the 2006-2007 winter. The Wasatch Front and Wasatch Range in Utah can also receive significant lake-effect accumulations from the Great Salt Lake.

In northern Alaska, tundra and arctic conditions predominate, and the temperature has fallen as low as −80 °F (−62.2 °C). At the other end of the spectrum, Death Valley, California once reached 134 °F (56.7 °C), the highest temperature ever recorded on Earth. On average, the mountains of the western states receive the highest levels of snowfall on Earth. The greatest annual snowfall level is at Mount Rainier in Washington, at 692 inches (1,758 cm); the record there was 1,122 inches (2,850 cm) in the winter of 1971-72. That record was broken by the Mt. Baker Ski Area in northwestern Washington, which reported 1,140 inches (2,896 cm) of snowfall for the 1998-99 season. Other places with significant snowfall outside the Cascade Range are the Wasatch Mountains in Utah, the San Juan Mountains in Colorado, and the Sierra Nevada in California. In the east, the region near the Great Lakes and the mountains of the Northeast receive the most snowfall, although such levels do not approach those of the western United States.
Along the northwestern Pacific coast, rainfall is greater than anywhere else in the continental U.S., with the Quinault Rainforest in Washington receiving an average of 137 inches (348 cm). Hawaii receives even more, with 404 inches (1,026 cm) measured annually in the Big Bog on Maui. Pago Pago Harbor in American Samoa is the rainiest harbor in the world, because of the 523-meter Rainmaker Mountain. The Mojave Desert, in the southwest, is home to the driest locales in the U.S.; Yuma, Arizona, averages 2.63 inches (6.7 cm) of precipitation each year.

In central portions of the U.S., tornadoes are more common than anywhere else on Earth and touch down most commonly in the spring and summer. Deadly and destructive hurricanes occur almost every year along the Atlantic seaboard and the Gulf of Mexico. The Appalachian region and the Midwest experience the worst floods, though virtually no area in the U.S. is immune to flooding. The Southwest has the worst droughts; one is thought to have lasted over 500 years and to have devastated Ancestral Pueblo peoples. The West is affected by large wildfires each year.

The United States is affected by a variety of natural disasters yearly. Although drought is rare, it has occasionally caused major economic and social disruption, such as during the Dust Bowl (1931-1942), when farmland failed throughout the Plains, entire regions were virtually depopulated, and dust storms, beginning in the southern Great Plains and reaching to the Atlantic Ocean, ravaged the land. The Great Plains and Midwest, owing to contrasting air masses, see frequent severe thunderstorms and tornado outbreaks during spring and summer, with around 1,000 tornadoes occurring each year. The strip of land from north Texas north to Kansas and Nebraska and east into Tennessee is known as Tornado Alley, where many houses have tornado shelters and many towns have tornado sirens because of the very frequent tornado formation in the region.

Hurricanes are another natural disaster found in the US, and can hit anywhere along the Gulf Coast or the Atlantic Coast, as well as Hawaii in the Pacific Ocean. Particularly at risk are the central and southern Texas coasts, the area from southeastern Louisiana east to the Florida Panhandle, peninsular Florida, and the Outer Banks of North Carolina, although any portion of the coast could be struck. The U.S. territories and possessions in the Caribbean, such as Puerto Rico and the U.S. Virgin Islands, are also vulnerable to hurricanes because of their location in the Caribbean Sea. Hurricane season runs from June 1 to November 30, with a peak from mid-August through early October. Some of the more devastating hurricanes have included the Galveston Hurricane of 1900, Hurricane Andrew in 1992, Hurricane Katrina in 2005, and Hurricanes Harvey and Maria in 2017. Tropical cyclones rarely make landfall on the Pacific Coast of the United States because water temperatures there are too cool to sustain them; however, the remnants of tropical cyclones from the Eastern Pacific occasionally impact the western United States, bringing moderate to heavy rainfall.

Occasional severe flooding is experienced, as in the Great Mississippi Flood of 1927, the Great Flood of 1993, and the widespread flooding and mudslides caused by the 1982-83 El Niño event in the western United States. Flooding is still prevalent, mostly on the East Coast, during hurricanes or other inclement weather, as in 2012, when Hurricane Sandy devastated the region.
Localized flooding can, however, occur anywhere, and mudslides from heavy rain can cause problems in any mountainous area, particularly the Southwest. Large stretches of desert shrub in the west can fuel the spread of wildfires. The narrow canyons of many mountain areas in the west, combined with severe summer thunderstorm activity, lead to sometimes devastating flash floods, while nor'easter snowstorms can bring activity to a halt throughout the Northeast (although heavy snowstorms can occur almost anywhere).

The West Coast of the continental United States makes up part of the Pacific Ring of Fire, an area of heavy tectonic and volcanic activity that is the source of 90% of the world's earthquakes. The American Northwest sees the highest concentration of active volcanoes in the United States, in Washington, Oregon and northern California along the Cascade Mountains. There are several active volcanoes in the islands of Hawaii, including Kilauea, in ongoing eruption since 1983, but they do not typically adversely affect the inhabitants of the islands; there has not been a major life-threatening eruption on the Hawaiian islands since the 17th century. Volcanic eruptions can occasionally be devastating, however, as in the 1980 eruption of Mount St. Helens in Washington.

The Ring of Fire makes California and southern Alaska particularly vulnerable to earthquakes, which can cause extensive damage, as in the 1906 San Francisco earthquake or the 1964 Good Friday earthquake near Anchorage, Alaska. California is well known for seismic activity and requires large structures to be earthquake-resistant to minimize loss of life and property. Outside of devastating earthquakes, California experiences minor earthquakes on a regular basis. There were about 100 significant earthquakes annually from 2010 to 2012, compared with a past average of 21 a year; this increase is believed to be due to the deep disposal of wastewater from fracking. None has exceeded a magnitude of 5.6, and no one has been killed.

Other natural disasters include tsunamis around the Pacific Basin, mudslides in California, and forest fires in the western half of the contiguous U.S.

The United States holds many areas for the use and enjoyment of the public, including national parks, national monuments, national forests and wilderness areas. In terms of human geography, the United States is inhabited by a diverse set of ethnicities and cultures.
Teaching your children to take care of their bodies is probably a parent's most desired lesson. Whether you want to focus on the food groups and eating a balanced diet, exercise and fitness, or even just the fascinating human body, this Theme Day offers a lot for your family. Print out the Family Theme Day Planner and decide which activities you'd like to do and in what order. This Theme Day can be used in many ways to open many different discussions depending on the age of your children and your family's present needs. You can talk about nutrition and healthy eating, exercise and fitness, hygiene, the importance of visiting the doctor and dentist, safety (from using helmets and seatbelts to not talking to strangers), puberty and growth, eating disorders, drugs/alcohol/smoking, etc.

Play your child's favourite music to encourage him/her to get up and dance for this theme day, or choose some of your own favourites! Here are two tunes to get your family moving: "I Like to Move It" by Reel 2 Real and "Footloose" by Kenny Loggins. There are many children's songs about fruits and veggies, like "Fruit Salad" by the Wiggles or "Vegetable Town" by the Barenaked Ladies, plus there are children's songs about your body, like "Head, Shoulders, Knees and Toes" and "You Brush Your Teeth."

You can find many free colouring pages online by using your favourite search engine and typing in "Healthy Coloring Pages," or print out my Stay Healthy Coloring Page. Ask your child what he/she thinks the various pictures on the colouring page represent and how they relate to staying healthy.

JOURNALING QUESTION PROMPT: Write out one or more of the following questions in your Family Theme Day Scrapbook or on a piece of paper to glue in your scrapbook: What do you do to stay healthy? Can you name all the food groups? How can you be more healthy? What things are unhealthy? Choose the level of your child:
· Toddler – discuss the answer(s) out loud first and have your child draw a picture of the answer.
· Preschooler/Kindergartener – discuss the answer(s) out loud first and write the answer down for him/her, leaving one word for him/her to write out himself/herself with your help. You could also encourage him/her to draw a picture as well.
· Early Grade School – have your child either write out the answer himself/herself (encourage phonetic spelling) without your help, or offer to help by spelling each word out loud one word at a time.
· Grade School – have your child write a sentence or two on his/her own and then read over and discuss the response. (You decide whether to correct the spelling or not.)
· Older Child – have your child write a longer response (paragraph).
· As a Challenge – instead of a question, ask your older child to write a story or poem about staying healthy.

Print out a "Stay Healthy" Word Search and check the answer keys. Raid your child's bookshelves to find any books about health, nutrition, exercise or the body. Go to the library with your child to find some books on being healthy, or go on your own to find books about health, both fiction and nonfiction, to have on hand for your theme day. Many libraries allow you to go online and search for titles based on subject (search for "Health," "Nutrition," "Body," "Exercise," "Safety," etc. under "Children's Books"). Reserve them if you can to save time.
Try to find some of these nonfiction/learning titles:
· Fit For Life, written by Alexandra Parson and illustrated by John Shackell and Stuart Harrison, Franklin Watts, 1996 – This is a good book for older children, as it has more writing and more topics than some of the others listed below, plus it mentions serious subjects like anorexia, alcohol, smoking and drugs.
· Growing Strong: A Book About Taking Care of Yourself, by Christina Goodings and illustrated by Masumi Furukawa, A Lion Children's Book, 2009 – This gives a good general overview of different things your younger child can do to be healthy.
· The Kid's Guide to Becoming the Best You Can Be, by Jill Frankel Hauser with illustrations by Michael Kline, A Williamson Kids Can! Book, 2006 – This is a bigger book, recommended for ages 8 to 13 on the cover, but it covers important subjects like confidence, resilience, initiative, perseverance and responsibility, among other topics, in short sections; it even offers different activities and articles about real people who possess these qualities.
· The Monster Health Book: A Guide to Eating Healthy, Being Active & Feeling Great for Monsters & Kids!, by Edward Miller, Holiday House, 2006 – This is a favourite of my kids, who keep asking me to take it out from the library. It has pictures of a green monster, a boy and a girl as they explore the five food groups, nutrition, exercise, check-ups, self-esteem, etc., in order to be healthy. This is also a good book to start a discussion about smoking, alcohol and drugs, as the second-to-last section is entitled "Say No to Bad Health."
· Why Must I... Eat Healthy Food?, by Jackie Gaff with photography by Chris Fairclough, Cherrytree Books, 2005 – This is a good book for younger readers, as it has big print, bright photographs and easy-to-understand text emphasising the importance of healthy eating.
· You Can't Take Your Body to a Repair Shop, by Harriet Ziefert and Fred Ehrlich, M.D., with drawings by Amanda Haley, Blue Apple Books, 2004 – An interesting but not frightening look at everyday illness.

Here are some picture books:
· Looking After Me: Exercise, by Liz Gogerly and Mike Gordon, Crabtree Publishing, 2009 – Twins Tom and Lily learn to appreciate exercise by watching their fit and healthy grandmother.
· My Friend the Doctor, by Joanna Cole and illustrated by Maxie Chambliss, HarperCollins Publishers, 2005 – Part of being healthy is going to the doctor for check-ups; if your little one is afraid to go to his/her check-up, use this gentle picture book to help.
· Showdown at the Food Pyramid, by Rex Barron, G.P. Putnam's Sons, 2004 – The food pyramid is unbalanced when King Candy Bar and his friends Hot Dog, Donut and others try to take over.

Here are some books about the body:
· The Little Brainwaves Investigate... Human Body, illustrated by Lisa Swerling and Ralph Lazar, DK Publishing, 2010 – Illustrations of the Little Brainwaves (tiny people) are mixed with photographs in this fact-filled book about the body.
· Under Your Skin: Your Amazing Body, by Mick Manning and Brita Granström, Albert Whitman & Company, 2007 – This is a lift-the-flap book that still offers a lot of information about the body with its playful illustrations.

If you want to cook with your children, try these cookbooks:
· Pretend Soup and Other Real Recipes, by Mollie Katzen and Ann Henderson, Tricycle Press, 1994 – This is a great preschooler cookbook that both my boys have loved. My eldest especially loved it when I'd read the children's comments out loud.
The book has simple recipes, with a parents' section for each recipe and a two-page spread of illustrated instructions for children to follow.
· Salad People and More Real Recipes, by Mollie Katzen, Tricycle Press, 2005 – This is the sequel to the above cookbook and is just as good, with more simple and healthy recipes for you to make with your child.

Here are two cookbooks for parents wanting to "add nutrition" to the meals of picky eaters by adding pureed vegetables:
· Deceptively Delicious, by Jessica Seinfeld, Collins, 2007 – This is a good one to start with, as the purees are all one-ingredient blends, making them easy to whip up and freeze. This cookbook has breakfast, dinner and dessert ideas, among others, and many have become family staples in our house, like the grilled cheese (with sweet potato or butternut squash) and the macaroni and cheese (with butternut squash or cauliflower).
· The Sneaky Chef, by Missy Chase Lapine, Running Press, 2007 – This one has coloured purees made up of blends of vegetables, making it a bit more work (but nutritionally worth the effort), and it also offers a healthy flour blend as well as a breading recipe, giving more nutritional boosts to favourite meals and baked treats. The Cocoa Chocolate Chip Pancakes (made with the flour blend plus a purple puree of blueberries and spinach) were a big hit with my boys, as were the easy "breakfast ice cream" recipes (using frozen fruit and regular yogurt). I also liked the "quick fix" recipes included.

HEALTHY PLATES CRAFT:
NOTE: You may want to review the five food groups with your children before you do this craft. These are great coloring sheets to use: http://www.choosemyplate.gov/kids/downloads/ColoringSheet.pdf and http://www.choosemyplate.gov/kids/downloads/ColoringSheetBlank.pdf
Materials: Old magazines that can be cut, a paper plate, child-safe scissors, a glue stick, and a damp facecloth for sticky fingers.
Step 1: Look through magazines with your child and help him/her find pictures of food for each of the following food groups: Grains, Meat or Alternate Protein, Dairy Product, Vegetable, and Fruit.
Step 2: Help your child cut out the five pictures.
Step 3: Let your child glue the pictures to a paper plate using the glue stick.
Step 4: Review the choices on the plate.

SPINNING FEET CRAFT:
Materials: White construction paper, a metal paper fastener, a pencil, markers or crayons, child-safe scissors, and a pin or needle (for adult use only).
Step 1: Cut out a circle (we used a cup to trace) from the white construction paper.
Step 2: Cut out a rectangle (we used a plastic lid from a deck of cards to trace) from the white construction paper.
Step 3: Show your child (illustrate on a scrap piece of paper) how to draw a star-like arrangement of legs on the circle (you'll need about five or six legs) and let him/her draw the legs and feet on the circle.
Step 4: Let your child draw a person from the waist up on the rectangular piece of paper.
Step 5: Have your child colour the two pictures.
Step 6: (Parent step) Using the pin or sewing needle, prick a hole in the centre of the circle picture of legs and then prick a hole near the base of the rectangular picture of the person.
Step 7: Insert the paper fastener through the rectangular paper and then through the circle (through the pin pricks), and then spread the fastener tabs at the back to ensure the fastener stays attached to the two pieces of paper.
Step 8: Make the paper circle spin with your finger, which will make the picture of the person appear to be running.
Step 9: Discuss how exercise is an important part of being healthy.

RED, GREEN, YELLOW, BLUE ACTION STICK:
Materials: An empty paper towel roll; red, green, yellow and blue paper; a pencil; a bowl to trace; child-safe scissors; a glue stick; a stapler; tin foil.
Step 1: Trace around the bowl on each of the four pieces of paper to make four circles (red, green, yellow and blue).
Step 2: Help your child cut out the four circles.
Step 3: Wrap the paper towel roll in tin foil.
Step 4: Apply glue to half of the red circle and attach the blue circle.
Step 5: Apply glue to half of the yellow circle and attach the green circle.
Step 6: Open the two attached (red and blue) circles gently (remember, half is glued) and insert one end of the paper towel roll. Then press the circles together to squish the paper towel roll at the end and staple into place.
Step 7: Open the two attached (yellow and green) circles gently (remember, half is glued) and insert the other end of the paper towel roll. Then press the circles together to squish the paper towel roll at the end and staple into place.
Step 8: Use this craft for the game listed below under FOR FUN!

POP STICK FRAME:
NOTE: A positive self-esteem is an important part of health that is sometimes forgotten. Remember, if you love yourself you will make good choices for your body!
Materials: Four popsicle or craft sticks, white glue, markers, waxed paper, stickers (optional), paper.
Step 1: Have your child decorate the popsicle sticks using markers.
Step 2: Glue the four popsicle sticks together with the white glue to form a square, leaving them to dry on top of waxed paper.
Step 3: When the wooden frame is dry, your child may decorate it some more with stickers if desired.
Step 4: While the frame is drying, cut out some paper to the size of the frame.
Step 5: Have your child draw a self-portrait on the paper.
Step 6: Apply glue to the back of the frame and press the self-portrait onto it so the decorated side frames the drawing.
Step 7: Allow to dry and then display proudly!

BEING HEALTHY COLLAGE:
Materials: A sheet of coloured paper, old magazines that can be cut up, child-safe scissors, a glue stick, and a damp cloth for sticky fingers.
Step 1: Look through old magazines with your child and find examples of healthy things (food, activities, etc.) to cut out.
Step 2: After the pictures are cut out, have your child glue them to the piece of coloured paper.
Step 3: Display, or glue into your Family Theme Day Scrapbook.

You could also have a Top Food Challenge in conjunction with this Theme Day! Check here to learn how to make a Top Food Challenge for your kids: http://familythemedays.ca/Themes/TopFood.htm

For a special start to the day, serve some whole wheat pancakes with berries on top; search your favourite cookbook or online for a recipe. For a quicker healthy breakfast, serve homemade muesli topped with banana and yogurt. You can search for a recipe or just mix oats, bran flakes and other healthy cereals with dried fruits, shredded coconut and chopped nuts (if there are no allergies in your family).

For a healthy snack today, offer veggies and dip (you can make your own with low-fat sour cream or yogurt and a package of dried salad dressing or onion soup mix). For something sweeter, make some fruit kabobs (better for older kids who won't use the skewers as weapons) or a fruit salad. A family favourite of ours is sliced apple spread with peanut butter. My one son likes them squished together as a sandwich, while my other son prefers them open-faced.
Another healthy, energy-packed snack is a homemade trail mix: mix favourite dried fruits, nuts (if there are no allergies in your family), Cheerios and pretzels.

For a healthy lunch, serve some vegetable soup and make a sandwich that covers several food groups. Another healthy choice is to make some baked whole wheat macaroni and cheese.

Make a Five Food Groups Pizza: Make some homemade whole wheat dough (use your favourite recipe or search online). While the dough is rising, make some homemade pizza sauce (use a simple can of tomato sauce and add fresh or dried herbs and garlic, then heat on the stove top). Roll out the dough on a floured surface and move it to a preheated pizza stone or a lightly oiled baking sheet. Add some red bell pepper (for the vegetable food group) and some fresh pineapple (for the fruit food group) on top of the dough. Top with grated cheese (for the dairy group) and some diced chicken or ham (for the meat and alternatives group) so that, with the whole wheat crust covering the grains group, all five groups are on the pizza, then bake.

Pomegranate Smoothie: Blend 1 cup of pomegranate juice, 1 cup vanilla yogurt, 1 frozen banana, and a handful of frozen fruit like mango, strawberries or raspberries in a blender.

Make a Yogurt Parfait: Layer flavoured yogurt, granola (or cereal), and fresh berries (or other fruit) in a glass and serve with a spoon.

Check these sites to see the daily food requirements for each member of your family:
For the USDA's recommendations check here: http://www.choosemyplate.gov/
For Canada's Food Guide check here: http://www.hc-sc.gc.ca/fn-an/food-guide-aliment/index-eng.php

Print out a copy of my Weekly Food Chart for each member of the family and keep track of your food choices for the week. At the end of the week, go through the chart and see if your child can name the food areas he/she needs to focus on more.

HEALTHY vs JUNK CHART: Print out a copy of my Healthy vs. Junk Food Chart. Together as a family, either write words under each column or find stickers or magazine pictures to fit under each category: either healthy or a "sometimes" food.

Take out some cereal boxes and other food products and read the labels together as a family. My children ended up making this into a fun activity: my eldest grabbed some play money from a board game and my youngest set up the chairs to be a car. They then went to "the store," where I'd offer two or three choices (boxes). We'd read the labels together and then they'd have to choose what to "buy." We examined cereals, crackers, pastas, soups, lunch snacks and sandwich spreads, among others.

MENU PLAN AND WRITE A SHOPPING LIST: Based on what your family has learned throughout this Theme Day, come up with a healthy weekly menu together and write out the shopping list. You can look through cookbooks together for inspiration if desired.

FAMILY FITNESS CHALLENGE: Print out my Family Fitness Challenge Chart and, as a family, keep track of your fitness activities for the week. Add up the hours at the end of the week and then do it again for a second week to see if you can beat your first total!

Print out my Safety Brainstorm Worksheet and, together as a family, brainstorm ways to be safe (wearing a helmet, wearing a seat belt, looking both ways, not doing dangerous dares, no drugs, no smoking) and write them on the worksheet. NOTE: As a further craft, you could also encourage your children to make a safety poster based on one item from the brainstorm.
For Games, Activity Sheets, Songs, Recipes and more just for kids check out this page: http://www.choosemyplate.gov/kids/index.html
There's a lot of info on this site for kids, parents and teens: http://kidshealth.org/kid/index.jsp
This one has recipes and games among other things: http://nutritionexplorations.org/kids/
This one is a teen site: http://thecoolspot.gov/

Thank you to Meghan from Washington, D.C. for recommending this guide covering preschool child nutrition, snack foods for preschoolers, food safety tips, a healthy balanced diet, the various food groups, physical activity, nutritional needs during pregnancy and breast feeding, and more: http://krilloil.com/blog/nutrition-pyramid/
She also recommended this page, which covers preschool child nutrition: http://krilloil.com/blog/nutrition-pyramid/#NutritionPreschoolYears and this one entitled "Serve Up Good Nutrition for Preschool children" - http://www.webmd.com/parenting/features/serve-up-good-nutrition-for-preschool-children

Many games get your body moving and hence are good for you. Hopscotch and jump rope are just two examples. You could brainstorm games and types of exercise as a family and keep it as a list of healthy activities.

RED, GREEN, YELLOW and BLUE: Use the craft from above and play "Red, Green, Yellow and Blue." Make up different actions for each colour. We had red as stop, green as run, yellow as slow, and blue as spin. One person is the caller: they show a colour from the wand (craft) while everyone else must move as commanded.

Search through your child's DVD/video collection (or visit your local library or the video store beforehand) to find your child's favourite shows focusing on health. Try to find these titles:
· The Magic School Bus: The Human Body, Scholastic, 2005 – Teacher Ms. Frizzle takes her students on three different journeys, each exploring the human body.
· Sid the Science Kid: Feeling Good Inside and Out, The Jim Henson Company, 2009 – This has four health related episodes examining brushing one's teeth, eating healthy foods, being active instead of watching T.V. all weekend long, and washing one's hands. Plus there is a bonus episode about vaccinations.
This is a movie that would fit this Theme Day:
· Osmosis Jones – a movie where a white blood cell is the hero; it features both animation and live action (be warned, some parts are gross!!!)

Family Walk: soak up some vitamin D, which will help you absorb calcium and help your immune system. Be sure to wear sunscreen though. Go for a family walk around your neighbourhood or through a park. Visit a community fitness facility and see what activities you can do together as a family to get active: skating, swimming, running, climbing... Grab your running shoes and get moving!

[Photo captions: Healthy Plates Craft; Spinning Feet Craft; Red, Green, Yellow, Blue Action Stick; Popsicle Stick Frame for Self-Portraits; Apple and Peanut Butter Snack; 5 Food Groups Pizza; Being Healthy Collage (Photo: C Wright); Healthy vs. Junk Food Chart]
Introduction: Environmental health (EH) services have a long history in Ethiopia, but the quality of these services and the magnitude of environmental health professionals' engagement have never been assessed. This study was conducted to assess the quality of environmental health services in different sectors and professionals' level of engagement in Eastern Ethiopia.

Methods: An institution-based cross-sectional mixed-methods study design was implemented. A cluster sampling technique was employed to select 83 participants. Data were collected using a pretested questionnaire and an interview guide. Descriptive, bivariate, multivariate, and thematic analyses were carried out.

Results: Professionals' performance in most services was reported to be average or low. Only 19.5% of participants reported good job satisfaction. The multiple logistic regression analysis identified factors associated with selected environmental health services. The odds of identifying environmental problems were associated with profession (adjusted odds ratio (AOR): 4.1; 95% confidence interval (CI): 1.3-7.6) and level of education (AOR: 3.1; 95% CI: 0.9-5.9). The factors contributing to introducing innovative solutions to EH problems were type of institution (AOR: 3.1, 95% CI: 1.6-9.3), profession (AOR: 3.4, 95% CI: 1.1-12.2), and level of support and emphasis offered (AOR: 5.6, 95% CI: 2.2-11.9). Level of job satisfaction was also associated with the above-mentioned independent variables.

Conclusion: The current study showed a low level of professionals' engagement and identified factors associated with the quality of environmental health services in different sectors. Therefore, the Ethiopian Federal Ministry of Health and other concerned ministries, agencies, and authorities should intervene accordingly to improve the service and the level of professionals' engagement.

Environmental health may be defined as the art and science of controlling physical, chemical, and biological factors external to a person, and all related factors impacting behavior.
It encompasses the assessment and control of those environmental factors that can potentially affect human health.1–5 Environmental health (EH) is a critical component of most government public health programs at national, regional, and local levels.6,7 Environmental health services are delivered through the Ministry of Health (MOH), public health agencies, environmental protection agencies, or other related sectors.5,7,8

The scope of environmental health services in Ethiopia is wide and varies in each sector.9 The service includes environmental protection (solid and liquid waste management, air quality monitoring, and environmental impact assessment), disease prevention and control (vector and rodent control, routine community diagnosis and inspection), occupational health and safety, food safety (registration, licensing, and inspection of food establishments), and institutional and residential health services.6,9,10 Despite the involvement of a wide array of sectors in environmental health services (EHS), the principles of environmental health are universal and applicable in each sector.4,11

Environmental health services in Ethiopia date back to 1908, when hygiene and sanitation activities were included as a single service in the first formal health services program of the country.9,12 This was followed by a proclamation on hygiene and sanitation formulated by the Ministry of Interior in 1942 to 1943.12,13 Although there was little progress over the next few decades, a recognizable hierarchy and specific responsibilities were established during the shift in health policy to a primary health care setting in the 1970s.14 At the time, the Department of Environmental Health under the Ministry of Public Health was given responsibility for water and sanitation, food hygiene, industrial hygiene, and quarantine services.12 Following the change of government in 1991, the new government acknowledged the importance of EH, as indicated in Article 44/1 of the Federal Democratic Republic of Ethiopia constitution, which describes the right of all peoples to a clean and healthy environment.15 Moreover, environmental health service development was mentioned among the priorities in the national health policy.14,16,17

Although environmental health services have a long history in Ethiopia, there were no specific EH personnel designated for the service until the UN Relief and Rehabilitation Administration (UNRRA) trained sanitary personnel for a duration of 6 months in 1946.14 The establishment of the Gondar Public Health College and Training Center in 1954 made it possible to train sanitarians at diploma level, and the first batch graduated in 1957.18 In 1988, Jimma Health Sciences Institute started to produce EH technicians at advanced diploma level, and it began training environmental health graduates at Bachelor's degree level in 1993.14 Later, other institutions, including Haramaya University (formerly known as Alemaya University) and Debub University (currently known as Hawassa University), started to train EH technicians at diploma level; these programs were later upgraded to produce Bachelor of Science graduates as Environmental health officers (EHOs) from 2003 onward.12,19 The involvement of these professionals was mainly in the Ministry of Health, and the focus was on water, sanitation and hygiene, and disease prevention and control.12

Although the EH service has a long history in Ethiopia, it has received very little attention to date.9 Currently, the service encompasses components such as solid waste management, liquid waste management, water supply,
food safety, residential and institutional health, occupational health, personal hygiene, vector control, and related activities.13,16,20 The provision of environmental health services is now no longer restricted to the Ministry of Health. In addition, the Ministry of Urban Development and Construction; the Ministry of Water, Irrigation and Energy; the Environment, Forest and Climate Change Commission; the Ethiopian Environmental Protection Agency; the Ethiopian Standards Agency; the Food, Medicine and Health Care Administration and Control Authority; and non-governmental organizations (NGOs) are widely involved in environmental health service delivery.9,12

The Ministry of Urban Development and Construction is responsible for solid waste management, development of parks and green areas, and housing-safety-related environmental health services.21,22 Meanwhile, environmental health services related to water quality and wastewater management are mainly the responsibility of the Ministry of Water, Irrigation, and Energy.23 The implementation of these programs is mainly through city and town municipalities. The Environmental Protection Agency, the Ethiopian Standards Agency, and the Food, Medicine, and Health Care Administration and Control Authority of Ethiopia are primarily regulatory bodies and law enforcers concerned with environmental emissions, food safety, institutional health, and related services.24 Similarly, the professionals involved come from a wide array of educational backgrounds, ranging from Environmental health officers and environmental health technicians to medical doctors and engineers.12,25–27

Data on environmental health services and professionals' level of engagement are also limited.28–30 To date, no studies have been conducted to assess environmental health services and professionals' level of engagement in the study area, or in Ethiopia as a whole. Thus, this study aimed to assess environmental health services in different sectors, professionals' level of engagement, and its associated factors. The findings of this study will provide significant evidence related to the planning and implementation of environmental health services and will offer pertinent information to policy makers, health planners, town municipalities, researchers, health bureaus, non-governmental organizations, and other sectors involved in environmental health service provision. Moreover, the results of this study could be used for integrated environmental health service delivery among different stakeholders.

Study design and setting

An institution-based mixed-methods cross-sectional study was conducted in selected areas of Eastern Ethiopia, namely East Hararghe Zone, Harari region, and Dire Dawa city administration council, from June 01 to August 31, 2014. The selected areas are geographically dispersed and divided into a number of "woredas" (districts) and sub-cities (in the case of Dire Dawa city). Moreover, the selected areas are under 3 different regional administrations (East Hararghe Zone is under Oromia Region, Dire Dawa city administration council is under the administration of the federal government, and Harari region is a separate regional administration by itself).
Institutions identified as environmental health service providers (health bureaus, hospitals, health centers, town municipalities, urban beautification agencies, environmental protection agencies, different NGOs, and Food, Medicine and Health Care Administration and Control Authorities at city and region level) were included in this study.

Study population and sample size

All professionals recognized as responsible for environmental health service provision in different sectors in the selected areas were the source population. Cluster sampling was employed because the areas are geographically dispersed and fall under politically separate regional administrations. We considered each "woreda" (district) in East Hararghe zone and Harari region as a cluster; sub-cities were taken as clusters in the case of the Dire Dawa administrative council. Initially, we collected the number of professionals from different sources and identified 125 professionals in the 3 areas (71 from East Hararghe zone, 26 in Harari region, and 28 in Dire Dawa city administration). Based on this number, we calculated the required sample size using the single proportion formula (95% confidence interval, 50% proportion, 5% margin of error) and obtained a sample size of 95 (a reconstruction of this calculation is given below). Based on the assumption of homogeneity among the clusters (districts and sub-cities) of each area, we selected clusters using a simple random sampling technique. The number of subjects selected was proportional among the 3 selected areas. A higher number of professionals was identified in East Hararghe zone; thus, a greater number of clusters (13 out of 22 districts) was selected from East Hararghe zone, followed by Dire Dawa city administrative council (8 out of 13 sub-cities) and Harari region (6 out of 9 districts). All professionals who were engaged in EH service provision in the selected clusters were included. From the initial assessment, we assumed the selected clusters were sufficient to provide 95 participants. However, we were able to acquire only 83 individuals (41 from East Hararghe, 23 from Dire Dawa city council administration, and 19 from Harari region) from the included clusters. For the qualitative study, professionals with recognized experience were selected using a purposive sampling method, and interviews continued until the information was saturated, which amounted to 24 participants. Some participants took part in both the quantitative and qualitative inquiries.

Data collection tools (questionnaires and an in-depth interview guide) were developed based on different sources: the nationally harmonized Environmental health undergraduate program curriculum, the Ethiopian hygiene and environmental health program, and the CDC's Environmental Public Health Performance Standards.31 Dependent variables were measured using item questions in each category, which later formed a composite score. Composite scores were used to provide a quantitative measure of the dependent variables. The item questions were organized on a 5-point priority scale (1 = None, 2 = Minimal, 3 = Moderate, 4 = Significant, and 5 = Optimal). The questions were verified and modified through senior professionals' comments and pretests. The quantitative data were collected using a self-administered questionnaire, and the investigators were responsible for coordinating the assessment. For the qualitative data, a trained data collector was employed to conduct the in-depth interviews.
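The reported sample size of 95 is consistent with the single proportion formula followed by a finite population correction against the 125 eligible professionals; the correction step is our inference, since the unadjusted formula alone gives roughly 384. A plausible reconstruction:

\[
n_0 = \frac{z^2\, p(1-p)}{e^2} = \frac{1.96^2 \times 0.5 \times 0.5}{0.05^2} \approx 384
\]

\[
n = \frac{n_0}{1 + \dfrac{n_0 - 1}{N}} = \frac{384}{1 + \dfrac{383}{125}} \approx 95
\]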
Moreover, 1 assistant interviewer was assigned to handle tape recording and note-taking during each interview. After each interview, the investigators transcribed the tape recordings.

Data quality assurance

To minimize bias and ensure quality, experienced and trained data enumerators were employed, and a pretest of the data collection tools was conducted. The investigators checked the completed questionnaires on a daily basis to maintain accuracy, completeness, clarity, and consistency. Any error related to clarity, ambiguity, incompleteness, or misunderstanding was resolved on the following day before data collection resumed. To allow subjects to respond freely, the data collection process was conducted privately.

The collected quantitative data were coded, entered into EPIDATA software, and checked for consistency of data entry. Data were cleaned accordingly and exported to STATA version 14 for further analysis. The frequency distribution of dependent and independent variables was computed. Responses for the quantitative variables (identification of environmental problems, perception of having the appropriate knowledge and skill for environmental health services, and introduction of innovative solutions to environmental health problems) were recategorized into binary variables ("Yes" and "No") as required for the appropriate analysis. The sum of the item questions was calculated, and a cut-off was applied to categorize professionals' level of engagement: mean scores below "significant" were coded as "No" and scores of "significant" and above as "Yes." Level of job satisfaction was also measured with item questions and later classified into a binary category of "Good" and "Poor." Bivariate and multivariate logistic regression analyses were performed to ascertain the association between dependent and independent variables. After identifying potential predictor variables in the bivariate analyses, we conducted a multivariable analysis to identify the explanatory variables. We calculated the odds ratio (OR) and a 95% confidence interval (CI) for each analysis. For all statistical significance tests, the cut-off value was set at P < .05.

The qualitative data were analyzed using a thematic analysis approach, whereby 2 members of the research team (YTD and BND) independently read transcripts, identified themes, and coded the data. All authors then reviewed the coded themes and discussed any inconsistencies in coding to refine the code structure. Finally, all data were re-coded using the final, revised code structure, and a framework was developed to highlight key emerging themes representing different aspects of environmental health services. The major themes are presented in the results section alongside the related quantitative findings. The qualitative data analysis was performed entirely manually.

Environmental health services and professionals' level of engagement related to: identification of environmental problems; regular environmental health activities; involvement of professionals in the development of policies, standards, and guidelines and in training; perception of having the appropriate knowledge and skill for environmental health services; professionals' involvement in introducing innovative solutions to environmental health problems; and level of job satisfaction were the dependent variables.
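A minimal sketch of the composite-score binarization and logistic regression workflow described above. The authors used STATA 14; Python/statsmodels is used here only to make the steps concrete, and the column names, the coding of predictors, and the simulated data are our assumptions, not from the paper:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    np.random.seed(0)

    # Simulated stand-in data: 83 respondents, four 5-point items
    # (1 = None ... 5 = Optimal). Item and predictor names are hypothetical.
    items = ["q1", "q2", "q3", "q4"]
    df = pd.DataFrame({c: np.random.randint(1, 6, 83) for c in items})
    df["eho"] = np.random.randint(0, 2, 83)      # 1 = Environmental health officer (assumed coding)
    df["masters"] = np.random.randint(0, 2, 83)  # 1 = Master's degree (assumed coding)

    # Composite score binarized at "significant" (4 on the 5-point scale):
    # mean below 4 -> "No" (0), 4 and above -> "Yes" (1). This encoding is
    # our reading of the cut-off rule described in the text.
    df["engaged"] = (df[items].mean(axis=1) >= 4).astype(int)

    # Multivariable logistic regression; the adjusted odds ratio for each
    # predictor is the exponential of its fitted coefficient.
    X = sm.add_constant(df[["eho", "masters"]].astype(float))
    result = sm.Logit(df["engaged"], X).fit(disp=0)
    table = pd.DataFrame({
        "AOR": np.exp(result.params),
        "CI 2.5%": np.exp(result.conf_int()[0]),
        "CI 97.5%": np.exp(result.conf_int()[1]),
    })
    print(table)

With real data, the same three lines at the end reproduce the AOR-with-CI presentation used in Tables 2 and 3.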
Meanwhile, socio-demographic factors (sex, age, years of experience in environmental health service), level of education, work experience, level of support and emphasis given to environmental health services, type of profession, type of institution, and delivery system were the independent variables.

Environmental health services, the variables of this study, comprised different services, namely: services regarding environmental problem identification; regular/routine activities; implementation of environmental health policies, standards, and planning; and environmental health knowledge, skills, innovation, solutions, and research. Each service was measured with several item questions, each with 5 response options (none, minimal, moderate, significant, and optimal). Definitions for the response options are as follows:
- None: 0%, or absolutely no activity.
- Minimal: greater than zero but not more than 25% of the activity described within the question is met within the EH system or program.
- Moderate: greater than 25% but not more than 50% of the activity described within the question is met within the EH system or program.
- Significant: greater than 50% but not more than 75% of the activity described within the question is met within the EH system or program.
- Optimal: greater than 75% of the activity described within the question is met within the EH system or program.

Good job satisfaction: the worker is satisfied to the extent that the individual will not leave the job for other types of jobs and wants to continue in the current job.
Fair job satisfaction: the worker is satisfied to the extent that the individual will not leave the job for other types of jobs but wants changes to the current arrangements and work environment.
Poor job satisfaction: the individual has decided to leave the job for another type of job and does not want to continue with the current career.
Good support and emphasis: higher officials are very supportive, to the extent that they show their motivation toward environmental health services and provide full support and the necessary budget for environmental health activities.
Fair support and emphasis: higher officials are supportive to the extent that they show motivation toward environmental health activities, but provide little support in the process of acquiring an environmental health budget and delivering services.
Poor support and emphasis: higher officials are not supportive; they show no motivation toward environmental health activities and no support in the process of providing budgets for environmental health services.
Health services: a wide array of services that affect health, including those for physical and mental health.
Environmental health officers: individuals who have at least a diploma or Bachelor's degree titled "Bachelor of Science or diploma in Environmental health" from a recognized college or university that adopted the nationally harmonized curriculum for environmental health professionals, and who hold a license from the FDRE Ministry of Health.
Other professionals: any professional engaged in EH service provision other than Environmental health officers.

Ethical approval was obtained from the Haramaya University College of Health and Medical Sciences Institutional Health Research Ethics Review Committee (IHRERC). Official communication was made with the concerned institutions.
During data collection, the purpose of the study was explained, and written consent was obtained from each participant. Confidentiality and privacy of participants were maintained throughout the research process. All respondents were encouraged to participate in the study and, at the same time, were informed that they had the right to withdraw at any time during the data collection process.

The study included all institutions that were known to provide environmental health services. The response rate was 86.8%, and the majority of respondents were male (63.9%) and from the health sector (86.1%). Disease prevention and health promotion was the major (61.1%) type of environmental health service, and 68.1% of participants were environmental health officers by profession (Table 1).

Table 1. Characteristics of professionals involved in environmental health services in different sectors in eastern Ethiopia, 2014.

The majority (65.3%) of respondents reported an average annual plan achievement related to environmental health services; the remaining 19.4% and 15.3% reported high and low levels of annual plan achievement, respectively. Participants in the qualitative inquiry also responded that environmental health services were below the expected level. One participant said: "The service is compromised to the extent that activity plans were not available in place for the work I am assigned, and mostly I am engaged in services other than environmental health."

Issues with the extent of the service were also related to job descriptions. Another participant responded: "Job descriptions for the services were unavailable for most of the professionals in different sectors, and problems related to EH service provision were getting worse from time to time."

The level of professionals' engagement in environmental problem identification is presented in Figure 1. About 41.7% and 40.3% of them were not engaged in any community diagnosis or pollutant identification activities, respectively. Moreover, 61.1% of the workers had never performed any environmental sample analysis. In addition, 70.8% of participants had not had an opportunity to conduct any intervention based on laboratory results.

Professionals' level of engagement in routine/regular activities is presented in Figure 2. Around 33.3% stated that inspection was not a regular activity. Furthermore, more than 50% of participants had never been involved in environmental impact assessment, quarantine, town sanitation, or vector control activities.

Involvement of environmental health professionals in national and regional planning and in the development of environmental health policies, standards, and guidelines was assessed as well. Only 15.3% said they had participated in the development of regional environmental health policies, and 20.8% had an opportunity to participate minimally in national and regional environmental health service planning. The rest (63.9%) never had a role or opportunity to participate in the planning or development of environmental health service policies, standards, and guidelines. Beyond the lack of opportunity to participate in the development of standards, guidelines, and policies, some respondents admitted that they did not recognize the available policies and regulations relevant to their job.
A participant working in the solid waste management sector responded: "I did not remember or know the available policies and regulations related to environmental health activities."

Conversely, participants working in the disease prevention and health promotion sector claimed that they had been using the available regulations. However, almost all participants believed that the implementation of the available policies and regulations was poor and unrecognizable. An individual engaged in the health sector described it as follows: "The Ministry of Health has policies and regulations related to environmental health services, but the implementation focused on curative services rather than preventive activities such as environmental health."

Respondents who had never had an opportunity for any form of leadership and skill development training accounted for 47.2% of the total; the remaining 33.3%, 15.3%, and 4.2% had minimal, moderate, and significant opportunities for leadership and skill development training, respectively. About 22.2% of respondents perceived themselves as not having the knowledge and skill for environmental health services, and 27.8% never had regular skill and knowledge updates. More than two-thirds of the respondents did not participate in research activities, and another 58.3% lacked support to identify areas that need additional research to improve EH services (Figure 3).

Regarding the level of support and emphasis given to environmental health services, only 41.7% reported a fair or good level of support; the rest (58.3%) reported a poor level of support and emphasis. The qualitative findings revealed that, in theory, the government has shown its support for the service by introducing environmental health services and a preventive health policy. However, participants questioned the implementation of the policy at regional and local levels. Concerning the resources required for the service, all professionals working in the disease prevention and control sector responded that no budget was allocated for environmental health services. An Environmental health officer working in a district health bureau said: "The preventive health policy is only in theory, as the system was designed based on the curative service. . . . It is impossible to perform environmental health services in such a system, and it is almost a forgotten service. . . . Every budget is related to curative services, and we never had a specific budget for environmental health services."

Professionals in the health sector also declared that the budget allocated in their institutes went directly to curative services, and a budget for environmental health services was either unavailable or limited. A respondent working as a head of environmental health activities described the struggle as follows: "Environmental health services are considered as extracurricular activities, and any support is at the will of institute directors and heads of programs. . . . You are supposed to perform bulky services without budget, trainings, and appropriate professionals suitable for the service. . . . Even if you perform certain activities after averting all these obstacles, there is no recognition for the professionals providing the service."

On the contrary, professionals working outside the health sector received better support, and the service delivery system was comfortable for performing environmental health services.
An individual working in a non-governmental organization as a coordinator of a water and sanitation program replied: "Despite some constraints, we are exerting a great effort with the available capacity to accomplish services, and if some of the problems get solved, I am sure there will be a better service."

Workers in the health sector, by contrast, felt the system made it difficult to provide environmental health services. An environmental health officer working in the disease prevention and health promotion unit of a regional health bureau added: "In the new business process reengineering of our institution, the environmental health department was totally neglected and wasn't considered among the main departments of the bureau. . . . It was annexed to supporting units such as finance and human resources."

Correspondingly, 58.3% of the professionals reported a poor level of satisfaction with their current job in EH service provision; the remaining 41.7% reported a good level of job satisfaction. Although most reported poor job satisfaction, they still believed they made a bigger contribution than other professionals in their respective sectors. A participant working in a water, sanitation, and hygiene program told:

Determinants of Environmental Health Services Performance and Level of Job Satisfaction

Bivariate and multivariate logistic regression analyses were performed, and significant variables from the bivariate analysis were fitted in the multivariable analysis. In the bivariate analysis, educational level, profession, the level of support and emphasis given to EH services, and type of institution were significantly associated (P < .05 at 95% CI) with 4 outcome variables, namely: identification of environmental problems, perception of having the appropriate knowledge and skill for EH service, introducing innovative solutions to EH problems, and level of job satisfaction. However, in the multivariable analysis, profession, type of institution, and the level of support and emphasis offered to EH services were statistically significant (P < .05 at 95% CI) for the outcome variables introducing innovative solutions to EH problems and level of job satisfaction (Table 2).

Table 2. Multivariable logistic regression analysis of determinants of the level of job satisfaction and introducing innovative solutions to EH problems.

Accordingly, being an Environmental health officer was significantly associated with higher odds of introducing innovative solutions to EH problems compared to other professionals (AOR = 3.4, 95% CI: 1.1-12.2). Similarly, a good or fair level of support and emphasis offered to EH services was significantly associated with higher odds of introducing innovative solutions to EH problems compared to a poor level of support and emphasis (AOR = 5.6, 95% CI: 2.2-11.9). Moreover, professionals working outside health institutions were 3.1 times more likely to introduce innovative solutions to EH problems (AOR = 3.1, 95% CI: 1.6-9.3).

The final model from the multiple logistic regression also showed that the level of support and emphasis given to EH services, type of institution, and profession were statistically significantly (P < .05) associated with the level of job satisfaction. The likelihood of a good level of job satisfaction was 6.4 times higher (AOR = 6.4, 95% CI: 1.6-14.7) among professionals working in other institutions compared to workers in health institutions.
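For reference, the AORs and confidence intervals reported here and in Tables 2 and 3 follow from the fitted logistic model in the standard way; this is textbook logistic regression algebra, not a method stated by the authors:

\[
\log\frac{P(Y=1)}{1-P(Y=1)} = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k,
\qquad
\mathrm{AOR}_j = e^{\beta_j},
\qquad
95\%\ \mathrm{CI} = e^{\,\beta_j \pm 1.96\,\mathrm{SE}(\beta_j)}
\]

So, for example, AOR = 3.4 for profession means the odds of introducing innovative solutions among EHOs are 3.4 times those among other professionals, holding the other covariates in the model fixed.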
Level of job satisfaction was also found to be statistically associated with the level of support and emphasis given to EH services. The odds of a good level of job satisfaction were 5.9 times higher (AOR = 5.9, 95% CI: 1.9-17.61) among those with a good or fair level of support and emphasis given to EH services, in comparison with those who had a poor level of support and emphasis from their organization. Compared to Environmental health officers, the odds of a good or fair level of job satisfaction were 4.5 times higher among other professionals (AOR = 4.5, 95% CI: 1.9-9.62).

Meanwhile, only profession and educational level determined the outcomes of the 2 remaining variables: identifying environmental problems and professionals' perception of having the appropriate knowledge and skill for EH service (Table 3). Environmental health officers had higher odds of identifying environmental problems (AOR = 4.1, 95% CI: 1.3-7.6). Furthermore, professionals who had a Master's degree were 3.1 times more likely to be involved in identifying environmental health problems compared to those who had a diploma or Bachelor's degree (AOR = 3.1, 95% CI: 0.9-5.9). Meanwhile, the odds of perceiving oneself as having the appropriate knowledge and skill for EH service were 3.3 times higher (AOR = 3.3, 95% CI: 1.3-7.2) among Environmental health officers compared to other professionals. Moreover, having a Master's degree was associated with higher odds of perceiving oneself as having the appropriate knowledge and skill for EH service (AOR = 2.9, 95% CI: 0.8-12.3) compared to professionals with a diploma or Bachelor's degree qualification.

Table 3. Multivariable logistic regression analysis of determinants of involvement in environmental problem identification and perception of having the appropriate knowledge and skills for EH service.

Environmental health services are those services which implement environmental health policies through monitoring and control activities.32,33 Despite the wide scope of environmental health services in Ethiopia, the service delivery system and the level of professionals' engagement have not been well studied. The current study included all institutions known to provide environmental health services, and it revealed disease prevention and health promotion as the major (61.1%) type of environmental health service. Professionals' level of engagement and service delivery achievement for environmental health activities were found to be average or below average. The majority of respondents believed they had the appropriate knowledge and skill for environmental health services. Meanwhile, professionals' level of job satisfaction was reported to be low. These findings are similar to those of previous studies conducted in other countries.2,11,32

Environmental health practitioners are responsible for providing environmental health services in Ethiopia.12 Previous studies of environmental health services and environmental health practitioners underscore the fact that there is a broad variety of perceptions of environmental health, of the administration of environmental health services, and of the selection of professionals involved in providing such services.34,35 In our study, we found that professionals involved in environmental health services come to practice with an array of educational backgrounds.
Given the complex and diverse nature of environmental health services, multidisciplinary professionals may be appointed in different sectors.7,36 Although an array of professionals is involved in environmental health services, graduates of accredited environmental health programs are deemed the industry standard.11,19,37,38 However, the absence of clear legislation, credentials, and qualifications required for environmental health services has created a gap in prioritizing the appropriate profession for the service.7,9 Moreover, the limited number of environmental health officers has left room for other professionals to grab the opportunity and occupy the job.19 The engagement of individuals with diverse educational and credentialing requirements results in a poorly defined profession and undermines the services delivered.11 The findings of our study were also reflected in other studies, which revealed recruitment of unqualified individuals to fill the gap by lowering the qualifications and/or competencies required for environmental health jobs.32,34

There was also a disparity in the level of professionals' engagement across different environmental health services. Most of the respondents were engaged in disease prevention and health promotion activities. This is not a surprise, as the major environmental health services in Ethiopia are under the Ministry of Health.9,12,18,26 Moreover, the health policy of Ethiopia states disease prevention as a priority, and the MOH hires environmental health officers as front-line workers to implement the policy.9,12,30,39 Despite the policy and the high level of professional engagement, the level of support given to environmental health workers in the health sector was the lowest compared to other sectors. Although the MOH declares prevention a priority, in practice environmental health services were treated poorly in terms of resource allocation, staffing, and emphasis.19,37 This situation has resulted in a high turnover of environmental health workers, who learn needed skills on the job and then move into NGOs, industries, and other sectors.

The extent of environmental health services in different sectors was also compromised, and the annual plan achievements were average or below the expected level. Activities were designed to protect local communities from environmental hazards through the structured core functions of environmental health services.9,30,40 Thus, a low level of achievement in one service will impact other services directly or indirectly. The low level of performance could be attributed to leadership, financial, personal, or institutional reasons. Environmental health services require complex investigations, routine inspections, equipment, and resources.11,41 However, inconsistent and unsustainable financial and administrative support impedes the service.42 Lack of understanding of the contribution, value, and benefit of environmental health services to wider public health has caused budget cuts and failures in annual plans.3,42,43 Biased and poor understanding of the costs of environmental health service delivery hinders progress toward effective provision of services.41 In addition, addressing rapidly emerging EH problems requires updated knowledge and skills from professionals.11 Remarks related to the need for increased professional development and training were made. The findings of this study revealed that professionals with a Master's degree were 3.1 times more likely to identify environmental problems compared to their counterparts (AOR = 3.1, 95% CI: 0.9-5.9).
Findings from similar studies, and the better performance of professionals with a higher level of education and relevant profession (particularly EHOs), support the need for training and development programs.44 Furthermore, the figures indicating the lack of environmental sample analysis and the absence of interventions based on laboratory results show that services were delivered without empirical evidence. Any decision related to environmental health services has political, economic, or social implications.45,46 With the emergence of environmental pollutants with unknown characteristics, scientifically sound evidence is necessary for such decisions.8,47 However, as the findings indicated, the environmental health service is far from such evidence and requires further work.

The participation level of EH workers in the planning and development of national, regional, or local policies also calls the system into question. The low level of participation could reflect the fact that many participants included in the study were working mainly at a local jurisdiction. However, the planning and development of policies, standards, and guidelines should involve appropriate professionals and stakeholders from all levels.7 Otherwise, the implementation of such policies, standards, and guidelines will not be effective.36 The qualitative findings indicated the extent to which EH workers are unaware of the available standards and guidelines for environmental health services.

The findings of this study also showed a low level of job satisfaction among professionals involved in environmental health services, and revealed profession, type of institution, and level of support and emphasis for environmental health services as factors determining the level of job satisfaction. Studies have shown numerous variables to be contributors to a low level of job satisfaction.44 The degree of job satisfaction was directly linked to the way the position is seen and respected by government officials, colleagues, and the broader community.37,48 The literature shows that individuals' perceived organizational support is associated with their level of job satisfaction.49 If staff feel sidelined and that they make no contribution to their organization, they will develop a low level of satisfaction in their current job.12,27,50–52 Underutilization of environmental health officers' skills, which is a recurrent problem, has also resulted in low morale and job satisfaction.2,32

The study indicated gaps in service delivery systems, the level of engagement in environmental health services, and professionals' job satisfaction. A limited spectrum of technical and enforcement procedures, which has become synonymous with the practice of environmental health, also resulted in dissatisfaction among professionals. This pattern has been detrimental to the wider concept of environmental health practice, resulting in displeasure among both current environmental health officers and young prospective graduates.

Limitations of the Study

A limitation of the quantitative data could be the small number of professionals obtained, which weakens the statistical analysis. However, to fill the gap in having credible information, the qualitative findings provided supportive evidence. The absence of any research of this type in the country or in Africa makes it difficult to compare the results.
On the basis of both the quantitative and qualitative results of this study, it can be concluded that the environmental health service rendered by practitioners has been dented, as many of the services have not been provided accordingly. The study indicated gaps in the service delivery systems. Lack of required resources and support, lack of knowledge and skill trainings, loose inter-sectoral coordination and alignment, an inappropriate service delivery system, and lack of professionals' commitment were among the key factors contributing to the observed quality of service. There is also an overall need to standardize EH professional credentials and educational standards to produce a qualified workforce that can deliver environmental health services.

The Ethiopian government, particularly the Federal Ministry of Health, FMHACA, the sanitation and beautification agency, the environmental protection agency, and other concerned bodies should focus on environmental health services by allocating the necessary resources, providing support, exerting capacity-building efforts, restructuring the framework for EH service delivery, and facilitating cooperation among different sectors. There is also a need to produce a more effective environmental health workforce (such as EHOs) with a specific scope of practice that can accurately deliver environmental health services. On the other hand, professionals should strive to provide the expected level of service quality by keeping to work ethics and exerting maximum commitment.

The authors would like to thank Haramaya University for supporting this study under the grant "Government fund—2013/2014 for faculty research-Ministry of Education." We would also like to thank all the study participants in the different sectors.

Conceptualization: YTD, BND, and AS. Methodology: YTD, BND, and AS. Data collection: YTD, BND, and AS. Formal analysis: YTD, BND, and AS. Writing - original draft manuscript: YTD and BND. All authors read and agreed on the manuscript.
Disease-causing microorganisms are called pathogens. Pathogenic microorganisms have special properties that allow them to invade the host.

All living cells can be classified into two groups, prokaryotic and eukaryotic, based on certain structural and functional characteristics. These cells are similar in their chemical composition and chemical reactions. Both contain nucleic acids, proteins, lipids, and carbohydrates. They have a sticky glycocalyx that surrounds them, the glue that holds the cells in place. Most bacteria are found sticking to solid surfaces, including other cells, rather than free floating. They use the same chemical reactions to metabolize food, build proteins, and store energy.

What distinguishes prokaryotes from eukaryotes? The structure of the cell wall and membrane and the absence of organelles. The DNA is usually a single, circularly arranged chromosome and is not surrounded by a membrane. DNA is not associated with histones. Prokaryotes lack organelles (special structures that carry on various activities). Their cell walls almost always contain the complex polysaccharide peptidoglycan. They usually divide by binary fission: DNA is copied and the cell splits into 2 cells. This involves fewer structures and processes than eukaryotic cell division.

Prokaryotes: small unicellular organisms that include Bacteria and Archaea (the majority are bacteria).

Eukaryotes: The DNA is found in the cell's nucleus, which is separated from the cytoplasm by a nuclear membrane, and the DNA is found in multiple chromosomes. DNA is associated with chromosomal proteins called histones and with non-histones. Eukaryotic cells have a number of membrane-enclosed organelles, including mitochondria, endoplasmic reticulum, Golgi complex, lysosomes, and sometimes chloroplasts. Cell walls, when present, are chemically simple. Cell division involves mitosis: chromosomes replicate and an identical set is distributed into each of the two nuclei, so the cells produced are identical. Examples: plants, animals, fungi, yeast, mold, protozoa, and algae.

The two domains of prokaryotes are __________. bacteria and archaea

Reproduction and adaptation: bacteria reproduce quickly by binary fission and can divide every 1-3 hours (a worked growth calculation follows below).

Shapes of bacteria (morphology):
Coccus/cocci: spherical ("berries"). Usually round, but can be oval, elongated, or flattened on one side. When they divide to reproduce, the cells can remain attached to one another. Cocci that remain in pairs are diplococci. Those that divide and remain attached in chainlike patterns are called streptococci. Those that divide in two planes and remain in groups of four are known as tetrads. Those that divide in 3 planes and remain attached in cubelike groups of 8 are called sarcinae. Those that divide in multiple planes and form grapelike clusters or broad sheets are called staphylococci.
Bacillus/bacilli: rod shaped ("little rods or walking sticks"). Bacilli divide only across their short axis, so there are fewer groupings of bacilli than of cocci. Most bacilli appear as single rods, called single bacilli. Diplobacilli appear in pairs after division. Streptobacilli occur in chains. Coccobacilli are oval and look like cocci.
Spiral bacteria have one or more twists; they are never straight. Bacteria that look like curved rods are called vibrios. Spirilla have a helical shape, like a corkscrew, and fairly rigid bodies, with flagella used to move. Spirochetes are a group of spirals that are helical and flexible and move by means of axial filaments that resemble flagella.
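Binary fission implies exponential growth. A quick worked example (the 2-hour doubling time is an assumption chosen from the 1-3 hour range given above):

\[
N(t) = N_0 \cdot 2^{t/g}, \qquad N(24\,\mathrm{h}) = 1 \times 2^{24/2} = 2^{12} = 4096 \text{ cells}
\]

That is, a single cell dividing every 2 hours yields about four thousand cells in a day, which is why bacterial populations can establish an infection so quickly.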
Arrangement of bacterial cells:
Spherical: coccus or cocci ("berries"). Diplococci or streptococci: division in one plane. Tetrad: division in two planes and in groups of four. Sarcinae: divide in three planes and attach in cubelike groups of eight. Staphylococci: divide in multiple planes and form grapelike clusters.
Rod shaped: bacillus or bacilli ("little rods or walking sticks").

The prokaryotic cell: Bacteria. Unicellular; multiply by binary fission; differentiated by morphology, chemical composition, nutritional requirements, biochemical activities, and source of energy.

Are bacteria harmful? Fewer than 1% of the different types of bacteria make people sick. Many bacteria serve important functions in our body: bacteria in our intestines can digest fiber (plant polysaccharides) and in return produce vitamins necessary for our health, and bacteria on our skin provide protection from other harmful bacteria. Bacteria can be harmful if they get into the wrong place or if they acquire the wrong genes.

Normal microbiota: microorganisms that establish more or less permanent residence (colonize) but that do not produce disease under normal conditions. Can also be referred to as normal flora. Transient microbiota may be present for several days, weeks, or months and then disappear. Microorganisms are not found throughout the entire human body but are localized in certain regions. (pg. 390 and 392)

The Human Microbiome Project began in 2007 to analyze the microbial communities, called microbiomes, that live in and on the human body. Its goal is to determine the relationship between changes in the human microbiome and human health and disease. (pg 390)

The types of bacteria present differ depending on the location on the body. Distribution and composition of normal microbiota are determined by many factors. There are 10 times more bacterial cells than human cells. (pg 390)

Relationship between normal microbiota and the host: symbiosis is the relationship between normal microbiota and the host, a relationship between 2 organisms in which at least one organism is dependent on the other. Some normal microbiota are opportunistic pathogens. (pg 392)

Commensalism: in this symbiotic relationship, one organism benefits and the other is unaffected. Many of the microorganisms that make up our normal microbiota are commensals, e.g., bacteria on our skin that eat dead skin cells. (pg. 392)

Mutualism: the type of symbiosis that benefits both organisms, e.g., bacteria like E. coli that inhabit our intestines, eat fiber, and synthesize vitamin K and some B vitamins that are absorbed into our bloodstream and distributed for use by our body cells. (pg 393)

Parasitism: symbiosis in which one organism benefits by deriving nutrients at the expense of the other. Many disease-causing bacteria are parasites. (pg 393)

Opportunistic pathogens are normal microbiota: if they stay in the correct location on our body we can have a mutualistic relationship, but if they get into another part of our body they can cause an opportunistic infection.

How does our normal microbiota help us? It prevents the growth of organisms that might actually harm us. This is also called microbial antagonism or competitive exclusion, a competition between microbes.

Normal microbiota protect the host by:

A probiotic is a supplement of live microbial cultures (harmless bacteria) that repopulates your body with good bacteria and crowds out detrimental bacteria; it provides bacteria for commensalism that can be beneficial for us.

Why doesn't our immune system kill resident bacteria?
When bacteria are present all the time, it promotes tolerance. There is a balance between activation and repression of the immune system caused by gut bacteria. Receptors on cells sense bacteria and their products and activate the immune response, release chemokines and cytokines (which attract immune cells), and make more mucus (protection). However, our body doesn't mount as large a response when certain types of bacteria are present. Peace-keeper bacteria (e.g., B. fragilis) make tolerant dendritic cells and T reg cells. Some other bacteria (segmented filamentous bacteria) make activated dendritic cells and activated T helper cells. The activating types of bacteria keep our immune system on alert, but anti-inflammatory types keep it from becoming over-active and causing damage. It is important to maintain a balance.

Peace-keeping bacteria eat fiber from our diet and sugars in the mucus layer. Their by-products fuel healthy intestinal cells, which produce mucus that keeps bacteria trapped and release chemicals that activate a regulatory immune response. A reduction in peace-keepers (genetics, lifestyle, medications) and/or a lack of fiber for them to eat reduces the mucus gut barrier and allows bacteria to attach to the intestinal cells, activating inflammation. This can result in inflammatory bowel disease (IBD) or Crohn's disease.

Normal microbiota can benefit the host by preventing the overgrowth of harmful microorganisms. This is called microbial __________. Microbial antagonism: the normal microbiota can benefit the host by preventing the overgrowth of harmful microorganisms; this can also be called competitive exclusion.

Escherichia coli synthesizing vitamins K and B in the large intestine would be an example of which type of symbiosis? Mutualism.

Pathology is the scientific study of disease. Pathology is first concerned with: the cause, or etiology, of disease.

Disease-causing microorganisms are called pathogens. Infection: the invasion or colonization and growth of pathogenic microorganisms in the body. An infection may exist in the absence of detectable disease; a patient may have the infection but not show signs and symptoms (s/s) of the disease. The presence of a microorganism in a part of the body where it is not normally found is also called an infection and may lead to disease, e.g., large numbers of E. coli are normal in a healthy intestine but would cause an infection if noted in the urinary tract.

Disease occurs when an infection results in any change from a state of health. Disease is an abnormal state in which part or all of the body is incapable of performing its normal functions. When a microorganism overcomes the body's defenses, a state of disease results.

How do microorganisms enter a host? Portals of entry: many microorganisms can cause infections only when they gain access through their specific portal of entry.

Pathogenicity: the ability to cause disease by overcoming host defenses. Virulence: the degree of pathogenicity.

Adherence (adhesion): all pathogens have a means of attaching themselves to host tissues at their portal of entry in order to cause disease. Surface molecules on the pathogen called adhesins (ligands) bind specifically to complementary surface receptors on the cells of certain host tissues. Mannose is the most common receptor.

Biofilms: microbes have the ability to come together in masses, cling to surfaces, and take in and share available nutrients in communities called biofilms, e.g., dental plaque.

Penetration into the host cell cytoskeleton: microbes attach to host cells by adhesins; this triggers signals in the host cell that activate factors that can result in the entrance of some bacteria.
The mechanism is provided by the host cytoskeleton. A major component of the cytoskeleton is a protein called actin, which is used by some microbes to penetrate host cells and by others to move through and between host cells. Microbes produce surface proteins called invasins that rearrange nearby actin filaments of the cytoskeleton. Membrane ruffling is the result of disruption in the cytoskeleton of the host cell: the microbe sinks into the ruffle and is engulfed by the host cell.

The easiest and MOST frequently traveled portal of entry for infectious microorganisms is the __________.
The degree of pathogenicity of an organism is known as the _________.
E. coli is an example of what type of bacteria?

Identification of bacteria by Gram stain: some bacterial strains have a plasma membrane surrounded by a cell wall, a peptidoglycan (sugar) layer. Other strains have an outer membrane surrounding the peptidoglycan layer. These bacteria tend to be more resistant to antibiotics because of this extra layer of protection.

Gram staining technique: staining means coloring the microorganisms with a dye that emphasizes certain structures.
- Fixing kills the microbes and fixes them to the slide.
- A smear (a thin film of material containing microorganisms) is passed through the flame, and a stain called crystal violet is applied.
- Alcohol is used to decolorize.
- Safranin (a red dye) is applied and the slide is washed again.

Gram-positive vs gram-negative bacteria: gram-positive bacteria can be treated with antibiotics if they are of a pathogenic type. Gram-negative bacteria are generally more resistant to antibiotics due to the outer membrane surrounding the peptidoglycan layer.

Types of gram-positive bacteria: Firmicutes (Bacilli and Clostridia), gram-positive rods and cocci.

Types of gram-negative bacteria:
Proteobacteria: Alphaproteobacteria, Betaproteobacteria, Gammaproteobacteria, Deltaproteobacteria, Epsilonproteobacteria. This includes Vibrio, Salmonella, Helicobacter, and Escherichia.
Cyanobacteria: oxygenic photosynthetic bacteria.
Chlorobi: photosynthetic, anoxygenic green sulfur bacteria.
Chlamydiae: grow only in eukaryotic host cells.
Planctomycetes: aquatic bacteria.
Bacteroidetes, Flavobacteria, Sphingobacteria: phylum members include opportunistic pathogens.
Fusobacteria: anaerobic; some cause tissue necrosis and septicemia in humans.
Spirochetes: pathogens that cause syphilis and Lyme disease.

Pathogenic gram-positive bacteria: Firmicutes. These will stain purple from the crystal violet. Endospore-forming bacteria: Bacillus (genus), rod-shaped bacteria, and Clostridium, rod-shaped cells; these cause tetanus and botulism. Staphylococcus (cocci): the photo shown is Staphylococcus aureus, which has many antibiotic-resistant strains and produces exotoxins.

Pathogenic gram-positive bacteria: Streptococcus.

Pathogenic gram-negative bacteria: gram-negative bacteria have an extra (outer) membrane around the outside of their cell wall. They also have a polysaccharide capsule in addition to the outer membrane, which makes them very resistant to antibiotic treatment and able to evade host immune responses as well. The E. coli pathogen is very resistant to antibiotics.

Pathogenic gram-negative bacteria: Proteobacteria.

Exceptions to Koch's postulates:

Streptococcus pneumoniae is an example of what type of bacteria?

Mechanisms of pathogenesis: when a microorganism invades a body tissue, how does it cause disease (pathogenesis)?
It initially encounters phagocytes of the host; either they destroy the invader, or the pathogen overcomes the host's defenses and goes on to damage the cells, which bacteria can do in several ways, including using the host's nutrients and producing toxins.
How do bacterial pathogens damage host cells? 1. Using the host's nutrients: siderophores. Iron is required for the growth of most pathogenic bacteria. The concentration of free iron in the human body is low because most of the iron is tightly bound to iron-transport proteins and hemoglobin. To obtain iron, pathogens secrete proteins called siderophores that take iron away from the iron-transport proteins by binding the iron even more tightly; the complex that forms is taken up by siderophore receptors on the bacterial surface and brought into the bacteria. Siderophores are proteins secreted by pathogens that bind iron more tightly than the host's own transport proteins do.
How do bacterial pathogens damage host cells? 2. Production of toxins. Toxins are the primary factor contributing to the pathogenic properties of those microbes. There are two class types (exotoxins and endotoxins), and toxin genes can be spread between species via horizontal gene transfer.
Production of toxins:
- Toxigenicity: the capacity of microorganisms to produce toxins
- Toxemia: the presence of toxins in the blood
- Intoxications: caused by the presence of a toxin, not by microbial growth
Exotoxins are proteins that are produced by bacteria and released into the extracellular environment.
The action of an A-B exotoxin: these are proteins consisting of two parts, A and B, both of which are polypeptides. Most exotoxins are A-B toxins. Part A is the active enzyme component and part B is the binding component. The bacterium releases the A-B toxin, which attaches to the host cell via its B component and is taken into the cell, where the A and B parts come apart; the B component and receptor are released back outside the cell. Diphtheria toxin is an example of an A-B toxin.
Membrane-disrupting exotoxins cause lysis of host cells by disrupting their plasma membranes. Some form protein channels in the plasma membrane (the cell-lysing exotoxin of Staphylococcus aureus is an example), while others disrupt the phospholipid membrane (Clostridium perfringens). Membrane-disrupting toxins contribute to virulence by killing host cells.
Protein-synthesis-disrupting exotoxin: an A-B type exotoxin whose A fragment disrupts translation. An example that acts in our GI tract is Shiga toxin, produced by Shigella.
Second-messenger-activating exotoxin: the last class of exotoxin. The toxin binds the receptor that a native ligand would bind, activating the G protein inside the cell and setting off a series of downstream events. With one such exotoxin produced by E. coli, toxin binding leads to the formation of cyclic GMP from GTP, and the end result is that the cell secretes chloride ions; water follows, resulting in diarrhea.
Other exotoxins, superantigens and genotoxins:
- Superantigens cause an intense immune response through a series of interactions with various immune cells. In response to superantigens, enormous amounts of chemicals called cytokines are released from host cells (T cells). Excessively high levels of cytokines cause symptoms of fever, nausea/vomiting/diarrhea, shock, and death. Staphylococcus aureus superantigens can cause food poisoning and toxic shock syndrome (TSS).
- Genotoxins alter the host's DNA: they damage DNA directly, causing mutations, disrupting cell division, and leading to cancer. Helicobacter causes breaks in DNA, leading to stomach cancer.
Table of diseases caused by exotoxins:
- Enterotoxins: act in the GI tract, resulting in diarrhea.
- Cholera toxin (Vibrio cholerae): V. cholerae releases an A-B toxin, which binds, is taken up into the cell, and activates a second-messenger pathway; chloride and water are pumped out of the cell, producing diarrhea. It can be deadly in third-world countries. Drugs used to combat this problem include enkephalins, which activate a G protein to stop the over-release of chloride.
Endotoxins are part of the outer portion of the cell wall of gram-negative bacteria. They are released during bacterial multiplication and when gram-negative bacteria die and their cell walls lyse. Endotoxins exert their effects by stimulating macrophages to release cytokines in very high concentrations; at these toxic levels the cytokines produce fever, chills, weakness, generalized aches, and sometimes shock and death.
How do endotoxins produce a febrile response in the host? A macrophage ingests a gram-negative bacterium and endotoxins are released; these induce macrophages to produce cytokines (IL-1 and TNF-alpha), which are released into the bloodstream, travel to the hypothalamus, and induce it to produce prostaglandins, which reset the body's thermostat higher, producing fever.
Testing for endotoxins: the Limulus amebocyte lysate (LAL) assay is a lab test used to identify the presence of endotoxins in drugs, medical devices, and body fluids. Blood from horseshoe crabs contains amebocytes that lyse in the presence of endotoxin, producing a clot.
Comparison of exotoxins and endotoxins: only gram-negative bacteria produce endotoxins, whereas exotoxins are produced mostly by gram-positive bacteria (and by some gram-negative ones). Exotoxins have a wide range of effects; endotoxins tend to have similar effects in the host.
- Bacterial transformation: DNA released from one cell makes its way into a recipient cell.
- Bacterial transduction: movement of genes by viruses that infect bacteria (bacteriophages). A phage acquires a gene from one bacterium and, when it infects another bacterium, injects that genetic material.
- Bacterial conjugation: transmits extrachromosomal plasmids, carrying toxins along with antibiotic resistance.
Bacterial pathogenesis can be a local infection or can become systemic if the pathogen enters the bloodstream.
How do microbes (bacteria or viruses) infect and evade hosts? Entry into the host cell or tissue; evasion of the host defenses; damage to the host cells, either directly or by toxins; then replication and exit from the body, to hopefully go on and infect a new host.
The outer portion of gram-negative cell walls contains __________. Proteins secreted by pathogens that bind iron are known as __________.
The antibiotic revolution and antibiotic resistance. Timeline of discovery: PCN (penicillin) used in 1942 and mass-produced; 3 years later Staph aureus shows resistance. In 1947 a new antibiotic, streptomycin, is used; 3 years later, streptomycin resistance. Tetracycline, vancomycin, and methicillin were all produced and all led to resistance, and now there are pathogenic bacteria with no abx to treat them.
Natural selection: the process in which individuals that have certain heritable traits survive and reproduce at a higher rate than other individuals because of those traits; over time, this can increase the match between organisms and their environment, and in response to a change in environment it may result in adaptation to the new conditions. Normal genetic mutations occur that make bacteria slightly different, making some bacteria resistant to abx. Gene transfer: if a bacterium has the means of transferring its genetic material, it can pass on that resistance.
- Transformation: release of DNA from one cell that is taken up by another.
- Transduction: transfer via bacteriophage.
- Conjugation: small plasmids carrying protein-coding genes are transferred to another bacterium.
Overview of how antibiotic resistance develops: natural genetic variation means that in a population of bacteria a few are resistant to abx. The abx kill most of the bacteria, but the few left form abx-resistant strains; this is the result of natural mutation. Bacteria then pass the resistance genes via gene transfer or other mechanisms to other bacteria (a toy simulation of this selection process appears at the end of these notes).
How does antibiotic resistance spread in the community? Simply using antibiotics creates resistance, so these drugs should only be used to treat infection. Antibiotics have been misused, given for viral as well as bacterial infections; abx should only be given for true bacterial infections. Abx given to animals on the farm also produce resistant strains in our food, and we are exposed to them through contaminated foods or a contaminated environment.
MRSA: prevalent in hospitals, resistant to most abx, and tends to be untreatable. It causes skin disease that can look like a spider bite with no obvious wound; AKA staph infections. (The original card shows the chromosomal map of MRSA.) It causes huge amounts of tissue damage that is difficult to treat due to abx drug resistance. Prevention: frequent hand washing; keep wounds covered; develop new abx; reduce the use of the ones we already have.
The removal of plasmids reduces virulence in which of the following organisms?
An encapsulated bacterium can be virulent because it resists phagocytosis and continues growing; e.g., Streptococcus pneumoniae and Klebsiella pneumoniae produce capsules that are related to (r/t) their virulence.
A drug that binds to mannose on human cells would? Prevent the attachment of pathogenic E. coli.
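To make the selection story in these notes concrete, here is a toy simulation (my own illustration, not part of the original cards; the population size and mutation rate are made-up numbers) of how an antibiotic selects for resistance: rare mutants survive the drug and then repopulate.

```python
import random

# Toy model: an antibiotic kills susceptible cells, rare mutants survive,
# and the survivors regrow, so the whole population ends up resistant.
CARRYING_CAPACITY = 10_000
MUTATION_RATE = 1e-3  # per-division chance a daughter cell gains resistance

def divide(susceptible: int, resistant: int) -> tuple[int, int]:
    """One round of division; a few susceptible daughters mutate to resistant."""
    new_mutants = sum(random.random() < MUTATION_RATE for _ in range(susceptible))
    susceptible, resistant = 2 * susceptible - new_mutants, 2 * resistant + new_mutants
    total = susceptible + resistant
    if total > CARRYING_CAPACITY:  # scale back to capacity, keeping proportions
        keep = CARRYING_CAPACITY / total
        susceptible, resistant = int(susceptible * keep), int(resistant * keep)
    return susceptible, resistant

s, r = CARRYING_CAPACITY, 0        # start with no resistance at all
for _ in range(5):
    s, r = divide(s, r)            # natural variation: a few mutants appear
s = 0                              # antibiotic kills every susceptible cell
for _ in range(14):
    s, r = divide(s, r)            # the resistant survivors repopulate
print(f"resistant cells after treatment and regrowth: {r}")
```

The qualitative outcome is the point: because the drug removes all the susceptible competitors, even a tiny resistant minority takes over once the population regrows, and gene transfer (transformation, transduction, conjugation) can then spread that resistance further.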
High Blood Pressure
What does it mean to have high blood pressure, aka "hypertension"? Hypertension (HTN) is the medical term for high BP. The force of your blood as it rushes through the arteries in your body is known as blood pressure. When your blood travels through your arteries at a higher pressure than normal, you have high blood pressure (hypertension). Extensively uncontrolled high blood pressure puts you at risk for stroke and certain heart diseases; in the worst cases it may even lead to heart attack or kidney failure. Your blood pressure usually fluctuates throughout the day; however, if it stays elevated, it's a matter of real concern.
A blood pressure reading has two components:
- Systolic blood pressure: the pressure in your arteries while your heart beats.
- Diastolic blood pressure: the pressure in your arteries between your heartbeats.
When you get a blood pressure reading, the systolic number is on top and the diastolic number is on the bottom. Blood pressure that is less than 120/80 mmHg is normal. High blood pressure is 130/80 mmHg or more.
In 90 to 95% of cases, high BP is primary HTN, in which no underlying medical cause can be found. Hypertension is of two types:
- Primary (essential) hypertension: caused by the body's own mechanisms (most common). It develops over a long period of time and is most likely the outcome of your lifestyle, surroundings, and how your body develops as you get older.
- Secondary hypertension: caused by another existing medical condition or the use of certain medicines, for example thyroid or adrenal gland problems.
What are the signs and symptoms of hypertension? The majority of people who have high blood pressure don't show any signs or symptoms, so it is very important to have your blood pressure checked on a regular basis (especially from 18 years of age onward). In certain people, high blood pressure can induce headaches, nosebleeds, and shortness of breath, although such symptoms can be due to many other things (serious or non-serious). These symptoms frequently develop only when blood pressure has risen to a dangerously high level over time.
What factors contribute to high blood pressure? Your doctor can help you understand which underlying conditions are causing your rise in BP:
- A diet high in salt, fat, and/or cholesterol.
- Long-term illnesses such as kidney and hormone issues, diabetes, and high cholesterol.
- A family history of high blood pressure, especially if your parents or other near relatives have it.
- Insufficient physical activity.
- Growing old (the older you are, the more likely you are to have high blood pressure).
- Obesity, or being overweight.
- Ethnicity (non-Hispanic black people are more likely to have high blood pressure than people of other races).
- Birth control pills and other medications.
- Tobacco use or excessive alcohol consumption.
What is the best way to find out if you have high blood pressure? The best time to check your blood pressure is in the morning, 20-30 minutes after you get up. A reading of 120 on top and less than 80 on the bottom is considered normal. When monitoring blood pressure in the left and right arms, a difference of more than 10 units between the arms is a warning sign of blood pressure trouble that needs attention.
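As a quick illustration of the numeric cutoffs this article quotes (normal below 120/80, "high" at 130/80 or more, and the prehypertension/stage-1/stage-2 ranges given in the next paragraph), here is a minimal sketch. The article actually mixes two grading schemes (130/80 vs. 140/90 as the start of high blood pressure), so the sketch follows the staged ranges; the function names are my own, and this is an illustration of the arithmetic, not medical advice or any standard library.

```python
def classify_bp(systolic: int, diastolic: int) -> str:
    """Rough category for a single reading in mmHg, per the staging quoted here."""
    if systolic >= 160 or diastolic >= 100:
        return "stage 2 high blood pressure"
    if systolic >= 140 or diastolic >= 90:
        return "stage 1 high blood pressure"
    if systolic >= 120 or diastolic >= 80:
        return "prehypertension"
    return "normal"

def arms_differ(left_systolic: int, right_systolic: int) -> bool:
    """Flag the >10-unit left/right arm difference the article warns about."""
    return abs(left_systolic - right_systolic) > 10

print(classify_bp(118, 76))    # normal
print(classify_bp(144, 92))    # stage 1 high blood pressure
print(arms_differ(138, 122))   # True -> worth raising with a doctor
```

A reading falls into whichever band either number (systolic or diastolic) reaches, which is why the checks use `or` rather than `and`.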
Prehypertension values range from 120 to 139 on the upper end and 80 to 89 on the lower end. Stage 1 high blood pressure is 140-159 on top and 90-99 on the bottom; stage 2 high blood pressure is 160 or more on top and 100 or more on the bottom. After the age of 18, have your blood pressure checked every two years at the absolute least.
Can high blood pressure be prevented or avoided? The truth is, having high blood pressure is a serious health risk: it boosts the chances of leading killers such as heart attack and stroke, as well as aneurysms, cognitive decline, and kidney failure. What's more, high blood pressure is a primary or contributing cause of death for nearly 500,000 people every year, per data from the Centers for Disease Control and Prevention (CDC).
Aerobic exercise such as walking, jogging, cycling, swimming, and dancing can all help you lower your blood pressure. Having your blood pressure tested on a regular basis is crucial. Try high-intensity interval training, which involves alternating brief bursts of intense exertion with intervals of lighter activity. If you have high blood pressure and want to see rapid results, lie down and take several deep breaths; slowing your heart rate this way can lower your blood pressure within minutes. When you're stressed, hormones are generated that cause your blood vessels to constrict.
If you have high blood pressure due to lifestyle factors, you can take the following steps to lower your risk:
- Lower your salt intake
- Reduce your alcohol consumption
- Learn relaxation methods
Questions to discuss with your doctor:
- What exactly is causing my high BP?
- Could you suggest simple workouts?
- How can I naturally control my BP?
- How much salt is too much salt?
- What food should I avoid?
- How can I relax?
Treatment for hypertension: the best way to lower your BP begins with lifestyle changes, which also reduce your risk of heart disease. Your doctor may also recommend antihypertensive medications. The goal of treatment is to get your blood pressure back to where it should be, and this approach is really effective. If medicine turns out to be the only way to control your blood pressure, you may have to take it for the rest of your life, and it's usual to need more than one medication to keep your blood pressure under control. Stopping your medicine without first consulting your doctor is not a good idea; if you do, you may increase your risk of having a stroke or heart attack. There are drugs that can help relax your blood vessels, reduce the force with which your heart beats, and stop nerve activity that can constrict your blood vessels. Although blood pressure medications can lower blood pressure, they can also induce adverse effects such as leg cramps, disorientation, and insomnia. The good news is that most people can reduce their numbers without resorting to medicines: lifestyle changes are a crucial part of preventing and treating high blood pressure.
Naturally occurring herbs traditionally used for high BP:
- Garlic (Allium sativum): provides an antioxidative and antihypertensive effect.
- Ginger root (Zingiber officinale): commonly used in Asian cooking; it improves blood circulation while also relaxing muscles.
- Tulsi/basil (Ocimum basilicum): crude extract of O. basilicum causes a fall in systolic and diastolic BP.
- Carrot (Daucus carota): has been used to treat HTN in traditional medicine.
- Radish (Raphanus sativus): has been reported to have antihypertensive properties.
- Prickly custard apple (Annona muricata): the leaf extract of the plant lowers elevated BP.
- Green oats (Avena sativa): a diet containing soluble-fiber-rich whole oats can significantly reduce the need for antihypertensive medication and improve BP control, and may significantly reduce cardiovascular disease risk.
- Cocoa powder (Theobroma cacao): chocolate, cocoa bean, cocoa butter, etc. are enriched with flavonoid constituents and are used for preventing cardiovascular disease.
- Black bean (Castanospermum australe): crude extract causes a fall in systolic as well as diastolic BP.
Some other foods and herbs which can help control high BP are: citrus fruits; salmon and other fatty fish; pumpkin seeds; beans and lentils; Greek yogurt; herbs and spices; chia and flax seeds; green oats; and celery (Apium graveolens).
At the first signs of BP symptoms, before taking any drugs, try these natural strategies to decrease your blood pressure. Those symptoms, however, can be confused with a range of other issues (serious or non-serious).
How is high blood pressure treated? It is correctly said that there's no replacement for a healthy lifestyle. Eating healthily and exercising regularly will do far more to lower blood pressure naturally. The most effective treatment for high blood pressure is the right blend of medication and lifestyle changes. When paired with other lifestyle modifications, a nutritious diet can help lower blood pressure and reduce your risk of heart disease.
Even scarier? According to the CDC, just 24% of persons with high blood pressure have it under control. If you haven't had your levels checked in at least two years, see a doctor. High blood pressure is defined as a reading of 130/80 mmHg or higher. (The top number is systolic blood pressure; the bottom number is diastolic blood pressure.)
This herbal medicine by Yogveda, India is offered as a first-line treatment for high blood pressure. It consists of 60 tablets. The formulation is made entirely with herbal extracts and it works by:
- Reducing high blood pressure
- Reducing blood volume and maintaining the optimum amount
- Stabilizing the heart rate
Ingredients:
- Sarpgandha (Indian snakeroot): root
- Brahmi/jalneem (Bacopa monnieri) (herb of grace): whole plant
- Pippali (long pepper): fruit
- Shankhpushpi (Convolvulus pluricaulis), well known as morning glory
- Arjuna (Terminalia arjuna): bark
Indications for use - Dose: one tablet twice a day, or as suggested by the physician. Storage: always store in a dry, cool, and dark place (preferably away from sunlight).
Customer reviews:
It's a genuine medicine, the best gift of Ayurveda for an old-age person like me. (Anil Kumar from Hyderabad)
I am very impressed with this medicine. Got a very good result in very little time. (Vishal Shaw from north Delhi)
This medicine is no doubt India's no. 1 ayurvedic medicine for high blood pressure. (Rakesh Gupta from Mumbai)
I want to thank health consultant Ankita Lakhere, ma'am, from Yogveda. She has changed my life with her complete guidance. She explained everything about my disease better than any allopathic doctor, so I started following the diet she mentioned, which is very helpful for me. I saw Yogveda's advertisement on Facebook, so I bought the BP package and now my BP is very stable. Thanks a lot, YOGVEDA. (Sejal Sinha – Bangalore)
Great medicine.
First time I've ordered via the website, and my overall experience with Yogveda is absolutely special as they treat their patients very well. Great medicine. My father has high blood pressure and noticed the difference immediately. Great price for this medicine and very quick delivery. I would highly recommend Yogveda's Anima medicine! (Harick John – Kerala)
I am reordering this item. My blood pressure dropped from 180 to 120 over a month, that too in a natural way. I take this tablet the way my consultant guides me and now my blood pressure stays in a normal range. (Ishtayaq Ahmed – West Bengal)
Overall, this medicine is a wonder at our home; my mom's high BP drastically dropped within 2 weeks of usage. Now I have it subscribed with the Yogveda people to ensure I have it on time. This is a natural therapy and there are zero side effects to using this medicine. I'm so happy about my parent's good health ❤️ Thank you YOGVEDA ♥️ – Deepesh Sharma from Orissa
Ordered my second bottle. It is a good and natural alternative to keep BP levels in control. I want to recommend this to everybody who has high blood pressure. Start your day with this medicine for a healthier lifestyle. – Laxmi Prasad from Imphal
This gave me good relief in a few days. – Dev Ahuja from Chennai
It took just 5 days. I feel better with my planned diet, as my consultant guides me. It is no doubt an effective medicine. – Abdul Laskar from Gauhati
I ordered this for my grandmother. She is really happy with this medicine; it's indeed very effective and useful for the whole family. – Kunal Mitra from Pune
It really worked for me. I had been taking medicine for BP for the last 20 years, and after using this product my stable BP reading has come down from 190 to 120, taken at different times, on different days, and in different cities with different climate conditions. My allopathic doctor told me that I would have to continue BP medicines for my whole life, but I cannot take medicine for a lifetime; then I found this medicine, so I would say it works for my body. Others can also try it, because it is just a harmless medicine without any side effects. I initially used it for 3 months and saw the effect, and after stopping it for a month and a half I have been using it continuously. – Biplab Singh
I was suffering from high BP but now I get relief from it. This tablet is very effective and made from a natural plant base. – Bhagan Roy from Kolkata
This product is really helpful. I read about it somewhere on the net and ordered it for my dad, who had been suffering from high BP for more than a year; after taking these tablets, within 12 days his BP was in control. Coming to the product: the medicine packaging was good and I received the product in good condition. This particular one does not come with preservatives. Compared to others in the category, you can see traces of sarpagandha, Brahmi, and Pippali, all of which are highly beneficial for high BP. – Tushar Desai from Lucknow
My father is 75 years old and has had high blood pressure for the last 5 years. I visited all the experienced and premier doctors. They suggested drug-based medicines for the rest of his life, but my father did not accept that. Now we are using these tablets. These tablets work extraordinarily well and give instant relief, with no side effects.
– Nirnay Digar from Greater Noida
It is very effective for high BP. I got relief within just 3-4 days of using it. (Ram Chaudhary – Bareilly)
Amazing medicine, worth the money. I have taken this tablet for a month now and it works wonders for me. It also has a laxative effect and hence will solve constipation problems. Please take this on Yogveda's recommendations if you are taking any other pills or if you want to take more than one pill a day. – Ashish from Mumbai
Really very good AYURVEDIC MEDICINE. I have been using it for a few days and I see a very huge difference in my blood pressure. – Dheeraj Singh from Haryana
You will observe a difference after a week of use if you have never used any ayurvedic BP tablets. I used this for the first time and will be ordering this item again for further use. Thanks, Yogveda, for such a wonderful solution. – Shakshi Lodhi from Uttarakhand
It truly is a very good ayurvedic medicine. I have been using it for a few days and I can see a very big difference in my blood pressure. – Roshni Verma from Agra
Along with high blood pressure I also had a cholesterol complaint. I took allopathic medicines for a long time but there was no notable difference; my blood pressure stayed in control only as long as I took the doctor's medicine. But since I started taking the ANIMA tablet from Yogveda, I have had very good results, and the way my assigned health expert guided me has made me feel even better. Thank you, Yogveda. – Balwant Singh from Andhra Pradesh
This ayurvedic high BP tablet is really good and effective. I bought it for my mother on my friend's recommendation and it is really helpful for us. – Mabul Islam from Shillong
It gave me good relief in just a few days, without any side effects. – Sanjay Paprel from Madhya Pradesh
Been using it for a while, and I can feel the difference. Being ayurvedic, it doesn't give you side effects. These tablets are blue in colour, which really gives positive vibes. – Sharwan from West Bengal
This medicine has had a positive impact on my mother's blood pressure readings. The numbers are coming down! No doubt a cost-effective solution which I definitely recommend. – Rajveer Singh from Malda
I'm suffering from hypertension (high BP) stage 1, which is moderate, and my age is 36. Before Yogveda, I was taking allopathic tablets like lisinopril (Prinivil, Zestril) as suggested by doctors, but they are drug-based, and taking them for a long time is not beneficial for my health. Then I found Yogveda's medicine named ANIMA, which is the best alternative to those drug-based tablets. ANIMA contains Sarpagandha, Brahmi, and Shankhpushpi in very good quantity, and it's working! Now I use it as my assigned health expert guides me; she treats me the best way, giving me a proper diet chart according to my situation plus some natural home remedies, which really worked. In the end I only want to say: Yogveda gives not only medicine but also valuable service. – Chandrakant Patil from Muzaffarnagar
These tablets are very beneficial for old-age people with high BP. They can be swallowed easily. These tablets are rich in natural ingredients like Brahmi, Arjuna, Sarpagandha, and Shankhpushpi. Value for money. – Biswajit Das from Kolkata
I am happy with the medicine and could see an observable difference in my BP readings after taking it. – Rishabh Jha from Jammu
I saw the ad on Instagram and purchased it instantly. My health has improved; I feel better, alert, and energetic now.
– Tanya Prasad from Faridabad
Bought this product for my husband for BP. He's been using this product for almost two months and it's worked; it made a big difference in his BP reading. Thanks, Yogveda Health Care, for giving us this product! Kudos! Very happy with the medicine; would highly recommend this. – Shilpa Talreja from Ahmedabad
Before, my BP was 190/110. I took allopathic meds along with Yogveda's ANIMA medicine for 2 days and checked that my BP was normal. On the third day I stopped the allopathic meds and took only ANIMA, and surprisingly my BP is 110/80. Planning to buy some more. – Khushi Chetry from Jaipur
It's good medicine and does work on reducing blood pressure. Yes, I checked every time before and after taking the tablets and also followed all the instructions of the health consultant assigned to me from the company's side. – Pratima Rai from Hubli
Hi, I like this product and it works well for me. Thank you, Yogveda Healthcare. – Vikas Kumar from Agartala
I am a regular user of this medicine and it is very effective for my purpose. – Mudasir Sultan from Bhopal
It's the best for controlling blood pressure. Having had high BP for 13 years, I have relied on this medicine and it helped me a lot. I have seen a good decrease in my blood pressure levels, which are well under control. I have been using the product for the last 2 months. Very satisfied. – Simply Sahni – Faizabad
BP under control; it is great medicine. – Ajit Kumar from Ghaziabad
Since I started taking this medicine, I have seen a great effect. The specialty of the ANIMA medicine is that it is made from herbs, which is why it gave me no side effects. Before this, I had been taking allopathic medicines for 5 years without getting such a good result, but this gave results from the very first dose. – Nilam Kushwah from Jorhat
Used this medicine for a few weeks now and it helped reduce blood pressure effectively. Definitely recommend this, as it is ayurvedic and natural. – Aswini Sharma from Ranchi
Having used it for 2 months, I can now say that it is definitely a very workable formula for regulating blood pressure. My preference for this medicine was primarily based on the natural ingredients, and I did not have to worry about side effects. I would briefly state the benefits of this tablet: 1) most importantly, it is not allopathic; 2) it makes me energetic; 3) it has completely managed my sudden and frequent headaches; 4) it provides me good sleep. Thank you, YOGVEDA, for this natural and ayurvedic remedy for blood pressure. – Satyam Tiwari from Darjeeling
I had a high BP problem, so I used Yogveda's ANIMA tablet. After that, my BP came well under control. So I want every BP patient to try this medicine at least once. – Grob Nath from Odisha
This is better than allopathic tablets! It is such a fabulous medicine that I recommend everyone with high BP and cholesterol use it and see the difference in a month. It's a 100% unique herbal formula. – Rahul Tripathi from Jabalpur
Nowadays, the market is flooded with so many brands, but YOGVEDA is the one which has carved a niche in the market with its worthy products, and this medicine is also effective for me. – Nitin Gupta from Kannauj
It helped my mother to reduce her blood pressure. Thank you, Yogveda! – Vinay Kumar from Vasco Da Gama
I have been taking ANIMA tablets. I was using different brands, also well known but of lower strength. The ayurvedic health expert suggested this; we are familiar with Yogveda's medicines, so I bought it and used it.
This is the third bottle. Unlike other capsule forms, this is a tablet that is easier to swallow. – Goutam Talukdar from Villivakkam
This product is completely natural, organic, and very essential for the health of BP patients. Being made of natural ingredients, it is safe. Having used it for some time now, my blood pressure has started to stay stable and I feel energetic. If you have a blood pressure problem, then you should buy this. – Siddharth Raut from Ujjain
It is a natural medicine. I have been using this product for several weeks now and it seems to be helpful. – Sachin Gupta from Rajasthan
I am impressed by the results. I was taking other prescribed medicines before and was not satisfied; plus, they were not made of natural products. So, I decided to go for these pills and I am happy that I chose this. Now I feel energized in just a few days, and my BP is getting normal. I feel less stressed and tired. I really recommend this medicine to everyone who is having blood pressure issues and searching for natural ways to maintain their BP. – Harsh Goel from Maharashtra
If you have to control cholesterol or blood pressure, then definitely go for it: reliable and good-quality natural medicine. – Mohd Usman from Lucknow
Trust me, you can really see the effects. Fights bad cholesterol. I have been using this for quite some time and really can see the results. A very effective product in controlling blood pressure. One of the best products, with no side effects. A health-friendly product. – Om Verma from Noida
Oh, I CAN'T BELIEVE THIS... THIS IS WORKING SO FAST. IT REDUCED MY BP WITHIN 2 WEEKS. THANKS A LOT, YOGVEDA. – Hardik Pal from Pune
General steps to be taken to combat high BP
1. Increase your physical activity
Regular exercise, even something as basic as walking, appears to be just as effective at lowering blood pressure as commonly used BP drugs. A good exercise routine usually makes the heart healthier, making it much better at pumping blood. On most days, doing 30 minutes of cardio yields good results. You can keep challenging your ticker by increasing speed, distance, or weights over time. Losing a few pounds (even a few kilograms) of body weight can also help in lowering blood pressure.
2. Allow yourself to unwind
Stress causes our bodies to release hormones like cortisol and adrenaline. These hormones can increase your heart rate and constrict your blood vessels, causing your blood pressure to rise. Breathing techniques and practices like meditation, yoga, and tai chi, on the other hand, can help regulate stress hormones and blood pressure. Start with five minutes of soothing breathing or mindfulness in the morning and five minutes in the evening, and work your way up.
3. Reduce your sodium intake
Although not everyone's blood pressure is sensitive to salt, everyone could benefit from reducing their intake. The American Heart Association recommends that you consume no more than 2,300 mg of sodium per day (about a teaspoon of salt), with an ideal limit of 1,500 mg. Avoid packaged and processed foods like bread, pizza, poultry, soup, and sandwiches, which contain hidden salt bombs.
4. Choose foods that are high in potassium
A potassium intake of 2,000 to 4,000 mg per day can help reduce blood pressure. The mineral causes the kidneys to excrete more sodium in the urine. Although we are all familiar with the potassium content of bananas, foods such as potatoes, spinach, and beans actually contain more potassium than the fruit. Tomatoes, avocados, edamame, melons, and dried fruits are also good sources.
5.
Start eating the DASH diet
Along with the Mediterranean diet, the Dietary Approaches to Stop Hypertension (DASH) diet, which was created specifically to reduce blood pressure without medication, is usually regarded as one of the absolute healthiest eating regimens. The diet emphasises fruits, vegetables, whole grains, lean proteins, and low-fat dairy, with daily salt intake capped at 2,300 mg and an optimal limit of 1,500 mg. According to research, DASH can lower blood pressure in as little as four weeks and even help with weight loss.
6. Treat yourself to some dark chocolate
The sweet is high in flavanols, which relax blood vessels and increase blood flow, and studies show that eating dark chocolate on a daily basis can help lower blood pressure. Experts haven't found a perfect proportion of cocoa, but the higher you go, the more benefits you'll get. Chocolate shouldn't be your primary approach for controlling blood pressure, but it is a healthy option when you desire a treat.
7. Choose your beverages sensibly
Too much alcohol is known to elevate blood pressure, but a small amount may have the opposite effect. Light-to-moderate drinking (one drink or less per day) is linked to a decreased incidence of hypertension. Twelve ounces of beer, 5 ounces of wine, or 1.5 ounces of spirits make up one drink. One study determined that high amounts of alcohol are unquestionably harmful, but that what people call "moderate intake" is rarely heart-protective and may mostly be harmful. You must always drink responsibly if you're going to drink.
8. Make the switch to decaf coffee
The caffeine in one or two cups of coffee boosts both systolic and diastolic blood pressure for up to three hours, tightening blood vessels and amplifying the effects of stress. When you're stressed, your heart pumps a lot more blood, which raises blood pressure and takes a serious toll on cardiovascular health; what's more, caffeine amplifies that effect. Decaf has the same flavour as regular coffee but without these negative effects.
9. Start drinking tea
It turns out that lowering high blood pressure can be as simple as drinking one, two, or three cups of tea. In one study, adults with somewhat elevated blood pressure who drank three cups of naturally caffeine-free hibiscus tea daily for six weeks saw their systolic blood pressure drop by seven points. In addition, a 2014 meta-analysis indicated that drinking both caffeinated and decaf green tea is linked to a considerable reduction in blood pressure over time.
10. Don't overwork yourself
According to one study, working more than 40 hours per week increases your risk of hypertension by 17 percent. Leaving work on time frees up time for everyday exercise and healthful cooking. To get into the habit, set an end-of-day reminder on your work computer and depart as soon as possible.
In homes and day care centers, there is a risk of injuries and drownings in a not-so-obvious place. Certainly, when most people think of drownings, they immediately think of a pool, ocean, or lake. But there is also a risk of drowning and injury to toddlers and infants in a bathroom. It is important for parents, baby-sitters, and child-care providers to recognize these risks and take preventative measures accordingly.
Bath time can be a fun activity for both parents and their children to enjoy, but a drowning incident can occur in the blink of an eye, and parents should be wary of this possibility. With an average of 87 children under 5 drowning each year and 80% of these deaths occurring in bathtubs, bathroom safety should be a priority for parents with young children. Just as you would around any large body of water, supervising your child at all times is key to being able to respond quickly. Children can drown in just a few inches of water, so it is not worth it to leave to take a phone call or go to another room with the bathroom door left open. Dedicate your time to your child as a measure of ensuring their safety. For parents with multiple children, it is also not advisable to leave younger babies and toddlers under the supervision of another young child; young children may not be able to identify safety risks as readily as a parent. While a parent is in the bathroom with the toddler or child, it is wise to remain within arm's length of them at all times so that one is ready in the event that a child's head gets submerged under water. Another great protection against drownings in the bathroom is having at least one parent learn CPR, a procedure that could potentially save your child's life.
While drowning in the bathtub is a significant risk to be aware of, a plethora of other preventable injuries may occur in the bathroom that parents should be aware of. Around 43 thousand children are injured each year in slip-and-fall accidents in the bathroom. For toddlers, there are products that secure them in place while using the bathtub so that they will not fall over and parents can be more hands-free. For slightly older children, slip mats in the bathtub can give their feet something to stick to. Products such as these are inexpensive when compared to your child's safety. Another common injury has to do with babies and water temperature: babies have thinner skin than adults and therefore are more susceptible to scalding, so be wary of the temperature of your water before placing your child under it. Finally, it is important not to leave unattended bodies of water in your bathroom, no matter how shallow they may be; toddlers wander around and can easily fall into objects such as buckets and potentially drown. Taking these different risks into account will prepare any parent for a fun and safe bathroom experience with their child.
Parents rely on day care centers to provide a safe environment for their children during the workday. In most instances, a child is cared for by a trained individual who has the best interests of the child in mind. Unfortunately, far too many children are injured at day care centers when a staff member is untrained or lacks the patience or maturity to provide stable and nurturing care and supervision to a child. It is well known that children, especially infants and toddlers, will misbehave. Certainly, it is part of the job of a day care worker to deal with behavioral issues in a calm and safe manner.
When patience is lost, day care workers can and do inflict harm upon a child, through careless acts and in some instances through purposeful criminal actions. There are over 14 million children in some form of day care each day. Parents enroll their children in a day care program under the assumption that their children will be safely cared for while they are away. Day care centers have a legal duty to provide proper supervision and protection against injury. So, when a day care's negligent acts result in a child getting hurt, the parent of the injured child may be able to bring a legal action on behalf of the injured child to seek compensation for medical bills, pain, and suffering. Negligence cases are dealt with in civil court, where parents can sue day care centers for financial compensation. Through a civil case or claim, a parent may be able to obtain compensation on behalf of the injured child. Furthermore, a parent can be reimbursed for medical bills that the parent owes as the guardian or financially responsible person for the injured child.
Many day care centers require parents to sign a liability waiver. It should be noted that most states disfavor liability waivers when children are involved; otherwise, waivers would give a day care center a license of sorts to be negligent and put a child in harm's way without repercussions. Parents should be wary of day care centers that require the signing of a waiver that attempts to shield the center from negligent acts causing personal injury.
In communities throughout the United States, school-aged children are riding bikes to schools, parks, and other destinations without wearing a helmet. For some kids, helmets may be considered not so cool or somewhat of an inconvenience. For parents, making a child wear a helmet is just another argument to avoid. While there can be some hassles along the way, it is important for children and adults alike to wear a helmet while riding a bicycle. It is just as important to wear a helmet during a short ride as during a longer ride; an accident or crash can take place at any time, even as close as a child's own neighborhood or driveway.
Recently in Kentucky, lawmakers proposed a bill that would require children ages 12 and under to wear helmets. The senators behind the bill claim that they intend to protect children and make bike riding a safer activity for more people to enjoy. According to patient educators in Owensboro, Kentucky, brain injuries are the most serious common injuries associated with biking accidents. The bill is also lenient in that it will give courtesy warnings to children and their parents on the first offense. The concept behind the bill is that enforcing helmet wearing on bikes at young ages will increase helmet wearing as children grow into adults. Efforts such as those being made in Kentucky are occurring all over the United States as protection against head trauma becomes a more serious concern. According to the World Health Organization, injuries to the head or neck are the main cause of death and disability in motorcycle and bicycle accidents. With many parents opting to let their child ride a bicycle from a young age, it is imperative to consider having the child wear a helmet. Almost 50 thousand bicyclists were hit by cars in 2013, and one of those accidents could involve your child when you least expect it.
With many children commuting to their school by bike, a conversation about the effectiveness of bike lanes, simple traffic rules, and the requirement of a helmet can significantly reduce the chance of a bad accident occurring.
Baseball is America's pastime. From the spring through the summer and into the fall, fans of all ages, including children, visit baseball parks and stadiums throughout the country to watch major league and minor league baseball games. During most games, you will see foul balls and home run balls fly into the stands. While part of the fun of the game is to take home a souvenir, it is also a very dangerous part of the sport for spectators. A fan can suffer serious injuries when hit by a foul ball or a home run ball, and it can be quite a challenge to pursue a case for compensation due to the long-standing precedent known as the Baseball Rule.
When a foul ball flies into the stands, many fans are eager to try to catch it. What they might not realize is that over 1,750 people are injured annually by fly balls, with some injuries so severe that they cause blindness. One might think that the baseball team or stadium would be held responsible for these injuries, but U.S. courts have consistently ruled that they are not. Under a century-old legal doctrine known as the Baseball Rule, if a team takes simple precautions, such as having enough seats for all fans in attendance and installing netting behind home plate, it is not held legally responsible for injuries sustained from a foul ball. Courts have held that the dangers that come with foul balls are obvious, so fans assume the risk of any injuries that may come. This usually means that injured fans are forced to pay medical bills all on their own.
Being over a hundred years old, the Baseball Rule has not adapted to changes made in the sport of baseball. To start, seats in newer stadiums are far closer to the field, by as much as 20%, than they were 50 years ago. Additionally, athletes are pitching faster and hitting harder than ever, so a foul ball can go into the stands at speeds of over 110 miles per hour. Because of this, several legal scholars have called for the abolition of the Baseball Rule, which would require baseball teams to take much more rigorous precautions, such as full-field netting. Major League Baseball currently recommends that teams install additional netting, but, as it is not required, it is uncommon. The MLB itself even acknowledges that fans who are not protected by netting are at a much higher risk of injury, which goes to show how outdated the Baseball Rule is.
Day care centers should be safe havens. For most children, a typical day at a child care center involves some adventure, snacks, and, yes, that all-important nap. Unfortunately and tragically, some toddlers do not return home at all following a visit to a child care center. These children do not return home because they died due to the neglect of a day care center. There is nothing worse for a parent than to bury a child; nightmares turn into realities.
One place where such tragedies take place is the unlicensed day care center. In most cases, the unlicensed day care center lacks any appreciable assets or liability insurance. As such, while a legal action or lawsuit can be filed against the day care center, collecting on a potential settlement or verdict ranges from highly unlikely to practically impossible. In other words, a strong legal case against a defendant does not mean that there is an economically viable defendant to collect from.
This past July, an incident occurred at an unlicensed Tennessee day care, leaving twin babies dead. The day care center operator was indicted on two counts of criminally negligent homicide in connection with the death of the two children. The parents of the children filed a wrongful death lawsuit last month seeking compensation of over 50 million dollars. While this seems like a substantial case and demand, a victory in court will most likely be a hollow one, since any verdict will probably be uncollectible. As a general statement, unlicensed day care centers are typically operated by people who cut corners and who do not follow rules. Furthermore, unlicensed day care centers are usually operated by individuals who have little to no assets to collect upon when there is a sizable verdict against the facility. Parents should check to see if a day care center is licensed and insured. While these are not the only factors to consider, if the day care center lacks a license or insurance, that can be a red flag to stay away from that day care center and find one with both liability insurance and licensure in place. When evaluating a potential case against a day care center, one of the first factors considered is the availability and amount of liability insurance. While a day care center may be legally liable for damages related to personal injuries or death to a child, this does not mean that the day care center owner will ultimately be able to pay out a settlement or verdict rendered in favor of the injured child.
In day care centers across the nation, there are countless acts of abuse and neglect; some get swept under the rug and never get reported. A child enrolled in a child care center is often an easy target because of age, the inability to defend himself or herself, and the lack of communication skills to alert parents and other adults to the abuse. Being a child care provider is no easy task. It requires a person who is alert, physically able, and patient. Unfortunately, far too many day care centers are run or staffed by unqualified and downright abusive people. There are plenty of excellent day care centers that do not have video surveillance, but on the "wish list" of things to have in a day care center, it is helpful at times to have video surveillance in place. There are a number of benefits to having video surveillance: for one, it is another "set of eyes" supervising the care provided to the children.
Summer is filled with fun and play for children. For some, the bounce house is a great place for children to exercise, move around, and socialize with friends. The bounce house can also be the scene of a serious injury, especially for small children and toddlers. Just because a bounce house is padded and filled with air does not mean that it is a safe place. Bounce house play can be a fun time for your children this summer, but repeatedly jumping up and coming down in various ways comes with inherent risks. While it is difficult to completely ensure your child's safety within a bounce house without being over-protective, measures can be taken to minimize safety risks. When buying or renting a bounce house to use for a children's event, check to make sure that the selected bounce house is equipped with safety nets and is set up as instructed by the manufacturer. This includes remembering to securely fasten the bounce house to the ground in order to account for sudden gusts of wind that may topple the house.
While most injuries suffered within bounce houses are not severe, if wind knocks the house airborne with children inside of it, the chance of serious injury skyrockets. Though it may be hard to regulate, keeping the number of children within the bounce house below its maximum capacity further minimizes the risk of injury to your child. Dr. David Foley, medical director of an urgent care center, states that summer is the season that sees the most "slip, trip, or fall" injuries. He goes on to state that risk of injury is inevitable in bounce houses because they promote jumping and falling in different ways. The risk is even greater in these cases, as bounce houses allow for falls from even greater heights, generating more momentum and force as children fall back to the ground, which can lead to more serious injury. When setting up a bounce house outdoors, check weather reports for rain, as a slippery bounce house can be a recipe for disaster, adding more risk to an already dangerous activity. According to Dr. Foley, the most common injuries that occur within bounce houses are to the limbs. These types of injuries include, but are not limited to, twisted ankles, fractured elbows, and, in the most serious of cases, head trauma. For events in which parents plan to use a bounce house, assigning supervisors to keep watch over what's happening within the bounce house can prevent injuries that result from negligence. See Bounce House Play – Keeping Children Safe.
Children, especially toddlers and infants, lack safety awareness. Because of this, it is important to provide close supervision of children and take reasonable steps to keep dangerous objects out of their reach. For some children in schools, summer camps, day care centers, and other locations, a moment of inattention combined with a moment of danger can lead to serious and permanent eye injuries. Summer camps, day cares, and schools contain many items that could potentially injure your child's eyes. When at play, children may come across potential hazards such as projectile toys and fake guns such as BB or pellet guns; reminders to be cautious around these toys can prevent carelessness that may lead to eye injury. In sports involving small moving objects such as balls, pucks, or shuttlecocks, protective eyewear can protect your child's eyes during play. Be sure to only use protective eyewear that is ASTM F803 approved, as the wrong pair of glasses may be more harm than good in the case of an eye injury. Certain items such as laser pointers, especially green laser pointers with shorter wavelengths, can permanently injure a child's eyesight at a moment's notice and should remain out of their reach.
It is also important to be watchful around the house this summer, as many common household items can cause serious eye injury in the hands of an unsupervised child. Paper clips, wire coat hangers, bungee cords, and rubber bands, amongst others, are examples of items that should be stowed away out of reach of children around the house. Chemicals and cleaners such as bleach should also be kept in secure spots out of reach to avoid an accidental spill that may end up in the eyes. Whenever performing yardwork, be sure to keep your child away from any flying debris involved (e.g., when mowing the lawn). Even when gardening, it is wise to keep children far away from any fertilizers or pesticides, as they can cause severe damage if they get into the eyes. Be wary when cooking, as the kitchen also holds many items that can cause injury to the eyes.
Certain kitchen utensils such as knives should be kept in child-proof locations and shouldn't be left out unsupervised. When cooking with hot oils, a grease shield can prevent any splashes from hitting your child in the eyes. In the event of any eye irritation, a child care provider will need to be cautious in cleaning out the child's eyes. Before doing so, washing your hands can prevent further irritation during the cleaning process. Avoid touching, rubbing, or pressing on the eye itself, as the contact can increase irritation. According to KidsHealth.org, flushing a child's eyes with warm water for up to 15 minutes is a good way to try to remove any foreign bodies causing the irritation. If the foreign body still remains after 15 minutes, medical attention may be needed for its removal. Cautious preparation, along with quick responses in the event of an incident, may make the difference in whether a child suffers an eye injury, so be sure to keep safety in mind throughout your fun summer activities this year!
With a mobile phone or tablet in our hands most of the day, we now live in a world of almost endless distractions. While technology is wonderful and helpful, it also makes certain activities a bit more dangerous for children. Whenever a child is in or near water, there is a risk of drowning. Adult supervision is key to the safety of children; however, the physical presence of an adult in the water area is a bit different from the attention of an adult. If the adult is physically present in the area of the pool or beach, that presence may not mean much if the adult is otherwise engaged in the latest text, tweet, or e-mail on the phone or tablet. Because of this, it is important to keep the safety needs of the children top of mind.
Nearly 7 out of 10 drownings occur while an adult is present. In a life-and-death situation, people need to be alert and aware of their surroundings. However, the pool presents a myriad of distractions: if a person is swimming in the pool, they can be surrounded by splashing and other people, which could take their attention away from the child they are supposed to be watching. And even if the watcher is out of the pool, they could be reading a book or looking at their phone at the precise moment they need to step into action and prevent a drowning. But drownings are almost always preventable; in fact, drowning is the leading preventable cause of death for children under the age of 5.
So what steps must one take in order to prevent a child from drowning? Among the most important precautions is to have a Designated Watcher whose sole purpose is to keep an eye on the people in the pool. These people are reminded not to look at their phones or other distractions and not to leave the pool area unless another person replaces them. While a Watcher is the best preventative measure, there are other choices one can make to improve pool safety. Installing a gate around the pool would keep small children from running into the water and drowning before an adult can intervene. It also helps to have a number of flotation devices, like pool noodles or kickboards, that can be thrown into the pool for a child to grab onto. If these measures are implemented, children will be markedly safer in the pool this summer.
In Islamic terms, the apostate (murtadd) is one who has renounced Islam and chosen to be a disbeliever. The renouncement of Islam takes place when one denies one of the pillars of Islam (tawhid, prophethood and resurrection) or one of its clear teachings that all Muslims know and believe in, given that denying that teaching is equal to and necessitates the denial of prophethood itself (meaning that one cannot be a Muslim and not believe in it, such as the obligation of prayer, unlike other subjects that are not of such clarity and are sometimes even debated amongst Muslims), and that the person denying it is aware of the above-mentioned necessitation.

It is necessary to know that the punishment for apostasy does not apply to one who has turned away from religion but does not announce it. Therefore, it is correct to say that the apostate is punished for an act that has to do with society, not merely because of some personal beliefs. The apostate violates others' right to have an Islamic environment in society and is a threat to the beliefs of ordinary people who are not religious experts and are not capable of answering all of their religious questions on their own. At the advent of Islam, a group of its enemies planned on falsely accepting Islam at the beginning of the day and turning away from it at the end of the day, thereby undermining the faith of the believers and weakening their religious spirit. It is also very hard to prove apostasy, to the extent that this Islamic law was put into practice only a few times during the advent of Islam. Hence, one can say that the psychological effect of this rule plays a far more important role in keeping the Islamic society healthy and clean than the punishment itself.

In Islam, there are two types of apostasy, milli and fitri. The fitri apostate is one whose parents (one or both, it makes no difference) were Muslim at the time of his/her conception, who accepted Islam after reaching puberty and then renounced it afterwards. The milli apostate is one whose parents were not Muslim at the time of his/her conception, who was a kafir (disbeliever) after puberty, then embraced Islam and once again became a kafir afterwards.

In the case of the apostate being a female, regardless of being milli or fitri, she is first asked to repent, and if she does so, she is freed. In the case of the apostate being a male, if he is a milli apostate, he is first asked to repent, and if he does so, he is freed. However, if he is a fitri apostate, there is a disagreement among the Shia scholars. Here, we refer to those scholars who believe that the repentance of a male fitri apostate is also accepted.

Ibn Junayd Iskafi (d. 382 AH), a well-known Shia jurist, writes: «إنّ الإرتداد قسم واحد و إنّه یستتاب فإن تاب و الاّقتل» "Apostasy has only one type. The apostate is asked to repent; if he repents [then he is forgiven], or else, he is killed." The same view is concluded from the following statement by Ibn Barraj (d. 481 AH): «و إذا کان المرتد مولودا علی فطره الإسلام … فان تاب لم یکن لأحد علیه سبیل» "A fitri apostate … if he repents, then no one is allowed to punish him." Shahid Thani (911-955 AH) refers to the fact that the general content of the repentance-related Verses and narrations requires that they include the repentance of the male fitri apostate as well. In another part of his Masalik al-Afham, Shahid Thani suggests that the best-supported view is that the repentance of all types of apostates should be accepted. Also, Shaykh Mufid (d.
413 AH) writes: «واتفقت الإمامیه علی ان أصحاب البدع کلهم کفار و إنّ علی الإمام أن یستتیبهم عند التمکن بعد الدعوه لهم و اقامه البینات علیهم فإن تابوا عن بدعهم و صاروا إلی الصواب و إلّا قتلهم لردتهم عن الایمان». "According to all Shia scholars, those who bring about innovations in religion are disbelievers. It is then the Imam's duty to invite them back to Islam, provide them with arguments and ask them to repent. In case they repent [then they are forgiven]; otherwise, they are killed for apostasy."

The supporters of this opinion have argued from several Verses and narrations, including the following:

إِنَّمَا التَّوْبَةُ عَلَى اللَّهِ لِلَّذينَ يَعْمَلُونَ السُّوءَ بِجَهالَةٍ ثُمَّ يَتُوبُونَ مِنْ قَريبٍ فَأُولئِكَ يَتُوبُ اللَّهُ عَلَيْهِمْ وَ كانَ اللَّهُ عَليماً حَكيماً
"Acceptance of repentance by Allah is only for those who commit evil out of ignorance, then repent promptly. It is such whose repentance Allah will accept, and Allah is all-knowing, all-wise." (The Holy Qur'an, 4:17)

وَ هُوَ الَّذي يَقْبَلُ التَّوْبَةَ عَنْ عِبادِهِ وَ يَعْفُوا عَنِ السَّيِّئاتِ وَ يَعْلَمُ ما تَفْعَلُونَ
"It is He who accepts the repentance of His servants, and excuses their misdeeds and knows what you do." (The Holy Qur'an, 42:25)

قُلْ يا عِبادِيَ الَّذينَ أَسْرَفُوا عَلى أَنْفُسِهِمْ لا تَقْنَطُوا مِنْ رَحْمَةِ اللَّهِ إِنَّ اللَّهَ يَغْفِرُ الذُّنُوبَ جَميعاً إِنَّهُ هُوَ الْغَفُورُ الرَّحيمُ
"Say that Allah declares, 'O My servants who have committed excesses against their own souls, do not despair of the mercy of Allah. Indeed, Allah will forgive all sins. Indeed, He is the All-forgiving, the All-merciful.'" (The Holy Qur'an, 39:53)

According to these Verses, acceptance of repentance by Allah (SWT) is not limited to one sin or another; rather, it is unconditional and includes apostasy as well.

كَيْفَ يَهْدِي اللَّهُ قَوْماً كَفَرُوا بَعْدَ إيمانِهِمْ وَ شَهِدُوا أَنَّ الرَّسُولَ حَقٌّ وَ جاءَهُمُ الْبَيِّناتُ وَ اللَّهُ لا يَهْدِي الْقَوْمَ الظَّالِمينَ أُولئِكَ جَزاؤُهُمْ أَنَّ عَلَيْهِمْ لَعْنَةَ اللَّهِ وَ الْمَلائِكَةِ وَ النَّاسِ أَجْمَعين خالِدينَ فيها لا يُخَفَّفُ عَنْهُمُ الْعَذابُ وَ لا هُمْ يُنْظَرُون إِلاَّ الَّذينَ تابُوا مِنْ بَعْدِ ذلِكَ وَ أَصْلَحُوا فَإِنَّ اللَّهَ غَفُورٌ رَحيمٌ
"How shall Allah guide a people who have disbelieved after their faith and after bearing witness that the Apostle is true, and after manifest proofs had come to them? Allah does not guide the wrongdoing lot. Their requital is that there shall be upon them the curse of Allah, the angels, and all mankind. They will remain in it forever, and their punishment shall not be lightened, nor will they be granted any respite, except such as repent after that and make amends, for Allah is all-forgiving, all-merciful." (The Holy Qur'an, 3:86-89)

These Verses are clearly about the apostates and their punishments, especially in the Hereafter. The last Verse explicitly excludes those apostates who repent and make amends.

يَحْلِفُونَ بِاللَّهِ ما قالُوا وَ لَقَدْ قالُوا كَلِمَةَ الْكُفْرِ وَ كَفَرُوا بَعْدَ إِسْلامِهِمْ وَ هَمُّوا بِما لَمْ يَنالُوا وَ ما نَقَمُوا إِلاَّ أَنْ أَغْناهُمُ اللَّهُ وَ رَسُولُهُ مِنْ فَضْلِهِ فَإِنْ يَتُوبُوا يَكُ خَيْراً لَهُمْ وَ إِنْ يَتَوَلَّوْا يُعَذِّبْهُمُ اللَّهُ عَذاباً أَليماً فِي الدُّنْيا وَ الْآخِرَةِ وَ ما لَهُمْ فِي الْأَرْضِ مِنْ وَلِيٍّ وَ لا نَصيرٍ
"They swear by Allah that they did not say it. But they certainly did utter the word of unfaith and renounced faith after their Islam.
They contemplated what they could not achieve, and they were vindictive only because Allah and His Apostle had enriched them out of His grace. Yet if they repent, it will be better for them; but if they turn away, Allah shall punish them with a painful punishment in this world and the Hereafter, and they shall not find on the earth any guardian or helper." (The Holy Qur'an, 9:74)

This Verse implies that the repentant apostate will enjoy "goodness" both in this world and the Hereafter, and that there is no punishment for him in either this world or the Hereafter.

وَ مَنْ يَرْتَدِدْ مِنْكُمْ عَنْ دينِهِ فَيَمُتْ وَ هُوَ كافِرٌ فَأُولئِكَ حَبِطَتْ أَعْمالُهُمْ فِي الدُّنْيا وَ الْآخِرَةِ وَ أُولئِكَ أَصْحابُ النَّارِ هُمْ فيها خالِدُون
"And whoever of you turns away from his religion and dies faithless, they are the ones whose works have failed in this world and the Hereafter. They shall be the inmates of the Fire, and they shall remain in it forever." (The Holy Qur'an, 2:217)

The argument is that this Verse attaches punishment to the apostate only on the condition that he "dies faithless". This implies that if an apostate repents and does not die faithless, there will be no punishment for him.

Some narrations also mention repentance as an obstacle to the punishment of the apostate, without limiting this to a specific type of apostasy:

الْمُرْتَدُّ … يُسْتَتَابُ ثَلَاثَةَ أَيَّامٍ
"As for the apostate … he is invited to repent for 3 days."

عَنْ أَبِي عَبْدِ اللَّهِ ع أَنَّ رَجُلًا مِنَ الْمُسْلِمِينَ تَنَصَّرَ فَأُتِيَ بِهِ أَمِيرُ الْمُؤْمِنِينَ ع فَاسْتَتَابَه
"Imam Sadiq (AS) said: a Muslim converted to Christianity. He was taken to Imam Ali (AS), who invited him to repent."

عَنْ أَبِي جَعْفَرٍ وَ أَبِي عَبْدِ اللَّهِ ع فِي الْمُرْتَدِّ يُسْتَتَابُ فَإِنْ تَابَ وَ إِلَّا قُتِل
"Imam Baqir (AS) and Imam Sadiq (AS) said: the apostate is invited to repent. If he repents [then he is forgiven]; otherwise, he is killed."

Finally, according to some scholars such as Muhammad Jawad Mughniya, intellect also requires that the repentance of all apostates be accepted. Interpreting Verse 217 of Chapter al-Baqara, he writes: «یدل بالصراحة علی ان المرتد اذا تاب قبل الموت یقبل الله منه و یسقط العقوبة عنه و العقل حاکم بذلک». "It clearly proves that if an apostate repents before his death, Allah (SWT) accepts his repentance and removes his punishment. Intellect also has the same ruling."

References:
- Imam Khomeini, Tahrirul-Wasilah, vol. 2, p. 366; Ibn Quddamah, Al-Mughni, vol. 10, p. 74.
- Imam Khomeini, Tahrirul-Wasilah, vol. 1, p. 118.
- Ibid., vol. 2, p. 366. Some believe that one of the parents must be Muslim at the time of the child's birth (Ayatullah Khu'i, Mabani Takmilatil-Minhaj, vol. 2, p. 451), and others do not consider announcing and embracing faith after puberty as one of the conditions (Shahid Thani, Masalikul-Afham, vol. 2, p. 451).
- Imam Khomeini, Tahrirul-Wasilah, vol. 2, p. 336.
- Ibid., vol. 2, p. 494.
- Quoted from Masalik al-Afham, vol. 2, p. 451.
- Al-Muhazzab, vol. 2, p. 552.
- Masalik al-Afham, vol. 2, p. 451.
- Awa'il al-Maqalat, p. 49.
- Al-Kafi, vol. 7, p. 258.
- Al-Istibsar, vol. 4, p. 253.
- Al-Tafsir al-Kashif, vol. 1, p. 325.

This article was written by Hujjat al-Islam Sayyid Mostafa Daryabari and Dr. Morteza Karimi.
Kefir: Meaning, Health Benefits, Dangers, and Recipe

Kefir is a probiotic beverage that contains up to 30 different strains of good bacteria and a variety of bioactive substances. These probiotics, such as lactic acid bacteria, have been shown to ease a variety of digestive problems, improve immune function, and combat cancer-causing agents and dangerous germs.

What Is Kefir?
Kefir is a milk-based beverage fermented using "grains" called starters, which are actually a mix of yeast and bacteria. It originated in the Caucasus Mountains of Eastern Europe, and its name derives from the Turkish word keyif, meaning "feeling pleasant." Word of the mixture's potent effects spread among the tribes of the region. In the 19th century, Russian doctors learned of its renowned healing properties and used it to treat diseases such as tuberculosis. Almost any type of milk, including goat, sheep, cow, soy, rice, or coconut milk, can be used to make this beverage. Even coconut water may be used.

Types of Kefir
- Cow's milk kefir
- Water kefir
- Goat's milk kefir
- Coconut water kefir

How to Make Kefir
Kefir can be produced at home. To do this, one will want a sterile setting and tools to stop the wrong kinds of bacteria from entering the liquid. To start, one will require:
- Active kefir grains
- Milk: goat, cow, or coconut milk
- A glass jar
- A coffee filter and a rubber band
- A mesh strainer and a storage container

The steps are as follows:
- Firstly, wash your hands thoroughly with soap and warm water.
- Wash the jar with hot, soapy water to sterilize it, and allow it to air-dry upside down on a clean drying rack.
- When the jar is completely dry, pour in the milk.
- Add 1 teaspoon of kefir grains for each cup of milk.
- Ensure there is room at the top, because the liquid will swell as it ferments.
- Cover the container with the coffee filter and fasten it with the rubber band.
- Keep the jar in a warm location at about 70°F (21°C) for 12 to 48 hours.
- If the liquid begins to separate, shake the jar gently, and keep it out of direct sunlight.
- When the liquid has reached the desired texture, filter it through the mesh strainer into a sterile storage container.
- Lastly, store it in the fridge with a tight cover for up to a week.

Kefir will be sweeter if the fermentation process is shorter; a longer fermentation will result in a tarter beverage. The kefir grains collected in the strainer can be saved and used in subsequent batches.

How to Drink Kefir
Kefir is a nice complement to your everyday diet, but you can also use it to make quick, nutritious salad dressings. There is even a kefir dish, called okroshka, prepared in Eastern Europe from beetroot, cucumber, dill, and kefir. This dish is a delicious way to incorporate the healthful beverage into your diet.

Uses of Kefir
Kefir can be consumed similarly to milk and plain yogurt. Try:
- Sipping it cold in a glass
- Drizzling it on cereal, oats, or muesli
- Incorporating it into smoothies or enjoying it with fruit

Kefir can also be used in:
- Baked products
- Soups
- Frozen yogurt
- Creamy salad dressings

Note that boiling kefir will inactivate its live microbes.

Nutritional Value of Kefir
Depending on the ingredients and the fermenting process, kefir's nutritional content and the probiotic bacteria it contains can vary greatly.
Traditional milk kefir contains approximately:
- 90% water
- 160 calories
- 12 grams of carbohydrates
- 10 grams of protein
- 8 grams of fat
- 390 milligrams of calcium (30 percent DV)
- 5 micrograms of vitamin D (25 percent DV)
- 90 micrograms of vitamin A (10 percent DV)
- 376 milligrams of potassium (8 percent DV)
- 6% natural sugars
- B vitamins
- Vitamin C

The most popular and widely available type of fermented milk beverage is milk kefir, which is sold in almost all health food stores and most large supermarkets. It is most commonly made from goat, cow, or sheep milk; however, some retailers now sell coconut milk kefir, which means it contains no lactose, dairy, or actual "milk" at all. Although milk kefir is not particularly sweet on its own, various flavors can be added to enhance its taste and appeal. When making kefir at home, you can add raw honey, maple syrup, vanilla extract, or organic stevia extract to sweeten and flavor it. Most store-bought kefirs are flavored with additions like fruit or cane sugar.

Benefits of Kefir Milk
The weight-loss benefits of milk kefir can only be enjoyed when kefir is used in moderation. Other benefits include:
- Being appropriate for lactose-sensitive people
- Preventing tooth decay and maintaining good oral health
- Reducing inflammation in healthy individuals
- Improving digestion and relieving constipation

Water kefir is a dairy-free kefir suitable for those who are lactose intolerant or who follow a dairy-free diet such as veganism or paleo. You cannot begin a water kefir culture with milk kefir grains, because water kefir grains have a different bacterial profile. Popular varieties of water kefir include those made with fruit juice and coconut water, but you may also make it simply with water and sugar.

Benefits of Kefir Water
You should use water kefir differently from milk kefir. Toss it into nutritious sweets, porridge, or salad dressing, or just drink it straight. It is not the ideal substitute for dairy products, because it lacks their creamy texture and tartness.

Kefir vs. Yogurt
Kefir and yogurt are quite similar: both are made from milk fermented with good bacteria. They share similar nutritional profiles, including a fair amount of protein and a low fat content. Most people pick yogurt as their preferred source of probiotics; it is one of the best-known probiotic sources available. In comparison, kefir grains actually contain more than 60 different species of bacteria. This means that your body benefits from a wider range of probiotics, giving you a better chance of getting the nutrients that are ideal for you.

Benefits and Side Effects of Kefir
Kefir is good on its own and as a topping for other dishes. It can be used to improve health and diet, and individuals who are lactose intolerant might be able to drink some kefir without experiencing any problems, so long as it is manufactured and stored safely. Kefir should not be consumed by those who are allergic to milk unless it is made with non-dairy milk. Diabetes sufferers should read labels carefully and stick to plain varieties without added sugar.

Health Benefits of Kefir
The extraordinary health benefits of kefir extend beyond digestion; they reach your immune system, heart, and inflammation levels, among other body systems.
Some of the health benefits are listed below:
- The bacteria in kefir can aid with digestive issues and may help prevent cancer
- Reduces the symptoms of lactose intolerance
- Helps with allergic responses
- Could improve bone health
- Can be good for your dental health
- Has some antimicrobial properties
- Can ease menstrual periods
- Can aid in weight control
- Is a great protein source
- Contains probiotics that can strengthen immunity
- Promotes good skin health

Side Effects of Kefir
Kefir is generally safe to drink, but dietitians advise starting with 1 cup per day when introducing it to your diet. Otherwise, a sharp rise in probiotic intake might result in:
- Uncomfortable stomach issues, such as bloating
- Increased susceptibility to infection in persons with compromised immune systems

Frequently Asked Questions

What is the difference between kefir and yogurt?
Both are typically prepared using a starter of "live" active cultures, which are responsible for cultivating the beneficial bacteria. Kefir, unlike yogurt, comes only from mesophilic strains and is cultured at room temperature with no heating involved. Despite their numerous similarities, kefir typically contains more probiotics and a wider variety of bacterial and yeast strains.

How much kefir should I drink for weight loss?
There is no set quantity of kefir to consume in order to lose weight. Rather than treating it as a weight-loss strategy, use kefir as a supplement to a balanced diet rich in plant-based foods to nourish your body and your gut microbiome.

Is it safe to make kefir at home?
Definitely! When fermenting at home, it is crucial to follow the recipe directions carefully, because food can spoil and become unhealthy to eat if fermentation is carried out at the wrong temperature, for too long, or with unsterile equipment.

Is it safe to drink kefir every day?
To maximize the health advantages of this potent beverage, most sources advise aiming for roughly one cup each day. It is best to begin with a smaller amount and gradually increase it to the appropriate level.

Does kefir cause weight gain?
As long as you incorporate it within a generally balanced diet, it should not be harmful on its own. Due to its high nutrient content, it might even support weight loss or weight maintenance. Pick varieties that are unsweetened and low in sugar to get the greatest benefits with the fewest unnecessary calories.

Is kefir good for stomach upset?
Kefir works best when consumed in the morning on an empty stomach; taken this way, it enhances digestion and gut health.

What are the benefits of kefir for the skin?
- Apply it to dry skin at night to give the active microbes a full 8 hours to do their work
- Reduces rosacea or redness
- Exfoliates the skin
- Balances skin pH
- Relieves fine wrinkles and dullness
- Calms stressed-out skin
- Brightens a tired face
- Reduces acne

Is there a best time to drink kefir?
There is no scientific evidence that kefir is better consumed in the morning or at night. Either way, it is important to remember the proven habits that help achieve and maintain a healthy body weight and general health.

What does kefir do to your body?
According to studies, it strengthens your immune system, helps with digestion issues, enhances bone health, and may even fight cancer.

Is it good to drink kefir every day?
The consumption of 1-3 cups (237-710 ml) of kefir each day can be a wonderful way to increase your probiotic intake. People with diabetes or autoimmune diseases and those on low-carb or ketogenic diets may need to restrict their intake.

How much kefir should I drink a day?
Although it is entirely up to you, for the best probiotic health we advise consuming one to two 8-ounce cups of kefir daily. Kefir affects people differently, so experiment with your serving size to find what works for you. Some people consume just a few ounces a day, while others consume 32 ounces or more!

Does kefir make you gain weight?
Protein-rich foods like kefir make you feel fuller for longer. However, consuming too much kefir might impede or even reverse weight loss.

Will kefir make you poop?
Kefir consumption has been linked to an increase in bowel movements in people who are constipated, according to a preliminary study. It also appears to soften stools.

How long does it take for kefir to work in the body?
Kefir may take two to four weeks to improve health. For best results, drink 1-2 cups of kefir daily.

Which is better, kombucha or kefir?
Kombucha and kefir both contain beneficial microbes, but kefir has a higher concentration of lactic acid bacteria (LAB). Kefir can thus be thought of as a liquid probiotic supplement, and kombucha as more of a digestive aid. Another significant distinction is that kombucha, being made from tea, frequently contains caffeine.

Is kefir anti-inflammatory?
Kefir has anti-inflammatory effects, reducing the production of cytokines that cause inflammation, including IL-1, TNF-α, and IL-6. Some researchers have suggested using kefir (and its byproducts) to help limit the expression of proinflammatory cytokines in COVID-19 patients.

Is kefir good for your skin?
Kefir is more than just a nutritious beverage. Because of its probiotics and naturally occurring alpha-hydroxy acid (AHA), it also makes a good, nutritious face mask for any skin type.

Who should not drink kefir?
Kefir consumption should be avoided if you have a severe, potentially fatal milk allergy, because it can trigger serious allergic responses. If you have a milk allergy, you can still consume kefir made with a non-dairy "milk" such as rice milk.

Does kefir help you sleep?
Goat's milk kefir is a good food supplement for sleep: it contains tryptophan and live microorganisms that assist your gut flora. Your body needs tryptophan-containing foods to make enough serotonin for you to sleep comfortably.

How long does kefir last in the fridge once opened?
About 3-5 days. If properly stored, homemade kefir should remain fresh for two to three weeks.

Is kefir good for your liver?
Research suggests that kefir improves fatty liver syndrome with respect to body weight, energy expenditure, and basal metabolic rate. The results also showed that it inhibits serum glutamate oxaloacetate transaminase and glutamate pyruvate transaminase activities and lowers the triglyceride and total cholesterol content of the liver.

Which is better, kefir or yogurt?
Kefir has more probiotics than yogurt, which is the main nutritional difference between the two.

Is kefir better than probiotic pills?
Probiotic tablets are frequently destroyed by stomach acid before the body can use them. Consuming probiotic foods like kefir, kombucha, and cultured vegetables is therefore much more effective than taking pills.
How can I tell if my water kefir grains are working?
- Color: During the 48-hour culturing process, the liquid's color will change.
- Flavor: After 48 hours of culturing, the final product should taste less sweet than the sugar water you started with.

Is kefir good for the colon?
Kefir may help the large intestine maintain a healthy bacterial balance, enhance lactose digestion, and possibly even improve stool consistency.

Is kefir good for acid reflux?
Consuming a probiotic beverage on a daily basis may alter gut flora and lessen symptoms of heartburn and reflux.

Is water or milk kefir better?
Milk kefir comes out on top in the sheer range of its bacteria and yeasts: it can contain up to 56 distinct strains, according to research, while other studies suggest ordinary water kefir contains only up to 14 distinct bacterial and yeast strains.

Is kefir acidic or alkaline?
Kefir is an acidic-alcoholic fermented milk product. It has a mildly acidic flavor and a creamy consistency.

Is kefir good for joints?
Research findings suggest that kefir peptides may prevent bone degradation in the ankle joint and have an anti-inflammatory effect. One study supported the notion that kefir peptides hold promise as a rheumatoid arthritis treatment.

Is kefir good for kidneys?
Some findings support the idea that kefir supplementation may help reduce oxidative stress, which is linked to improved renal function and may help halt the progression of diabetic nephropathy.

Is kefir good for the lungs?
Kefir peptides improve overall SOD activity in the lungs while decreasing ROS levels, NF-κB activation, proinflammatory cytokine secretion, and inflammatory cell infiltrates.

Is kefir anti-aging?
Kefir milk includes alpha-hydroxy acids (AHAs) in the form of lactic acid, similar to fruit-derived enzymes. This means that cleansing, exfoliating, and moisturizing with the "mystery magic milk" will gently remove dead skin cells, reducing the visibility of wrinkles and fading age spots and freckles.

Does kefir make you fart?
Kefir can make some people feel uncomfortable, mainly in the form of more gas. Others may find that it greatly reduces their bloating and gastric discomfort.

Does kefir interact with medications?
Using medications that suppress the immune system raises the likelihood of contracting a bacterial or yeast infection. Combining kefir with immune-suppressing drugs may therefore make you more susceptible to illness.

When should I drink kefir, at night or in the morning?
In theory, you can have kefir whenever you like. Because it is an energy booster, we often advise taking it first thing in the morning rather than last thing at night.

Can kefir help with anxiety?
Science suggests that the live bacteria in kefir may be beneficial here too. According to a review of studies printed in the journal General Psychiatry, people who suffer symptoms of anxiety may benefit from taking steps to manage their gut microbes with probiotic food.

Does kefir lower your blood pressure?
Drinking kefir, a probiotic fermented milk that helps keep the proper balance of healthy bacteria in the digestive tract, may lower blood pressure.

Why is my kefir slimy?
Kefir can get slimy for a variety of reasons, such as too many grains or not enough nutrients.

Why is my kefir fizzy?
Two of the yeasts used to make kefir, Saccharomyces kefir and Torula kefir, ferment lactose into a small quantity of alcohol and carbon dioxide, which causes carbonation.
Probiotics, vitamins, and minerals abound in kefir. According to studies, kefir can enhance health when used regularly (often every day for 2-4 weeks). Probiotics are healthy bacteria and yeasts of the kinds already present in your body, known to naturally enhance a number of aspects of your health; because they occur naturally in the body, they have a particularly positive effect. To maximize the health advantages of kefir, choose reputable sellers of premium, fresh grains, because the quality of your kefir depends on the quality of the grains.
RIFAT AFROZE, MD. MAHBUBUL KABIR, AND ARIFA RAHMAN wrote about BRAC training.

ABSTRACT: This study investigated the effect of the BRAC training programme for English language teachers of rural non-government secondary schools. It examined the change in the teachers in terms of their pedagogic skills, language skills development, knowledge about Communicative Language Teaching (CLT), and their attitudes towards this new approach. The findings pointed to a mixed picture. In spite of a general improvement in teachers' knowledge about CLT and the skills involved in its application in the classroom, there was little evidence of much difference in the existing classroom practices of trained and non-trained teachers. More importantly, students were not being affected positively. Although most teachers perceived the BRAC training programme as relevant to and useful for their professional development, they did not believe that CLT could be effectively applied to the classroom settings of rural schools, implying a set of ingrained beliefs which influenced teachers' attitudes and behaviour in the classroom.

The need for quality education is steadily being recognized as a prerequisite to human development and economic growth. As a result, individual governments, international agencies, and non-government organizations are investing increasingly large amounts in the expansion and improvement of educational provision. In spite of constraints on resources, there are concerted attempts in the developing world, too, to provide opportunities for effective education. Since the eighties, the teacher has been considered the most important variable in educational improvement, and there has been a strong focus on the professional development of the teacher (Hargreaves and Fullan 1992). Thus, the need for an effective provision to initiate, develop, and sustain teachers through an appropriate process of intervention and training is gradually being accepted as among the highest priorities of educational planning and practice.

Orientations to Communicative Language Teaching (CLT)
The search for an appropriate method to teach foreign languages has been going on for the last one hundred years (Howatt 1984). The methods tried have reflected varied changes in perspectives on the nature of language and on theories of learning. Since the 1970s, the second/foreign language teaching field worldwide has settled on Communicative Language Teaching (CLT). Its chief proponents were a group of influential educational linguists who drew upon insights from sociolinguistics, educational psychology, and second language acquisition studies. CLT has a humanistic orientation, treats learners as individuals with different learning styles, and, most significantly, focuses on language in use. It is best considered an approach rather than a method. Richards and Rodgers (2001) have drawn up a number of principles underlying the CLT approach. These are:
• Learners learn a language through using it to communicate.
• The goal of classroom activities should be authentic and meaningful communication.
• Fluency is an important component of communication; therefore, learners need to be provided with all kinds of opportunities to facilitate communication.
• Communication involves the integration of all four language skills (speaking, listening, reading, and writing).
• Learning is a process of creative construction and involves trial and error.
In line with this modern orientation to teaching English, and with the worthwhile objective of improving the quality of the teaching and learning of English, the authorities introduced CLT nationally in Bangladesh at the secondary level. The new English for Today (ET) series of textbooks was written by teams of national and international experts, and attempts were made to train secondary school teachers in the new methodology. Applying this methodology in the Bangladeshi classroom shows the kind of demands the CLT approach places on the teacher. Once an all-knowing authoritative figure, the teacher is now asked to be a facilitator, a guide, and a tolerant supporter of the learning process.

The BRAC Programme
In 1996 the National Curriculum and Textbook Board (NCTB) introduced the new ELT curriculum, textbooks, and a revised teaching methodology for English language teaching at the secondary level. Initially, this brought about more problems than benefits for both students and teachers. School teachers, especially in rural areas, who were basically weak in the English language and in teaching skills, were not capable of coping with the demands of the change. Moreover, apart from BRAC training workshops held from time to time, there was not enough initiative for familiarizing teachers with the new curriculum, the textbooks, or the teaching methodology. The programme carried out a needs assessment study and found that most rural teachers were facing difficulties in delivering the new materials in class, which was hampering the quality of education in secondary schools (PACE Report, undated). The adverse impact on rural students showed in a rising rate of failure in public examinations.

Consequently, a BRAC pilot project was started in 2001 to provide subject-based residential training for English, mathematics, and science teachers of rural non-government high schools in order to enhance their capacity, particularly in teaching the new topics introduced in the revised curriculum. The pilot covered 22 secondary schools in rural areas. The programme developed 28 training materials for English (12 for classes 6-8 and 16 for classes 9-10). These deal with familiarization with the new concepts in the curriculum, the textbooks, the four language skills, and, most importantly, teaching methodology/pedagogy. Up to November 2005, a total of 4,832 English teachers (2,357 of classes 6-8 and 2,475 of classes 9-10) participated in a residential training course lasting six weeks (3 weeks for Module 1 and 3 weeks for Module 2). A one-week refresher follow-up course was initially planned but was not implemented.

RESEARCH OBJECTIVE and METHODOLOGY
The main objective of this research was to find out about the existing classroom practices of trained and non-trained teachers and their perceptions of CLT. As a corollary, the challenges faced by teachers were also investigated. This is part of a larger study in which both qualitative and quantitative approaches were used; this paper presents only the qualitative findings. The data were collected from March to May 2006 through observation of secondary school classrooms (of both trained and non-trained teachers), focus group discussions (FGDs) with students of trained and non-trained teachers, and interviews with trained teachers.
To investigate the existing classroom practices, 79 English classes of trained and non-trained teachers (40 classes of trained and 39 classes of non-trained teachers) were observed. Of the 79 observed English classes, 45 dealt with English 1st paper and 34 with English 2nd paper. Fourteen FGDs with students (of trained and non-trained teachers) were conducted in seven districts to review their perspectives on the new teaching techniques. Twenty-six trained teachers were then interviewed to find out about the existing teaching-learning process in the classrooms and the challenges the trained teachers faced in different situations.

This section presents the findings on the comparative performance of the trained and non-trained teachers (through observation), followed by the students' perceptions of trained and non-trained teachers (through FGDs) and the challenges that trained teachers faced in applying the CLT method they had learned in the BRAC training (through interviews and observations).

The observation data showed a general diversity of performance among both the trained and the non-trained teachers. Although the trained teachers attempted to make more use of the CLT method, there is little evidence of much difference in the existing classroom practices of trained and non-trained teachers (Table 1). The mean age of the teachers was 46 years; the longest teaching experience was 34 years and the shortest 3 years, with a mean of 18 years. The maximum class enrolment was 102 students and the minimum 21. With regard to attendance, a maximum of 68 and a minimum of 10 students were present during the survey. On average, 34 students attended classes regularly, a poor number compared to enrolment. The average planned duration of the English class was 42 minutes, but in reality an average of only 38 minutes was spent in class.

Table 1 sums up some classroom activities of trained and non-trained teachers (related to criteria like classroom behaviour, teaching style, use of English in class, and approach towards CLT). Teachers' performance is shown as the percentage of observed classes.

Table 1: Classroom Performance of Trained and Non-trained Teachers (%)
Criteria | Classes of Trained Teachers | Classes of Non-trained Teachers
Dealing with mistakes | 70 | 85
Illustrating topic with examples | 45 | 39
Encouraging questions from students | 68 | 51
Making purpose and guidelines of lesson clear to students | 63 | 54
Responding to students' questions sympathetically | 80 | 74
Students responding to teachers' questions enthusiastically | 88 | 46
Meaningful communication taking place in class | 33 | 21
Doing group/pair work appropriately with all students | 45 | 23
Problems faced while doing group/pair work | 23 | 18
Using English most of the time | 52 | 38
Moving around in class | 76 | 67
Using teaching aids productively | 95 | 72

In dealing with mistakes, in about 70% of the classes of trained and 85% of the classes of non-trained teachers, teachers dealt frequently with mistakes, which occurred in sentence construction, spelling, and grammatical activities. Teachers illustrated a topic using real-life objects, pictures, and charts (teaching aids) in 45% of the classes of trained and 39% of the classes of non-trained teachers; they used appropriate examples in a logical manner on the blackboard.
Teachers’ encouragement of students in asking questions and clarification in classrooms occurred in 68% of the classes of trained and 51% of the classes of non-trained teachers. In 32% of the classes of trained and 49% of the classes of non-trained teachers, the teacher did not encourage students to ask questions in the classroom. In contrast to non-trained teachers, trained teachers appeared to play a reasonably supportive role towards students. In the case of students’ response towards teaching, in 88% of the classes of trained and 46% of the classes of non-trained teachers, students (especially front benchers) responded eagerly. Group/pair work with students was appropriately used in 45% of the classes of trained and 23% of the classes of non-trained teachers. All the students participated in pair/group work and the teacher monitored their activities and time-keeping was done properly. But in large classes the teacher was not able to monitor student’s activities. Students thus became disruptive. Sometimes, while a teacher was conducting pair/group work, other teachers (in adjacent rooms) complained about the noise. Students faced problems in group or pair work in 23% of the classes of trained and 18% of the classes of non-trained teachers. They were not able to complete their activities according to the teacher’s instructions. Sometimes, they failed to understand what they had to do. They also faced problems with vocabulary, sentence construction while working in pairs or groups. Teachers of both groups were not able to monitor students’ activities properly. That is why students did not perform satisfactorily. Sometimes teachers felt that the poor language ability of students was the cause of their failure to communicate in pair or group tasks. Usually teachers used English in the classroom for giving instructions, answering/explaining students’ questions, presenting new words and asking questions. About 52% of the trained and 38% of the non-trained teachers used English in class most of the time. In comparison with non-trained teachers, trained teachers performed better here. The most common teaching aid was the blackboard which was used well in 95% of the classes of trained and 72% of the classes of non-trained teachers. From class observation of both groups of teachers, some other findings that emerged were: teacher’s pronunciation, behavioural discrimination towards students, time management ,the level of professional skills, the approach towards poor-ability students, the manner of using the textbook and so on. From the non-trained teachers’ classes, we found that most of them followed the traditional teaching style in class rather than CLT. They did not have sufficient idea about the application of four skills. Some of them did not give any attention to listening or speaking activities. They only followed the instructions of each lesson but skipped some of the activities given in the book and did not explain or elicit any information. Students often failed to understand the teacher’s instructions and resorted to memorizing. In the case of trained teachers, we found they failed to apply CLT in class appropriately. Both teachers and students were found to be weak in English and that is why they were unable to use English communicatively. Teachers were not comfortable in teaching through the CLT method. Students were not familiar with the new way of teaching. Teachers who attempted to use CLT did not know how to make the class/topic interesting. 
They tried to use different types of classroom techniques, like pair or group work and chain drills, but failed to maintain time, monitor performance, complete activities, or present the techniques interestingly.

Students' Perceptions of BRAC Training
Students perceived that the trained teachers used some new techniques in their classrooms that had not been used previously, but that this was not benefiting them much. According to the students, the most significant change in teachers' recent behaviour was the way they spoke. Whereas teachers had not tried to speak in English in the past, they now spoke frequently in English and also encouraged students to speak in English. Other changes noticed by students were in the way the teachers used the blackboard to explain things. Students said that their English teacher encouraged them to read newspapers and to converse in English by sharing what they had learnt in the lesson with each other (Box 1).

Box 1: Students' Comparative Perceptions of Trained and Non-trained Teachers
Although students generally felt more comfortable with Bangla, some of them tried to use English. For example, some students said, "Although very little, we speak now, but before we didn't know anything." But the main problem that still existed for most students was their hesitation in communicating in English. Some said, "What will people say…that's why we do not speak in English." The students of the non-trained teachers, on the contrary, were deprived of the opportunity to develop communicative language skills, as most of the non-trained teachers did not try CLT. For example, some students said, "We don't know anything about group work, pair work. The teacher does not help us in practicing speaking English but he encourages us to practice English. Teachers use some very difficult English sometimes – this is hard to understand."

The students of the trained teachers said that their teachers generally used English more than Bangla, whereas the students of the non-trained teachers perceived that their teachers used more Bangla than English, sometimes no English at all. However, with very few exceptions, the students generally used Bangla while speaking in class. According to the students, trained teachers generally conducted pair work and group work, while non-trained teachers rarely did, and trained teachers did so more frequently. However, students of trained teachers asserted that in many cases teachers went very fast in class, which made the lesson difficult to understand. Students of non-trained teachers complained that there were English teachers who were not friendly and who used a harsh voice and even the stick for punishment. By contrast, students of trained teachers stated that their teachers were sympathetic while correcting mistakes and in some cases encouraged peer-correction. The students of both trained and non-trained teachers admitted that most of them did not understand English and felt comfortable if the instruction was in Bangla. Although there were some attempts at using English by the students of trained teachers, the students of non-trained teachers did not attempt to use any English.

Challenges Faced by Teachers
Teachers perceived (with some exceptions) the BRAC training programme and the materials as both relevant to and useful for their professional development. However, they found it difficult to apply their training.
Following are some of the challenges that the teachers were facing: a) the vocabulary stock was not adequate for either teachers or students; b) teachers were not proficient in speaking English and sometimes had to act out (mime) to make students understand; c) lack of realia (real-life materials); d) lack of teaching aids; e) students did not understand English and were not regular in attendance; f) students' hesitation and shyness; g) lack of an English-learning environment (listening, speaking, writing) in the classroom due to shortage of time; and h) seating arrangements not conducive to CLT. Some teachers were also worried about the non-cooperation of students sitting on the back benches. One teacher said, "I found problems in dialogue practice. Back-benchers do not participate."

It was not only the lack of physical facilities or the students' lack of interest; the attitude of other teachers was also an obstacle (Box 2). Trained teachers said that colleagues criticized this technique of teaching because it made the classroom noisy, which disturbed adjacent classes. The headteacher was not bothered about CLT and advised them to prepare more students for the exam. The supervision of trained teachers was also inadequate, as school supervisors themselves were not technically knowledgeable enough to give the teachers feedback in their respective fields.

Box 2: Teachers' Doubts about the Application of CLT
One common phenomenon was that although teachers thought they had benefited from the BRAC training, they doubted the proper application of CLT in the classroom. The following quotations reflect that perception:

"In my class, there are 135-140 students. So it would be too difficult to conduct pair work or chain drill. You know, there is a time constraint that does not let us take proper care of every student. I can't monitor students in this period. Again, students are too weak to understand English in the classroom. So teaching in English is not possible in class. Everything that I teach has to be said in Bangla." (A trained teacher from Pabna)

"Although this method and the textbook both focus on developing the language skills in English, it wouldn't help students do well in public examinations. More and more students are leaning towards coaching centres. As a result, effective application of CLT is not possible in the class." (A trained teacher from Pabna)

Teachers' beliefs were explored through the interviews, in which we probed their views on various aspects of CLT and the BRAC training in depth. In spite of some attitudinal changes towards a positive stance, teachers were generally found to be sceptical about the effective application of CLT in the classroom setting of rural schools. They believed that the environment in these schools was not congenial to CLT practice. They stated that colleagues criticized them when they applied the method, and they complained about the students' dependence on coaching centres.
They also identified some reasons for this impression: a) students' incompetence in understanding English properly; b) students' tendency to follow traditional methods and memorize the textbook content for the exams; c) students' lack of willingness to learn more than their syllabus; d) large class sizes, which made it difficult to conduct group/pair work and chain drills; e) students' fear of learning English; f) students' lack of class participation; and g) classroom and seating arrangements unsuitable for the new approach. In short, large class size, time constraints, and students' language incompetence and non-cooperation were regarded as the main problems in applying CLT in the English class.

The findings point to a mixed picture. Positive signs are apparent in a general improvement on some particular issues, but improvement did not take place at a regular pace. The effects of the BRAC training may be summed up in the following manner:

a. With some exceptions, there has been a general improvement in teachers' knowledge of and skills in the application of CLT.
b. Although trained teachers attempt more use of the CLT approach, there is little evidence of much difference in the existing classroom practices of trained and non-trained teachers.
c. Students perceive that some trained teachers use new techniques in their classrooms not previously used, but few students benefit from this.
d. In spite of some attitudinal changes, teachers do not believe that CLT can be effectively applied in the classroom settings of rural schools, implying a set of ingrained beliefs which influence teachers' attitudes and behaviour.

Based on the study findings and the discussion above, there is clearly a need to broaden the parameters of the current BRAC training programme in order to achieve the objectives of the BRAC training framework. We also emphasise the importance of recasting ideas within one's own frame of reference, in order to 'appropriate' ideas to suit the local culture. Within this perspective, a number of recommendations are offered below:

• Focusing on components that engage with trainees' beliefs and attitudes, to enable them to change their attitudes towards applying CLT. This may be done through introducing:
a. elements of 'reflection' (group and individual) – strategies for introducing reflective practices are found in abundance in the teacher education literature;
b. observation and analysis of actual classroom practices (real-class observation, transcripts of recorded lessons, videos of lessons, teachers' diaries, etc.), relating them to the proposed changes within a participatory ethos rather than a top-down approach.
• Avoiding the narrow "dress-rehearsal approach" (Widdowson 1987) of the BRAC programme, because conditions and contexts in classrooms differ from place to place. Instead, the BRAC training needs to encourage capacity building in trainees that can enable them to understand the actual ongoing purpose of the training and the fundamental principles of the CLT approach.
• Considering supervised teaching in actual classrooms, mentored teaching, and a practicum, in the light of the principles of effective teacher development.
• Introducing an element of guidance and counselling. This will provide some scope for listening to individual problems as well as problems in classroom teaching.
• Building a well-informed cadre of trainers whose own belief systems are compatible with the assumptions of the programme, e.g.
clearly understanding and believing in the outcomes of the BRAC training. This points to the necessity of increasing the number of professional full-time trainers, with less dependence on part-timers.
• Giving more attention to the improvement of the trainees' (and trainers') English language skills.
• Introducing effective incentive packages so that trainees are motivated and can concentrate on developing themselves. The innovation literature advocates that it is important for participants to have a stake in the innovation they are expected to adopt. In terms of incentives, teachers should be able to perceive some sort of 'reward' for changing their instructional behaviour.
• Introducing some form of formative assessment of the trainees, individually and in groups.
• Running periodic refresher courses, seen as a progression in professional development and run by competent trainers, in order to link past training, current practices, and future developments.
• Similarly, creating a supportive environment for the self-development of teachers who achieve low scores on pre-tests.
• Ensuring that only English subject teachers attend the BRAC programme, so as not to waste valuable time, money, and resources on non-English teachers.

REFERENCES
Howatt, A. P. R. (1984). A History of English Language Teaching. Oxford: Oxford University Press.
Hargreaves, A. and M. G. Fullan. (1992). Introduction. In A. Hargreaves and M. G. Fullan (eds.) Understanding Teacher Development. New York: Teachers College Press, Columbia University, pp. 1-19.
PACE (undated). A Brief Introduction on PACE Initiatives. Dhaka: Post-primary Basic and Continuing Education (PACE), BRAC.
Richards, J. C. and T. S. Rodgers. (2001). Approaches and Methods in Language Teaching. Cambridge: Cambridge University Press.
Widdowson, H. G. (1987). Aspects of syllabus design. In M. Tickoo (ed.) Syllabus Design: The State of the Art. Singapore: Regional English Language Centre.
The city of Chandigarh came first into my recognition in 1948 or 1949 as the whiff of a possible commission wafted via the Royal Institute of British Architects, but it remained without substance. The Punjab Government may have at that time been sending out feelers prior to meeting Albert Mayer, whom they commissioned to make a plan, with the brilliant young architect Matthew Nowicki. However, the sudden death of Nowicki in 1950 necessitated the selection of a new architect for Chandigarh.

When Prem Thapar, of the Indian Civil Service and the administrator of the project, with the chief engineer, P. L. Varma, called upon Jane Drew and myself at our office in the closing months of 1950, a complete plan existed for a city of 150,000 people, along with a detailed budget covering every ascertainable item, including thirteen grades of houses for government officials with the accommodation and the estimated cost set against each. There was also a generous infrastructure of social and educational services and provision for the supply of water, drainage, and electricity to every level of dwelling provided, so that an examination of the budget and the well-advanced Mayer plan demonstrated the clear intention of the government to construct a modern city on a site selected to serve the state at the highest level of design and execution and set a new standard for India.

The state of the Punjab, truncated by partition, still suffered from the appalling ravages of a bitterly fought war, with millions homeless and landless and refugee camps still in the process of organization. It had lost its beloved capital Lahore to Pakistan; its government was gathered loosely in Simla, a ramshackle affair built only for summer occupation; and it needed a capital city for every practical, political, and spiritual purpose. That it should want and be prepared to build a capital of the first order was the mark of a courage and resolution that never flagged in all our dealings with it.

At our first meeting with Thapar and Varma they asked for two architects to organize and supervise the architectural aspects of the Mayer plan. If Jane Drew and I had been able to drop every other obligation and accept the appointments, Corbusier would not have been approached. It is no easy decision to drop every other obligation and decamp to India for three years as we were asked to do. I would have continued to decline, had not our chief client, the Inter-Universities Council, urged us to accept and to delegate the bulk of our responsibilities, as we did, to Lindsey Drake and Denys Lasdun, who joined the partnership for the purpose. But Jane Drew, foremost in urging acceptance, felt bound for a few months to her share of the Festival of Britain program, and this put Thapar in the dilemma of having two jobs to offer with only one filled. There would be little difficulty, he said, in fitting Jane Drew in when they returned to India, but they were instructed to fill both posts and meant to obey their instructions.

At this juncture Corbusier was first mentioned. Reflecting on the immensity of the architectural program for a three-year contract, I thought the Capitol group of buildings would be a fitting commission for the great man. "What would be the effect," said Thapar, "of introducing Corbusier to the team?" To which I replied, "Honour and glory for you, and an unpredictable portion of misery for me.
But I think it a noble way out of the present difficulties.” So Jane Drew rang Corbusier and we all went to a meeting at his office, very dramatically arranged with a tape recorder that broke down under the pressure of high-minded resolutions. The conditions put forward by Thapar included the acceptance of the Mayer plan and the Project Budget as the working basis of the agreement, but in accepting them Corbusier insisted on the inclusion in the team of his cousin Pierre Jeanneret, from whom he had broken some time earlier. This was inconvenient for the Indians, and we tried to assure Corbusier that our loyalty to the project and to him was beyond doubt, but he persisted, and arrangements were concluded on that understanding. We returned to London. I reached Simla at the end of 1950 and put up at Clarke’s Hotel with Pierre Jeanneret. Both the hotel and the improvised office accommodation were primitive and cold; the Indian staff so far assembled was few in number and only partially trained. We were left much on our own except for a few meetings with the evasive but autocratic chief engineer. Jeanneret was cheerful but narrowly Parisian, with no aptitude for languages; as a consequence my French improved while all else deteriorated. We had some drawings by Nowicki, which were rather romantically based on Indian idioms. Jeanneret and I started working on housing types while we absorbed the conditions of the Project Budget. I studied the Mayer plan and found by projecting sections along major road lines that the Capitol buildings which he saw enlaced with water were rather like Lutyens’ Viceregal Palace at New Delhi, largely eclipsed by the profile of the approach road: they were obviously not well sited. I had doubts also as to the workability of its Radburn-like path system, which seemed to me to be out of scale with the enterprise; and the generally floppy form of its sector planning depressed me in the same way as that of Milton Keynes many years later. I was at this period taking the appointment on trust but ready to resign if it grew worse. We had laid it down as a condition of our acceptance that we would not work under engineers, as was the custom in the Indian Public Works Department. I was experiencing the onset of a trial of strength with Varma, neglect by Thapar, and a general feeling of lassitude in the organization, the reverse of my experiences in Africa. Not wanting to waste my efforts, I put in a resignation to Thapar that brought him hotfoot from his house to tell me that the project would fail without us and that he shared my problem with Varma. So I stayed. The arrival of Corbusier galvanized the situation. We moved down to the Rest House in the lovely village of Chandigarh on the road to Kalka, where the mountain railway starts for Simla. Corbusier, Varma, Jeanneret, myself, and intermittently Thapar were there; Albert Mayer was making his way to us from the south. Without waiting for Mayer to appear, Corbusier started on large sheets of paper to approach a plan by a method of rough and ready analysis familiar to me from the workings of the Congrès Internationaux d’Architecture Moderne (CIAM). First he outlined the main communications with the site on the map of India—air, railway, road (Fig. 93). 
Then he dealt with the site itself—its immediate background of low foothills rising to the sheer mountains of the Himalaya with the peaks beyond; its gentle plain declining at a fall of one in one hundred; its dry river beds to each side, with a smaller bed intermediate to the left; a diagonal road crossing the plan low down, and a loop of railway away on the right (Fig. 94). It was a difficult situation. My French was unequal to the occasion. Jeanneret was supernumerary, and Thapar only half aware of what was going forward. Corbusier held the crayon and was in his element. “Voilà la gare,” he said, “et voici la rue commerciale,” and he drew the first road on the new plan of Chandigarh (Fig. 95). “Voici la tête,” he went on, indicating with a smudge the higher ground to the left of Mayer’s location, the ill effects of which I had already pointed out to him. “Et voilà l’estomac, le cité-centre.” Then he delineated the massive sectors, measuring each half by three-quarters of a mile and filling out the extent of the plain between the river valleys, with extension to the south. The plan was well advanced by the time the anxious Albert Mayer joined the group. He must have had an unnerving journey, and he was too upset to make the most of his entry. I found him a high-minded decent man, a little sentimental in his approach, but good-humored; not in any way was he a match for the enigmatic but determined figure of the prophet. We sat around after lunch in a deadly silence broken by Jeanneret’s saying to Mayer, “Vous parlez français, monsieur?” To which Mayer responded, “Oui, musheer, je parle,” a polite but ill-fated rejoinder that cut him out of all discussion that followed. And so we continued, with minor and marginal suggestions from us and a steady flow of exposition from Corbusier, until the plan as we now know it was completed and never again departed from. I stuck out for allowing Mayer to expose his theories in one of the sectors, out of pure gentillesse for a displaced person, and I suggested some curvature in the east-west roads to avoid boredom and to mitigate the effects of low sunlight on car drivers. Aside from these considerations, the plan stood, and on my advice Mayer signed it as a participant and later stood by his decision when the new plan was under fire in a cabinet meeting. In 1950 Corbusier was offered the design of the Capitol buildings on a plan designed by Albert Mayer, and early in 1951 he had redesigned the whole capital plan so that he could turn his undivided attention to the design of a monumental group of buildings forming the culmination of his own plan. This he could do with a calm mind for, by the time he came to it, he had recognized the existence of a firm and able organization, headed by Prem Thapar, a noble-minded administrator of supreme skill and integrity; a chief engineer of great experience, a man who though evasive and autocratic in his handling of affairs was of elevated mind and indomitable purpose; Jane Drew and I combining energy, creativity, and leadership; and Pierre Jeanneret, not the happiest of collaborators, but a ceaseless worker in the good cause (Fig. 96). After a time, in addition to Varma, Indian assistants of skill and promise emerged and devoted themselves heart and soul to his work; he came to rely on them with confidence. 
Behind this organization stood the government of the state in full support, headed by Trivedi, a governor of some caliber; at Delhi was Jawaharlal Nehru, who valued Corbusier at his full weight and was prepared to pay him what he asked. A friendship grew between us and Corbusier that lasted, particularly for Jane Drew, until his death. He could turn from his exhausting labors to evenings of ranging talk, with the bottle circulating, in an atmosphere of complete and happy relaxation. Jane Drew gave him colored papers and paste, and with one after another drawing and collage he brought in gestures of gratitude before the evening’s talk. “Pour toi, Jane; et celui-là, c’est pas si bon, pour vous, Maxwell.” My relations with Corbusier were never intimate. I was never a disciple, as architects such as José Luis Sert were. The authoritative aspect of the Plan Voisin de Paris appalled me when I first saw it, and I preferred the classical clarity of Mies van der Rohe’s Tugendhat Haus to the early houses of Corbusier, which I later came to value above most of his later work. When Jane Drew arrived in Simla, Corbusier was in a huff about a remark I had made concerning a certain theatricality in the High Court design. We had shown Corbusier the Red Fort at Delhi and some of the stupendous Moghul ruins in the vicinity, and we had explained from our experience in West Africa the principle behind the achievement of shade temperature and the cooling effect of moving air under shaded conditions. “Un parasol en effet, hein?” And a parasol he made for the High Court, the greatest of all canopies with just the merest reminiscence of Moghul influence. At this time he was engrossed by visual effects of buildings in a big space. He had plans of the grand axis from the Louvre to the Arc de Triomphe reduced to appropriate scale and was continually testing his remembered impressions against the terrain upon which he was operating, as though seeking the ultimate possible, the furthest extension of grandeur comprehensible, at a single view, and this with buildings of asymmetrical disposition related only by the imaginary conversation they could maintain with each other across space. If one compares with this arrangement the nearly instant recognition that perfect symmetry provides even for monuments as distant as the King George V arch and the Viceregal Palace at New Delhi, one may perhaps realize the nature of the struggle that was consuming him at that time, resolved, to the distress of some of his best friends, at the outer, if not beyond the outer limits of the possible. From the level space between buildings he removed all roads by lowering them, but later allowed little hills to be formed with spoil, which must detract from the aimed-for impression. If in all this one could find cause for frustration, it must be set against the objectives, the first measure of the artist in all works of art. There was an episode that I have never been able successfully to explain, which concerns the distribution of population over the plan sectors. We had accepted as something unshakable and inevitable the hierarchic disposition of the population from rich to poor, downward from the Capitol, and we could with no great difficulty have distributed the total of 150,000 over the plan. 
But Corbusier with some secrecy worked feverishly on a sort of computerization, some system he had in his mind, that would present us with the mosaic law of the matter, and somewhere in this computation was the hint of a row of high-rise buildings low down in the plan. They never rose. Whether Thapar scotched them or not I never knew; I know only that the incomprehensible figures were not to my knowledge applied to the plan, which it was clear from the beginning was to be a poor state’s capital in two dimensions, with no two-grade intersections in our lifetime. Corbusier’s sector planning reinforced this idea, with its legally protected boundaries and its strongly internal planning that showed up so well in the first-developed and still most used Sector 22. The sectors with their contrasting bands of daily activity, the cross-threading bazaar streets and cycle paths, and the circumambulatory feeder road to the housing make a pattern that shows him at his most grandly logical, for if there were to be both the pressures and the resources of the British New Towns, the scheme would be more workable than it now is, the straight runs of motor road doubled in length, the cycle paths a reality with the aid of underpasses (Fig. 97). There was a moment when he contemplated a regulatory system of proportion for all the housing of the city (which had hitherto lain outside his control and was the work of Jeanneret and ourselves), together with schools, colleges, hospitals, health centers, local political buildings, and so on, enough in all conscience. It was no more than a gesture of omnipotence or, more charitably, the hope expressed of an overall harmony to be the work of several hands. He did not press it. More important was the loss of diversity and the small foreground scale of pavement commerce in the city center. I worked with Jane Drew on the shopping center of Sector 22 with its variety of multilevel shopping from closed stores to open booths; we had the direct collaboration of shopkeepers working to our models and giving us back something extra in the shape of connecting covered ways. Thus I was as anxious as Corbusier that the city center should preserve something of the intimacy, even the untidiness, of the typical Indian bazaar. We both made drawings showing spaces enclosed by blocks of buildings—shops, offices, and residential accommodation—partly filled by booths or stalls, or merely selling space covered by both permanent and temporary canopies. It was not an easy exercise, because the size and actual function of the center was difficult to estimate; its financial practicability was quite indeterminate. Sector 22 was humming as the center of Chandigarh life, but its area was small when compared with the city center lying up and beyond this first residential sector to be developed. I had left before it was begun in earnest, and I was taken aback by its stark brutality when I saw it years later. The scale was gargantuan but nearly devoid of the sort of surface marking or modeling by means of which Corbusier established scale, as with the High Court. It was devoid of intimate street level activity and treeless! What had happened in the interval? I do not know. There is grandeur in the great colonnaded blocks. I am not averse to size as an element in urban composition, but even along the all-purpose entry road I found blocks of unidentifiable blankness that verged on the vacantly forbidding, a form of excess to which I was entirely unable to respond. 
It was one of the (I fear) vanishing pleasures of New Delhi to come across Lutyens’ influence in the detailing of humble lengths of servants’ quarters or the like, and I had hoped to find this element in Chandigarh and sorely missed it. Corbusier said to me one day that he was interested only in art. I felt this in his persistent withdrawal from what might be called vulgar contact, the ordinariness that makes up the bulk of mankind and is both its strength and weakness. The loneliness of the great artist shut off by the mere weight of the concentrated effort of creation has been spoken of by Conrad and many another. With a writer such as Balzac or Dickens, contact was the material of the work, but with Corbusier this was not so. I imagine that he peopled his buildings, where indeed they gave the appearance of being peopled, by figments of his own creation, unendowed with normal human attributes; and that as he grew older and more withdrawn, these counted for less than the elemental forms reaching forward to ultimate ruination. I would warn those who hope, by pecking over the remains of the great—the diaries, letters, reported conversations, photographs, and so on—that they will be permitted to pierce to the heart of the mystery that makes men great. One has only to read the biographies, even the autobiographies, of the grandly creative to find in their contact with the world continuous frustrations, bitternesses, misunderstandings, and rejections, until death ends them—and not these alone, but as often as not, the meannesses, treacheries, and shabbinesses; the envies and vanities, especially the vanities, the defensive vanities that cloud the daily conduct of those otherwise rapt away from the world. It is a dangerous occupation, and few have succeeded in elevating it through some lucky gift of sympathy or from feelings of fellow suffering. A French critic once said that the beginning of all criticism was contained in the words Que c’est beau—how beautiful it is. Without this surely all criticism is vain that seeks to define the exactitude of pleasures that come and go and swell and fade. Coming to Chandigarh twenty years after I had labored in the field, and with the memory of such criticism in my mind, I went in and about it on a lovely December morning. It was unfinished, poorly maintained, vulgarized in parts, and with standards lowered in large extensions beyond the original plan. Yet I had to say, “Que c’est beau! How noble a thing this is!” I went round with that same engineer with whom I had fought and at whose renewed requests I had come on a sentimental journey to see old friends before we died. And he took me before everything else to see the lake that was in his mind from the beginning, yet had to wait the moment until the tide of recognition and success made it possible, and had to make it then to the limits and to more than the limits of what was possible. We walked together along the curving sweep of the embankment that was his, not Corbusier’s, an embankment he had made ten times wider than was strictly necessary, knowing that it would become the promenade for the city and therefore should be on a scale to match it. This lake is a contribution to the city that represents for me its soul. It is not less, but differently, the creation of Corbusier’s also, for in the association with men of great insight and purpose, works arise of a nature like their own. 
Since we were not entirely without either of these attributes, we felt the exhilaration and the deep polarization of effort that Corbusier brought to the enterprise, though he barely lifted his head from his work and was only faintly amused by demonstrating it. This is as I saw him. It seems utterly irrelevant to me how far a man, with the objectives that he had constantly before him, fell below them. The last time we saw him in his apartment in Paris—the place growing old and dusty around him, a sycamore tree bursting the terrace flower box it had seeded itself into—he was all agog with the opportunities enamels presented as an extension of painting. Glasses on his forehead, he groped about in the accumulation of the years to show us his latest experiments in the medium. Gone were the suspicions that clouded the first and fateful meeting many years ago; a simple single-hearted man was sharing his new-found enthusiasm with old friends.
Smart homes and the Internet of Things by Chris Woodford. Last updated: June 16, 2020. Back in 1923, brilliant Swiss-born architect Le Corbusier (1887–1965) described a house as "a machine for living in"—and slowly, during the 20th century, that metaphor turned into reality. First, the arrival of convenient electric power started to strip away the drudgery from all kinds of domestic chores, including washing clothes and vacuuming the floor. Then, as electronics became more affordable in the mid-20th century, appliances started to control themselves in a very limited way, using built-in sensors and programmers. But it's only now, in the 21st century, that the vision of the fully automated, smart home is actually being realized. Thanks to the Internet, it's easy to set up virtually any electric appliance in your home so you can control it from a Web browser anywhere in the world. And, before much longer, all kinds of net-connected machines will be talking to one another, running much more of our lives automatically through what's known as the Internet of Things. Like the idea of living in a smart home? Or an automated future that takes care of itself? Let's take a closer look at how it might work! Animation: Turning the lights on when you're away from home is the simplest application of smart home technology. More sophisticated applications include various kinds of remote monitoring, such as keeping an eye on elderly or vulnerable residents inside. What is a smart home? A smart home is one in which the various electric and electronic appliances are wired up to a central computer control system so they can either be switched on and off at certain times (for example, heating can be set to come on automatically at 6:00AM on winter mornings) or if certain events happen (lights can be set to come on only when a photoelectric sensor detects that it's dark). Photo: The simplest kind of home automation. Plug this time switch into your electrical outlet and it will switch any appliance on and off up to four times a day. This one is digital and uses a battery-powered clock. Others have large, slowly rotating wheels with dozens of tiny switches you press in or out to switch appliances on and off as many times as you like. Inside, switches like this use a simple relay that allows a small switching current from the clock circuit to switch the much bigger power circuit on and off. Most homes already have a certain amount of "smartness" because many appliances already contain built-in sensors or electronic controllers. Virtually all modern washing machines have programmers that make them follow a distinct series of washes, rinses, and spins depending on how you set their various dials and knobs when you first switch on. If you have a natural-gas-powered central heating system, most likely you also have a thermostat on the wall that switches it on and off according to the room temperature, or an electronic programmer that activates it at certain times of day whether or not you're in the house. Maybe you're really hi-tech and you have a robotic vacuum cleaner that constantly crawls around your floors sweeping the dust? All these things are examples of home automation, but they're not really what we mean by a smart home. That concept takes things a step further by introducing centralized control. In the most advanced form of smart home, there's a computer that does what you normally do yourself: it constantly monitors the state of the home and switches appliances on and off accordingly. So, for example, it monitors light levels coming through the windows and automatically raises and lowers blinds or switches the lights on at dusk. Or it detects movements across the floor and responds appropriately: if it knows you're home, it switches light and music on in different rooms as you walk between them; if it knows you're out, it sounds an alarm.
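To make that monitor-and-respond idea concrete, here is a minimal sketch in Python of the kind of rule-following loop such a central controller might run. Everything in it is invented for illustration, not any real product's API: the Lamp and Alarm classes, the dusk threshold, and the sensor values passed in.

```python
# A minimal sketch of centralized, rule-based control. All names (Lamp,
# Alarm, DUSK_THRESHOLD) and sensor values are invented for illustration.

DUSK_THRESHOLD = 120  # light level, arbitrary units, below which it is "dark"

class Lamp:
    def __init__(self, name):
        self.name = name
        self.on = False

    def switch(self, on):
        self.on = on
        print(f"{self.name}: {'on' if on else 'off'}")

class Alarm:
    def sound(self, room):
        print(f"ALARM: movement in {room} while the house is empty")

def control_cycle(light_level, occupied, motion_rooms, lamps, alarm):
    """One pass of the monitoring loop described above."""
    # Rule 1: switch the porch light on at dusk.
    if light_level < DUSK_THRESHOLD:
        lamps["porch"].switch(True)
    # Rule 2: follow movement room to room if someone is home;
    # treat the same movement as an intruder if the house is empty.
    for room in motion_rooms:
        if occupied:
            lamps[room].switch(True)
        else:
            alarm.sound(room)

lamps = {"porch": Lamp("porch lamp"), "hall": Lamp("hall lamp")}
control_cycle(light_level=80, occupied=False, motion_rooms=["hall"],
              lamps=lamps, alarm=Alarm())
```

The point is simply that the same sensor event (movement in the hall) produces a different response depending on the state the controller believes the house is in.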
How do smart homes work? Assuming you're not (yet) in the Bill Gates league of having a multimillion dollar smart home built from the ground up, you'll probably be more interested in adding a bit of automation to your existing appliances with as little fuss as possible. Modestly smart homes like this range in complexity from basic systems that use a few plug-in modules and household electricity wiring to sophisticated wireless systems you can program over the Internet. Here are the three most common flavors: Plug-in X-10 modules Photo: An X-10 module, made by Powerhouse, used for controlling household appliances. You can see the two dials used for setting the unit code (top) and house code (bottom). Photo by Phylevn published on Flickr in 2009 under a Creative Commons Licence. Developed in 1975, the oldest and best-known smart home automation system is called X-10 (sometimes written "X10") and uses your ordinary household electricity wiring to switch up to 256 appliances on and off with no need for any extra cables to be fitted. You plug each appliance you want to automate into a small control unit (usually called a module) and plug that into an ordinary electrical power outlet. Using a small screwdriver, you then adjust two dials on each module. One dial is what's called the house code and you set this to be a letter from A through P. You can use the house code to link appliances together (for example, so all the lamps on the first floor of your home can be controlled as a group). The other dial is set so each individual appliance has a unique identifier known as its unit code, which is a number from 1 to 16. Next, you plug a central controller unit into another electrical socket and program it to switch the various appliances on and off (identifying them through their codes) whenever you want. How does it work? The central controller sends regular switching signals through the ordinary household wiring, effectively treating it as a kind of computer network. Because these signals are transmitted at a much higher frequency than ordinary AC power (which cycles at 50–60Hz), they don't interfere with it in any way. Each signal contains a code identifying the unit it relates to (a table lamp in your living room, perhaps, or a radio in your bedroom) and an instruction such as turn on, turn off, or (for lamps) brighten, or dim. Although all the control units listen out for and receive all the signals, a particular signal affects only the appliance (or appliances) with the correct code. Apart from appliances that receive signals, you can also plug in sensors such as motion detectors, thermostats, and so on, so the system will respond automatically to changes in daylight, temperature, intruders, or whatever else you consider important. With most systems, you can also switch appliances on and off with a handheld remote control (similar to a TV remote). The remotes either send signals directly to each module using radio wave (RF) signals or communicate with the central controller, which relays the signals for them. X-10 has become an international standard for controlling appliances, but it's not the only system that works this way. 
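The addressing scheme lends itself to a short sketch. The Python below models only the broadcast-and-filter idea (every module hears every signal but acts on its own house/unit code); it is not the wire protocol itself, which encodes bits as short high-frequency bursts riding on the mains, and all class and variable names here are invented.

```python
# A toy model of X-10 addressing (house code A-P, unit code 1-16, commands
# broadcast to everything on the wiring). Not the real signaling protocol.

HOUSE_CODES = "ABCDEFGHIJKLMNOP"   # 16 house codes
UNIT_CODES = range(1, 17)          # 16 unit codes -> 256 addresses in all

class Module:
    def __init__(self, house, unit, appliance):
        assert house in HOUSE_CODES and unit in UNIT_CODES
        self.house, self.unit, self.appliance = house, unit, appliance
        self.on = False

    def receive(self, house, unit, command):
        # Every module hears every signal but acts only on its own address.
        if (house, unit) == (self.house, self.unit):
            self.on = (command == "ON")
            print(f"{self.appliance} ({house}{unit}): {command}")

class Controller:
    def __init__(self):
        self.modules = []          # everything plugged into the house wiring

    def send(self, house, unit, command):
        for m in self.modules:     # the signal reaches all modules
            m.receive(house, unit, command)

ctrl = Controller()
ctrl.modules += [Module("A", 1, "living-room lamp"),
                 Module("A", 2, "bedroom radio")]
ctrl.send("A", 1, "ON")    # only the living-room lamp responds
ctrl.send("A", 2, "OFF")
```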
Computer-controlled X-10 system If you're just automating a few security lights, a basic X-10 system with a few modules and a single controller should be more than enough for your needs. But if you want to run a more sophisticated setup, with many different appliances coming on and off in all kinds of different ways, you might want to use your home computer as the controller instead. That's easy too! You buy an X-10 home computer interface kit comprising a module (which plugs into a power outlet like any other module), an interface cable to connect the module to your computer (using either a standard serial or USB port), and some software. Typical software shows a graphical representation of all your appliances and lets you set on/off patterns for a day, a week, or even longer. You can also create your own macros so groups of appliances switch on and off in a certain sequence at a certain time each day. There's X-10 software for both Windows and Linux systems. Photo: You can use a wireless router to control an X-10 system remotely over the Internet, but you'll need to set up an IP address so you can access your router and computer securely from elsewhere. Dynamic DNS and Port Forward are very useful if you're going to do this kind of thing. Wireless Internet system Security is one of the biggest reasons why many people are interested in smart homes. If you're away at work or on holiday, making your home seem lived in is a good way to deter intruders. A basic X-10 system can turn the lights and the TV on and off at unpredictable times, but if you really want to push the boat out on security, a wireless, Net-connected system is much better. Effectively, it's a computer-controlled X-10 system with an interface you can access over the Web. With a system like this, you can hook up webcams to watch your home (or your pets), switch appliances on and off in real time, or even reprogram the whole system. Harmony Home Automation is an example of a system that works like this. DIY smart homes Lots of people like simple, off-the-shelf, plug-and-play systems like X-10: buy it, take it home, plug it in, and off you go. But plenty more of us are hobbyists, hackers, and geeks for whom the very challenge of doing something is at least as important—sometimes more so—than the thing we're actually trying to do. If you're one of these people, your route to a smart home is more likely to be through the hacker, maker, DIY community, maybe using something like an Arduino microcontroller to link your computer to appliances around your home. There are quite a few projects of this kind on websites like Instructables, and I've listed them in the "Find out more" section below.
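For the DIY-minded, the pattern-and-macro idea behind the computer-controlled setup described above can be sketched in a few lines of Python using only the standard library. The device codes, the send() stub, and the short delays (standing in for clock times like 6pm and 11pm) are all assumptions for illustration.

```python
# A sketch of schedule-and-macro logic, roughly what the home-computer
# software described above provides. send() is a stub: a real setup would
# hand the command to an X-10 (or similar) interface module instead.
import sched
import time

def send(house, unit, command):
    print(f"{time.strftime('%H:%M:%S')}  {house}{unit} -> {command}")

# A macro is just a named sequence of commands sent together.
MACROS = {
    "evening": [("A", 1, "ON"), ("A", 2, "ON"), ("B", 1, "DIM")],
    "bedtime": [("A", 1, "OFF"), ("A", 2, "OFF"), ("B", 1, "OFF")],
}

def run_macro(name):
    for house, unit, command in MACROS[name]:
        send(house, unit, command)

scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(1, 1, run_macro, ("evening",))  # 1 s stands in for 6 pm
scheduler.enter(2, 1, run_macro, ("bedtime",))  # 2 s stands in for 11 pm
scheduler.run()
```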
But do you really need a smart home? You might think the idea of a smart home is frivolous and silly. Isn't it lazy and indulgent to have a machine switching the lights on and off for you when you can do it perfectly easily yourself? Bear in mind, though, that many elderly and disabled people, and those with special needs, struggle with simple household tasks. Home automation could make all the difference between them being able to live happily and independently in their own home or having to move into expensive sheltered accommodation. Artwork: Elderly people ahead. Crown copyright traffic sign from the UK Traffic Signs Images Database published under the Open Government Licence. As the population ages, governments and medical charities are looking at home automation with increasing interest: why not use computers, robots, and other technologies to provide the support that vulnerable people need to keep them happy, healthy, and independent? For example, people with dementia can have their homes fitted with automated sensors that check whether cookers have been left on or taps have been left to overflow. Elderly people prone to falling can have their homes fitted with lighting activated by motion sensors, so that if they get up in the middle of the night they're not stumbling around dangerously in the dark. Blind people can finally buy ordinary household appliances and use one simple computer controller, programmed to suit their personal needs, to manage them all. If you're elderly or disabled, home automation systems like this can make all the difference to your quality of life, but they bring important benefits for the rest of us as well. Most obviously, they improve home security, comfort, and convenience. More importantly, if they incorporate energy monitors, such as thermostats, or sensors that cut the lights to unoccupied rooms, they can help you reduce household energy bills; automated systems such as Bye Bye Standby, which cut the power to appliances when they're not being used, can dramatically reduce the energy wasted by appliances such as washing machines and TVs. Maybe you're still not convinced—and maybe you're right. Do you really need things like this? Do you need to buy even more appliances just to control the ones you already have? Isn't it just as easy to get into the habit of switching things off yourself? Gadgets that kill your TV's standby mode sound cool, but how hard is it to pull out the plug? What about switching the TV off altogether and reading a book? Or putting your games console away in the cupboard and getting into the habit of taking walks in the country instead? And instead of going to great lengths to wire up your house for when you're away on vacation, how about befriending the neighbors and asking them to look out for you instead? For many of us, a house really is a machine for living in—and if that's the way you like living, it's just fine. But it's important to remember that there are plenty of alternatives to living that way as well. If small is beautiful and simple is best, the smartest home might be one that has no gadgets at all! The Internet of Things Artwork: The Internet of Things could see many billions (or even trillions) of appliances and inanimate objects all connected together. One of the things that makes people smart—smarter than all the other creatures who creep, flap, hoof, and slither round the planet—is our ability to communicate with one another. We can talk to other people, listen to them, and collaborate to achieve very complicated goals, from finding cures for cancer to putting astronauts on the Moon. Even before the invention of the Internet, people were intricately networked, right round the world; famously, according to sociological theory, there are only six degrees of separation (six links) necessary to connect any one person on the planet with any other. Now what if gadgets and machines could talk to each other the same way? What if an accelerometer embedded in a cardigan could automatically detect when an old person fell down the stairs and telephone an ambulance? 
What if all the homes in the United States had smart power meters that could signal energy consumption to utility companies in real-time? Suppose car engines could monitor their own mechanical efficiency, and, if it fell below a certain level, dial into a garage computer and be remotely tweaked back to some optimum level, all without leaving our drives? What if highway control systems could measure and monitor cars streaming down different routes at different times of day and automatically re-route traffic round jams and snarl-ups? These things might sound fanciful, but they'd all become possible if the machines in our homes, offices, and transportation systems could communicate with one another automatically—if, in other words, there were a giant network of machines: an Internet of things. What is the Internet of Things? People have been getting excited about this idea since it was originally suggested in 1999 by technology entrepreneur Kevin Ashton, then working in brand marketing at Procter & Gamble. He'd been researching electronic sensors and RFID tags (wireless printed circuits that allow objects to identify themselves automatically to computer systems; they're used in library self-checkouts) and, in a moment of insight, wondered what would happen if all kinds of everyday objects and machines could communicate through a standard computer network. Ashton realized his Internet of Things was a yellow-brick road to better efficiency and less waste for all kinds of businesses. In popular news articles, the Internet of Things is often explained by introducing a well-known but frivolous and now rather hackneyed example. Suppose your refrigerator could use RFID tags to detect what products it contained and how old they were. If it were linked to the Internet, it could automatically reorder new supplies whenever it needed to. It sounds harmless enough, but the infamous Internet fridge has actually become something of a distraction from much more valuable applications: most of us are capable of keeping tabs on our sour milk and moldy cheese, the argument goes, so what possible use could there be for an Internet of Things? But suppose similar technology were being used to monitor elderly or disabled people so they could continue to live safely, with independence and dignity, in their own homes? It's easy to build a home that uses motion sensors to monitor when someone is regularly walking around (intruder alarms have been using this technology for years), and not much harder to monitor that data remotely. That's a much more persuasive example of how the Internet of Things could prove really helpful to a society with a rapidly aging population. Although people sometimes talk about the Internet of Things as though it's merely an extension of smart home technology, it's actually a much bigger and more general idea. Imagine our system for monitoring the elderly transplanted to a hospital and scaled up into a kind of e-care, in which noncritical patients are routinely monitored not by nurses' observations but by remotely gathered electronic sensors, communicating their measurements over a network. Or, to take another example, what about automatically monitoring your home while you're on holiday using sensors and webcams? If it works in a house, it works anywhere: for checking and automatically restocking shelves in a supermarket, for remotely monitoring the crumbling concrete on a highway bridge, or in a hundred other places.
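As a sketch of what that elderly-monitoring idea might look like in code, suppose motion-sensor timestamps already arrive over a network from a relative's home. A simple watchdog can then flag unusually long stretches of inactivity. The eight-hour threshold, the alert() stub, and the sample timestamps below are all invented for illustration.

```python
# A sketch of remote activity monitoring: raise an alert when a home's
# motion sensor has been quiet for too long. Threshold and alert() are
# assumptions; a real system would notify a carer by phone or text.
from datetime import datetime, timedelta

QUIET_LIMIT = timedelta(hours=8)   # longest plausible gap between movements

def alert(message):
    print("ALERT:", message)       # stub: in practice, contact a carer

def check_activity(motion_events, now):
    """motion_events: timestamps reported over the network by the sensor."""
    if not motion_events:
        alert("no motion has ever been reported")
        return
    gap = now - max(motion_events)
    if gap > QUIET_LIMIT:
        alert(f"no movement for {gap}; someone should check in")

events = [datetime(2020, 6, 16, 7, 30), datetime(2020, 6, 16, 9, 15)]
check_activity(events, now=datetime(2020, 6, 16, 21, 0))  # 11h45m of quiet
```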
How does it work? Five basic things are needed to make the Internet of Things work. 1. The thing First, there's the "thing" itself—which could be anything from a person or animal to a robot or computer; champions of the technology have even speculated that one day the Internet of Things could extend to things as small as bits of dust. Generally speaking, the "thing" is something we want to track, measure, or monitor. It could be your own body, a pet, an elderly relative, a home, an office block, or pretty much anything else you can imagine. 2. The identifier Photo: RFID tags, like this one concealed in a price label on a pair of shoes, allow objects to identify themselves to the Internet of Things. If we want to be able to connect things, monitor them, or measure them, we need to be able to identify them and tell them apart. It's easy enough with people: we all have names, faces, and other unique identifiers. It's also relatively easy with products we buy from stores. Since the 1970s, most of them have carried unique numbers called Universal Product Codes (UPC), printed on their packs using black-and-white zebra patterns—barcodes, in other words. The trouble with barcodes is that someone has to scan them and they can "store" only a very small amount of information (just a few digits). A better technology, RFID, allows objects to identify themselves to a network automatically using radio waves, with little or no human intervention. It can also transmit much more information. 3. The sensors If an object simply identifies itself to a network, that doesn't necessarily tell us very much, other than where it is at a certain time. If the object has built-in sensors, we can collect much more useful information. So automatic sensors that can routinely transmit automatic measurements are another key part of the Internet of Things. Any type of sensor could be wired up this way, from electronic thermometers and thermocouples to strain gauges and reed switches. 4. The network It makes sense for things to exist and communicate on a network the same way that computers exist and talk to one another over the Internet—using a standard agreed communication method called the Internet Protocol (IP). IP is based on the idea that everything has a unique address (an IP address) and exchanges data in little bits called packets. If things communicate using IP, or use something like WiFi to talk to an Internet-connected router, it opens up the possibility of controlling them from a Web browser anywhere in the world. That's why we're now seeing home security and monitoring systems that allow you to do things like turning your central heating on and off with a smartphone app. 5. The data analyzer Once we're collecting masses of data, from hundreds, thousands, millions, or even billions of things, analyzing it could find patterns that help us work, move, and live much more smartly—at least in theory. Data mining the information we gather from people or car movements and optimizing our transportation systems could help us reduce travel times or congestion, for example, with major benefits for people's quality of life and the environment. Cloud computing systems (the idea of using powerful computer services supplied over the Internet) are likely to play a very big part in the Internet of Things, not least because the amount of data collected from so many things, so regularly, is likely to be enormous.
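Here is a minimal, self-contained Python sketch tying the five ingredients together on one machine: a "thing" with a unique identifier takes simulated sensor readings, ships them over the network as small packets of data, and an analyzer summarizes them. The JSON-over-UDP framing, the thing's ID, and the fridge-temperature sensor are assumptions for illustration, not any IoT standard.

```python
# Thing + identifier + sensor + network + analyzer, all simulated locally.
import json
import random
import socket
import statistics

THING_ID = "fridge-0042"                       # the identifier (RFID-like)

def read_temperature():                        # the sensor (simulated)
    return round(random.uniform(2.0, 8.0), 1)

def packet():                                  # a reading framed for the network
    return json.dumps({"id": THING_ID, "temp_c": read_temperature()}).encode()

# The network: send readings as UDP packets to an "analyzer" socket.
# Loopback UDP is reliable enough for a demo; real systems need more care.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                      # the OS picks a free port
addr = rx.getsockname()
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(5):
    tx.sendto(packet(), addr)

# The data analyzer: collect the readings and look for a pattern.
readings = [json.loads(rx.recv(1024))["temp_c"] for _ in range(5)]
print(f"{THING_ID}: mean {statistics.mean(readings):.1f} C over 5 readings")
```

At scale the analyzer would live in the cloud rather than on the same machine, which is exactly why the article expects cloud computing to carry much of the load.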
Who's using it already? Photo: Smartphone apps are likely to be one of the ways people interact with the Internet of Things. Above: Hive's app lets you control your heating using your phone, wherever in the world you find yourself. Below: Efergy's energy monitoring app keeps tabs on your home energy consumption. You don't have to look too far to see the Internet of Things in action. Libraries were early adopters, embedding RFID chips in book covers so that people could borrow and return items themselves using self-checkout machines. That gave instant stock-control, better security, and (in theory) the possibility of freeing up librarians to spend more time helping people (in practice, many libraries simply have fewer staff now). Tracking your home-delivery purchases over the Internet is another very basic example: if every parcel is barcoded and scanned at every point of its journey from warehouse to customer, with the scanners all wired to a central database, it's easy to work out where anything is at any time. Much more interesting examples are also starting to emerge. Hive, a home-heating system launched by British Gas, uses a wireless thermostat that communicates with your home Internet router—making it possible to adjust your heating or hot water using a smartphone app or web browser; the Nest Learning Thermostat, a rival home thermostat system, is more sophisticated but can be controlled by an app in a similar way. Piper, a home management and security system, goes even further: it connects a whole raft of sensors and alarms to a web interface so people can monitor and manage their homes when they're at work or on vacation. Even the infamous Internet fridge is starting to arrive—albeit in rather slow motion. Between 2014 and 2019, Amazon tested a system called Dash, featuring a handheld scanner that you could swipe over products to reorder things when supplies got low. A related idea was to stick simple Internet-connected "Dash buttons" around your home that you could use to reorder things with a single click. In one way or another, all the big digital technology companies are exploring variations on the Internet of Things. Apple has HomeKit (which turns iPods and iPhones into smart home controllers) and HealthKit (which lets you monitor your health and fitness and, if you wish, share the data with your doctor or hospital through a smartphone app). Google has Home and Fit, which lets people monitor and analyze exercise data collected from wearable sensors and trackers developed by a whole collection of partner companies. Samsung, a leading maker of both smartphones and home appliances, sees a great opportunity in linking the two in a system called the SmartThings Hub. Microsoft is also believed to be working on smart home systems linked to its Kinect motion tracker and Xbox gaming system. And Amazon has Alexa. Good points and bad points It's easy enough to see benefits from a world in which we connect, monitor, and analyze things much more intelligently. The natural world manages perfectly well without top-down organization, coordination, and control, but our human-dominated planet, packed with over 7 billion people, plagued with problems like poverty, disease, and looming environmental challenges such as climate change, probably can't afford the luxury of hapless, chaotic self-organization for much longer. The benefits of tracking and organizing things seem overwhelming to some people; even so, critics point out equally clear risks of monitoring people and things so much more closely. Do we all want our cars to be tracked at all times? 
Do we want grocery stores to know even more about what we're eating than they do already? Do we want our homes packed with sensors, keeping tabs on us at all times? There are all kinds of privacy, security, and ethical issues to consider before we get anywhere near the technological difficulties of building something so all-encompassing as an Internet of Things. Photo: Privacy problems ahead? Will an Internet of Things designed for tracking and tracing things turn into a perfect tool for spying on people? Given that much of the technology exists already, you might think building an Internet of Things is really quite a simple task, but putting everything together is likely to prove much more complex. One problem is that the whole concept has been hyped as a massive commercial opportunity, so lots of different companies are rushing to develop and market competing technologies. That raises the immediate difficulty of getting rival systems to talk to one another. If I buy a smart home-heating system from one utility company, will I be able to control it using another company's smartphone app if I decide to switch utilities in a couple of years' time? If I buy myself an Amazon product scanner, will I only ever be able to order products from Amazon? Or will I have to order a different scanner for every different company I buy from? While companies such as Amazon and Apple are notorious for taking a "closed" (or "walled-garden") approach to their products and services (for example, you can only read Kindle ebooks, sold by Amazon, on an Amazon Kindle reader), rivals such as Google and Samsung are notable champions of "open" standards. Whether closed, open, or mixed systems prevail, there's likely to be a great deal of consumer confusion about what works with what, and there's a real risk that the Internet of Things fragments, in practice, into many highly compartmentalized systems—many Internets of Things—that have little or nothing in common. That's not so surprising when the Internet of Things is so broadly defined that the whole idea verges on the meaningless. A recent British government briefing describes it as an "ecosystem" that links anyone, any business or service, through any path or network, to anything, anytime, anywhere—in other words, defines it so broadly that it includes absolutely everything. Is that a helpful idea? Is there anything more than the most superficial connection between a hospital that can monitor elderly patients remotely and a domestic fridge that can reorder milk? Does it make any sense at all to link such disparate ideas together, if all we're really saying is that everything should be able to interoperate by relying on common systems and standards as much as possible? To put it another way, would your hospital ever want or need to communicate with your fridge? Although often hyped as a means of doing things more efficiently and saving time and money, there's no guarantee at all that an Internet of Things will deliver cost, energy, or efficiency savings. Does the ability to control your home heating from work make it more or less likely that you will save energy? Will you simply shuffle energy around and use it at a different time? Why can't you leave the job to an intelligent electronic thermostat (a perfectly reliable and efficient piece of technology we've all been using for decades)? Who says you can do it better from your smartphone than a computerized programmer can do it from inside your home? 
To use a different example, it's absolutely fascinating to track parcels all the way from the warehouse to your doorstep—but do you really need to know anything more than the date when they'll finally arrive? Every extra bit of computer power we use managing, monitoring, and generally fiddling about with the Internet of Things is extra energy for the world to consume. Cloud computing powers the Internet of Things—and is already one of the world's biggest and fastest-growing forms of energy consumption. There's a very real risk that, far from helping us reduce resources and use energy more wisely, the Internet of Things will simply add another unnecessary layer of micromanagement on top of what we do already—increasing the world's energy consumption overall. It's very telling that data from American homes reveals steadily growing energy consumption despite significant improvements in energy efficiency and massive reductions in the energy we need for basic things like home heating. Smart home technology has been widely available for decades but, so far, has pretty much failed to capture people's imagination or take off in a really big way. Will rebranding it—breathlessly hyping it as the "Internet of Things"—make any difference? Home electrical energy monitors have been around for years, for example, and seem to offer the very compelling benefit of saving money, but they're still relatively underused. Smart homes aside, there are very compelling reasons for businesses and public services to invest in Internet of Things technology—especially if they can demonstrate real customer benefits, cost or energy savings, or other good reasons for doing so. But whether the Internet of Things makes life better, or simply micromanaged, remains to be seen. Libraries and supermarkets are perfect examples: they use more technology and employ fewer people than ever before, but do they serve us better, and do we like them more or less than we did before? Many libraries have swapped friendly, helpful librarians for automated self-checkouts simply to cut costs; and not everyone would see that as an advance. Will the Internet of Things revolutionize our homes, offices, and transportation systems, making everything better organized and more cost-effective? Will the Internet help us control things more effectively—or simply turn people into "things" that can be connected, analyzed, and monitored? Find out more These two general articles offer alternative, reasonably non-technical overviews: - The Internet of Things and the Explosion of Interconnectivity by Stephen Ornes, PNAS, Vol. 113, No. 40, October 2016, pp. 11059–11060. - The Internet of Things by Neil Gershenfeld, Raffi Krikorian, and Danny Cohen, Scientific American, Vol. 291, No. 4, October 2004, pp. 76–81.
A BRIEF HISTORY OF THE CITY OF DOVER POLICE DEPARTMENT The City of Dover Police Department was established in 1925 when Mr. J. Wallace Woodford, the President of the City Council, appointed Mr. I. Lane as Dover’s first Chief of Police at a salary of $25.00 per week. In 1929, a charter change resulted in Council President Woodford being selected as Dover’s first Mayor. Later in 1929, Mr. James Selvy was appointed the second Chief of Police following the resignation of Chief Lane. Chief Selvy resigned and was succeeded by Chief Maurice B. Farr, who served as Chief of Police until 1948. By 1936, the Dover Police Department boasted seven full-time sworn officers, including Chief Farr, a Lieutenant, and five Patrolmen. In 1948, Clifford Artis was appointed as the fourth Chief of Police but resigned in 1949 because of ill health. On May 16, 1949, Mayor William J. Storey recruited James E. Turner Sr., a Major with the Delaware State Police, to serve as the next Chief of Police. Chief Turner was asked to reorganize the police department. At that time, the police department was located in two rooms in the basement of City Hall. These same two rooms had housed the police department since 1934. On July 1, 1949, the City of Dover Police Department was relocated to 12 King Street, which had been the private residence of the late Dr. Cecil de J. Harbordt. The complement of personnel at that time consisted of the Chief of Police, two Lieutenants, one Sergeant, and eight Patrolmen. In 1950, a new radio system was installed, replacing the old system, which had been operated via remote control from the basement in City Hall. Chief Turner did reorganize the police department. Some of the initial reorganization moves initiated by Chief Turner were: a complete new records system; fingerprinting and photographing of all criminal arrests; establishment of in-service training schools with instructors including staff from the FBI; and utilization of the Delaware State Police and Wilmington Bureau of Police training schools for all Dover Police recruits. Another innovation was the establishment of a Criminal Investigation Unit and Identification Division. In 1956, the Dover Police Department was relocated to North New and William Streets, where it remained until 1967. Personnel in 1956 numbered eighteen sworn officers and eight civilian employees. The year 1965 marked the beginning of assigning personnel to out-of-state professional training, including Northwestern University, the University of Maryland, and the FBI National Academy. In November 1967, the police department moved into a modern $400,000 facility located at 400 S. Queen Street, where it is located today. At the end of November 1967, Chief Turner retired after serving the citizens of Dover for eighteen and one-half years. Chief William L. Spence Jr. was appointed Chief of Police to replace Chief Turner. Chief Spence became the sixth Chief of Police for the City of Dover Police Department. In 1968, there were 24 sworn officers, 5 civilians, and a budget of $42,154.00. The Annual Report to Council for the year 1968 reported a total of 3,985 calls for service, 1,237 traffic arrests, 1,206 criminal arrests, 735 traffic accidents, and 2,744 city ordinance summonses issued. In 1969, there were approximately 16,000 residents in the City of Dover’s nine-square-mile area, which was protected by 38 officers and 7 civilians. During Chief Spence’s tenure he hired the first female officer for the City of Dover Police Department. 
Chief Spence is also credited with replacing the round shoulder patch, which civilian employees of the City of Dover still wear to this day, with the current shoulder patch worn by the sworn officers of the Dover Police Department. Chief Spence retired in 1979. Chief Joe A. Klenoski succeeded Chief Spence. On November 19, 1982, the City of Dover Police Department building was rededicated and named the James E. Turner Sr. building. In 1988, there were 56 officers and 18 civilians employed by the police department. Chief Klenoski retired in 1988. Chief James L. Hutchison was appointed as Chief of Police in May of 1988. During his tenure, Chief Hutchison concentrated on creating a proactive police department focused on Community Policing. Chief Hutchison added bicycle and K-9 patrols. Personnel were increased from 56 sworn officers to 80 sworn officers and 19 civilians. The department began to move toward national accreditation. Recognizing the need for expansion, Chief Hutchison initiated steps toward a referendum that was unfortunately defeated by the citizens of Dover. In 1992, Chief Hutchison retired. In 2019, the Dover Police Department’s Public Assembly room was re-named the “Chief James L. Hutchison Public Assembly Room” in honor of Chief Hutchison (who also served as the Mayor of Dover). In December of 1992, Chief J. Richard Smith was appointed to succeed Chief Hutchison. Chief Smith continued Chief Hutchison’s efforts to expand the police department’s building. A second referendum was presented to the taxpayers and passed on May 17, 1994. This expansion project increased the size of the building from 17,000 square feet to a total of 49,000 square feet at a cost of $4.5 million. Chief Smith increased the authorized strength to 81 officers and, utilizing grant funding sources, was able to purchase a tremendous amount of equipment for officers to assist them in their efforts to combat crime. Chief Smith held the distinction of being the youngest Chief of Police in the City of Dover’s history. Chief Smith retired from the department in 1997. In June of 1997, Keith I. Faulkner was appointed as Chief of Police for the City of Dover Police Department, becoming the tenth Chief of Police. For the year ending in 1997, there were 81 sworn officers, 26 civilian employees, 5 volunteers, and a budget of $7,040,000.00. The 1997 Annual Report to City Council reported a total of 24,912 calls for service, 11,502 traffic arrests, 3,842 criminal arrests, 2,277 traffic accidents, and 10,386 city ordinance summonses issued. In August 1997 the department moved into the new police station, which included many improvements and additional features. Some of these included a new holding facility that meets national accreditation and federal standards, with separate holding areas for men, women, and juveniles. The indoor firing range was upgraded to meet OSHA standards. Separate locker rooms were provided for men and women, along with a separate and secure communications center. A Public Assembly room that can accommodate 125 people was opened to community groups. An evidence processing room was also added for use in the proper processing of recovered property and evidence. The facility was brought up to ADA standards as well as the current fire code (the old building had only one stairway to the second floor, which violated current fire code standards). The Records Unit was upgraded and expanded with a secure storage facility for police records. 
Under the leadership of Chief Faulkner, the City of Dover Police Department gained the status of National Accreditation. Chief Faulkner retired from the department in March of 2001. On March 2, 2001, the Honorable Mayor of the City of Dover, Mayor James L. Hutchison, swore in Major Jeffrey Horvath as Chief of Police. Upon being sworn in, Chief Jeffrey Horvath became the eleventh Chief of Police for the City of Dover Police Department. He also became the youngest Chief of Police, replacing Chief J. Richard Smith (1992–1997), who was previously the youngest Chief in the department’s history. On March 24, 2001, the most dreaded call any police officer can receive occurred. Chief Horvath was notified that a member of the Dover Police Department, Pfc. David Spicer, had been shot in the line of duty. Pfc. Spicer, along with Probation and Parole Officer Doug Watts, was working the Safe Streets Program within the City of Dover. They were attempting to effect an arrest of a suspect for a drug transaction. During a brief foot pursuit, the suspect stopped, turned, and fired upon Pfc. Spicer, striking him several times. Probation and Parole Officer Watts returned fire at the suspect, as did Pfc. Spicer. Probation and Parole Officer Watts has been credited with saving Pfc. Spicer’s life. The suspect was apprehended after an extensive manhunt. Pfc. Spicer survived his wounds. Through this trying time and other traumatic events, Chief Horvath stood by his officers in an unwavering stance. He maintained the morale and dedication of the Dover Police Department in the truest sense of the word. In September 2001, Chief Horvath and his staff were able to obtain permission from the Mayor and City Council to create, and promote officers into, two (2) additional Sergeant positions and three (3) new Corporal positions, for a total of seven (7) officers being promoted. This was quite a large increase in rank all at one time; such an increase had never been seen in the history of the Dover Police Department before. This was quite an achievement for our Chief of Police and his staff. May 22, 2002 was a fantastic day for the Dover Police Department and Chief Horvath; Pfc. David Spicer was medically cleared by his doctor to return to full duty. His recovery was approximately fourteen (14) months in duration and included many long days of hard, intensive physical therapy and dedication on Pfc. Spicer’s part. His determination to recover from his injuries and to return to full duty as a police officer for the City of Dover Police Department gave him the strength to accomplish this goal. His family and many friends assisted him. Pfc. Spicer was assigned to the Criminal Investigation Unit. In December 2002, inspectors from CALEA returned to the Dover Police Department for our first re-accreditation. The three inspectors were from different departments around the country. They spent three days inspecting every aspect of the department. The inspectors graded the department very high on our work following CALEA guidelines. During January 2003, Chief Horvath was able to re-create a position which had been lost approximately five years previously. This position held the rank of Lieutenant and was in charge of the Special Enforcement Unit. With the establishment of this newly created position, three (3) officers received promotions: one (1) Lieutenant, one (1) Sergeant, and one (1) Corporal. 
The functions of the Special Enforcement Unit, which comprises the Motorcycle Unit, Community Policing Unit, Parking Enforcement Unit, and Animal Control Unit, were transferred from the Patrol Unit Commander to the newly created Special Enforcement Unit Commander. August 5, 2004 was another milestone for the Dover Police Department and Chief Jeffrey Horvath. On this date, the department retired the badges which had been worn by members of the Dover Police Department for over 30 years. The new shield project was assigned to Captain Ray Sammons and, after approximately a year and a half, a design was approved by Chief Horvath. The new shield is larger than the old badge and has a colored City of Dover seal in the center. On September 9, 2004, the department received grant funding from the COPS Office enabling it to place two officers in our local schools, an endeavor that had been in the planning phase for some time. With this funding, one officer was assigned to Dover High School while the second officer was assigned to Central Middle School. 2004 was also the year when the City of Dover, the Dover Police Department, and the Delaware Department of Transportation entered into an agreement to participate in a pilot program regarding Red Light Video Camera Enforcement. During 2004 a camera was installed at the intersection of U.S. Route 13 and Webbs Lane. During 2005, five additional red light cameras were to be installed within the city of Dover in an attempt to reduce red light violations and, more specifically, the number of intersection-related accidents and possible fatalities. The pilot program was ultimately to include 20 red light cameras statewide. 2005 proved to be a very busy year for the Dover Police Department. The department had been in the planning and development stages for a few years on several projects scheduled to be started and completed that year, including upgrading the 800 MHz dispatch radios, installing new dispatch console furniture, upgrading the 911 telephone system, joining the statewide mapping program, and building a central server room on the second floor. On July 1, 2005, the authorized strength of the Dover Police Department was increased from 87 sworn members to 90 sworn officers. With the addition of the three new officers, the department brought back, on a full-time basis, the Quality of Life Task Force. The task force first appeared during calendar year 2004 and was a huge success, with seven officers assigned to concentrate their efforts on quality-of-life offenses. On November 27, 2005, the 9-1-1 Center located at the Dover Police Department was officially vacated, and its operations resumed at the Kent County Dispatch Center for the first time since the department began 9-1-1 operations in August 1997. Demolition of the old 9-1-1 console furniture then began in preparation for the installation of new console furniture, radios, and telephones, equipment obtained through grant funding with a price tag of over one million dollars. During December 2005, the Dover Police Department again welcomed the inspectors from CALEA, who arrived to conduct our second re-accreditation. As before, the inspectors were from different police departments around the country. They spent three days inspecting every aspect of the department. The inspectors rated the department very highly in our efforts to follow CALEA guidelines and standards.
In April of 2010, following graduation from the FBI National Academy, Chief James E. Hosfelt Jr. was appointed by Mayor Carlton Carey as the 12th Chief of Police for the City of Dover Police Department after serving 22 years in both the Operations and Administrative Divisions of the Police Department. Chief Hosfelt's career began in 1988, when he attended the police academy after serving seven years of active duty with the U.S. Air Force. At the time Chief Hosfelt assumed command of the Dover Police Department, there were 122 employees, including 91 sworn officers and 31 civilian employees, who were responsible for police services to 37,000 residents living within 40 square miles. The budget for the police department was approximately 13 million dollars. In 2010, Chief Hosfelt created the Crime Scene Investigation (CSI) Section. The new section was assigned to the Criminal Investigations Unit and was staffed by one detective who was required to possess prior Criminal Investigations Unit experience and be a graduate of the prestigious National Forensics Academy at the University of Tennessee. The duties assigned to this section included the processing of major crime scenes, DNA collection, and the processing and tracing of firearms and ammunition seized during arrests. With the rise in violent crime, and the belief that it was tied to criminal gang activity, Chief Hosfelt worked in cooperation with the Capital School District to establish the "GREAT" middle school education program. The acronym GREAT stands for Gang Resistance Education and Training. This program is geared to preteen students to make them aware of the detriments of gangs and their effect on families. The School Resource Officers who formerly taught DARE made the much-needed transition, teaching this program to 6th graders at William Henry Middle School. The Dover Police Department has always been a great place to work. The officers and staff are proud of their ability to proactively serve our community while being known as a family-first agency. As a result, the employees of the department voted the agency a Delaware Top Ten Workplace, as recognized by Delaware Today magazine. In 2011, the Dover Police Department continued to "do more with less". Budget constraints made it necessary for the men and women of the Dover Police Department to work harder and smarter than ever due to the reduction of staffing levels. In spite of that, the department underwent its fifth National CALEA accreditation and was awarded the "Accreditation with Excellence" award, which is the gold standard for public safety agencies. Never before had the department received this recognition. The Dover Police Department was one of 35 law enforcement agencies nationwide to receive this honor and the only one within the State of Delaware. The Special Operations Response Team (SORT) has always been a team composed of veteran officers who have completed extensive physical testing as well as firearms proficiency. For nearly 20 years, Chief Hosfelt worked as a member of this team or commanded it, and he saw the need for updated equipment. In 2012, Chief Hosfelt secured a grant through Homeland Security which allowed the department to purchase a Bearcat Armored Response Vehicle and a new inventory of patrol rifles. Also in 2012, the Dover Police Department partnered with the Bureau of Alcohol, Tobacco and Firearms and assigned a detective to their task force. The mission of the task force was to focus on gang activity, gun trafficking, and violent crime within the State of Delaware.
The Dover Police Department began a Veterans Recognition Program after realizing a need to identify those officers who had proudly served our country. The department's veterans were recognized in the police department's Annual Report and provided with a ribbon of stars and stripes to wear on their uniforms. In 2013, Chief Hosfelt continued to find new ways not only to solve crime but to prevent it, making Dover a safer city through information sharing. Clearance rates in all major crime reporting categories continued to be among the highest in the country, something our department had grown accustomed to throughout the years. Without the possibility of increasing staffing levels, Chief Hosfelt and his staff worked with the office of the Mayor for increased funding to improve the Downtown Security Camera Program. The program began in 2009 with 6 cameras, and by the end of 2013 the department had increased the number of cameras throughout the downtown Dover area to 35. While it was not economically feasible to put a police officer on every corner, Chief Hosfelt felt it was possible to put a camera on every corner. The camera project has proven itself invaluable and has helped to solve and prevent homicides, illegal drug activity, and other violent offenses. Due to close working relationships with state legislators, Chief Hosfelt was able to change existing state laws, allowing all municipal and county police departments to hire retired law enforcement officers as Sex Offender Registry Enforcement Agents. It was believed that hiring retired officers to perform this function would have a positive impact on the budget and would free up sworn officers currently serving in this capacity to serve in other, more critical areas of the department. Chief Hosfelt and his staff also worked to improve community relations through the Public Information Officer (PIO) and created a Public Affairs Office in the fall of 2013. Cpl. Mark Hoffman was assigned the responsibilities of the office. This year saw the Dover Police Department adding social media platforms to the responsibility of the PIO. Through platforms such as Facebook, Twitter, YouTube, the MyPD mobile app, and the RAIDS Online Crime Mapping program, the department has been able to connect with the citizens of Dover more quickly and efficiently than ever before. The PIO sends safety messages, public service announcements, crime alerts, educational videos, and more through social media outlets. Since the inception of the program in October of 2013, the department has seen tremendous success in solving crimes, crime prevention, public communication, and reputation management. After serving four years as Chief of Police and 26 years with the department, Chief Hosfelt retired in April of 2014, and Chief Paul Bernat was appointed as his successor by Mayor Carlton Carey. Chief Bernat wasted no time in taking the helm of the City of Dover Police Department, starting by redefining and building on the success of the Public Information Officer position and the social media outreach described above.
The Public Affairs Office also works closely with the Community Policing Unit on various community outreach projects such as National Night Out, Holiday Heroes, coat drives, and community meetings. In April 2014, Chief Bernat reinstated the Police Prosecution Project at JP7, allowing officers to remain on the streets instead of in the courtroom. In September of 2014, Chief Bernat added a police officer, creating an additional School Resource Officer position at the Parkway Academy Central School. In November 2014 the Dover Police Department installed a prescription drug drop-off box in the lobby of the police department; the box is the responsibility of the DVOC Unit. In two months the DVOC Unit received a total of 104 lbs. of miscellaneous prescription drugs which could otherwise have fallen into the hands of children or illicit drug offenders. In January of 2015, Chief Bernat also aggressively pursued funding from the City of Dover to hire more police officers. As a result of his efforts and the efforts of his staff, the City of Dover Police Department gained an authorized strength of 103, an increase of 10 police officers. With these additional officers, special units were able to receive more personnel to help better combat crime and serve the community. Units that received additional personnel were: the Street Crime Unit, the Criminal Investigation Unit, the School Resource Officer Unit, the Motorcycle Unit, and the Planning and Training Unit. Chief Bernat was able to accomplish this much-needed reorganization by civilianizing the accreditation unit and the sex offender unit, thus freeing up more officers to be strategically placed in areas that would be most beneficial to the police department and the City of Dover. The civilianizing of the positions set the authorized strength of police officers at 101 and added three civilian positions, for a total of 40 civilian employees. In March 2015, the Dover Police Department again welcomed the inspectors from CALEA, who arrived to conduct our re-accreditation. As before, the inspectors were from different police departments around the country. They spent three days inspecting every aspect of the department. During this re-accreditation the department received the Meritorious Award for maintaining 15 years of accreditation. In May 2015, Chief Bernat re-established the City of Dover Police Department Cadet program. This program was originally created several years earlier but had not been utilized for many years. The cadet program allows persons 18 or older who are successful in the hiring process the opportunity to patrol the downtown streets of Dover, the library, and other designated areas of the city, giving more of a police presence and security to the community. Cadets provide security to the businesses on Loockerman St. and the Dover Library as they routinely patrol the area on their bicycles and on foot. Chief Bernat recognized the need to look for patterns and to predict crimes based on data, and a full-time civilian Crime Analyst/Accreditation Manager position was added in July 2015. In December 2015, Chief Bernat was honored to promote an unprecedented 21 officers throughout all ranks of the department. The promotion ceremony was so large it needed to be moved from the police station to the Schwartz Center in downtown Dover. Recognizing the need for leadership development and succession planning, Chief Bernat sent six newly promoted officers to prestigious 10-week command schools.
During 2015 and 2016, two attended the FBI National Academy in Quantico, Virginia, and four attended the Northwestern Police Command School. Through Chief Bernat's dedication to the City of Dover and the Dover Police Department, in January of 2016 he was able to secure a $580,000 grant from the State of Delaware Joint Finance Committee. Such funding was unprecedented. The grant was utilized to put more cameras up in the downtown area of Dover, fund the cadet program until June 30, 2017, create foot patrols in the downtown high-crime areas, create and supply the Police Athletic League, create a camera monitoring room in Dover Police dispatch, and fund various community outreach programs. Chief Bernat continuously focused on building bridges with the community. Firmly believing in community security and safety, Chief Bernat focused on crime prevention by increasing the downtown camera system from its original 35 cameras to an astounding 108. This created the necessity for a camera monitoring room, which was quickly added and has become instrumental in solving and preventing crimes within the city. In response to an increase in violent crimes, in January 2016 Chief Bernat created the Street Crimes Unit. This seven-officer unit, headed by a sergeant, has been instrumental in removing illegal guns from the city of Dover. In 2016 there were over 100 guns taken off the city streets. February 2016 marked the initiation of the Police Athletic League (PAL) program with the dedication of one full-time police officer, in hopes of connecting with community youth. In July 2016, in response to the heroin epidemic, Chief Paul M. Bernat announced that the Dover Police Department had partnered with the Police Assisted Addiction and Recovery Initiative (PAARI) to establish the department's addiction recovery program, the ANGEL Program. The Dover Police Department is P.A.A.R.I.'s first partner in Delaware. Additionally, under Bernat's direction, the Dover Police Department received the NAACP President's Award in 2016. Announcing the event, the NAACP said, "Under the leadership of Chief Bernat, the Dover Police Department has become the national model for what it means to build bridges of trust or partnerships between law enforcement and the African American Community." Other significant accomplishments during Chief Bernat's tenure include the reorganization of personnel to include an additional Crime Scene Investigator (CSI) position, an additional Planning and Training Officer to handle the increase in training for the young department, and a full-time Firearms Officer. Chief Bernat was also able to obtain a Field Force Team, including new riot gear and a travel trailer for transport and storage of the equipment. Chief Bernat also obtained funding to enhance officer safety by adding an additional 73 AR-15 rifles to operations. Finally, in December 2016, the Dover Police Department launched its first Unmanned Aircraft Systems (UAS) Unit. The Unit consists of three drones with three officers who are certified drone pilots. In May of 2017, Chief Marvin C. Mailey was selected to lead the Dover Police Department. Chief Mailey was the first African-American to be named Chief of the City of Dover Police Department. Chief Mailey began his career with the Dover Police Department in 1993, after working with the Delaware Department of Correction and the United States Air Force.
During Chief Mailey's career, he served in several capacities, including the Criminal Investigation Unit; the Drugs, Vice, and Organized Crime Unit; the DEA Task Force; the Patrol Unit; Patrol Unit Supervisor; Community Policing Supervisor; Deputy Chief; and more. As Chief of Police, Marvin Mailey was a firm believer in community policing and outreach initiatives. Under Mailey's leadership, the Dover Police Department Police Athletic League (PAL) grew significantly, expanding its reach by hundreds in the capital city, forming a Board of Directors, and increasing the number of special outreach events the police department started or became involved in. Chief Mailey also led a zero-tolerance gang initiative, working with several allied police departments to help reduce crime numbers in 2016 and 2017, and he supported the previously created ANGEL program and the opioid awareness initiative. During his tenure, Chief Mailey worked with several community and faith-based organizations to improve relations and communication between the department and the citizens it serves. Chief Mailey identified the need for improved storage space and methods for evidence storage at the department, leading the department through a massive renovation of, and additions to, the main and auxiliary evidence storage facilities. Chief Mailey's leadership also guided the department through tragedy when Cpl. Thomas Hannon died as a result of complications stemming from an on-duty injury in September of 2017. The death of Cpl. Hannon followed the loss of Patrolman Robert DaFonte and Cadet James Watts in February of 2017 as a result of an off-duty automobile accident; Chief Mailey was serving as the Interim Chief at the time of that incident. His guidance, leadership, and compassion helped carry the department through a difficult period. Chief Mailey also led the department to its 8th CALEA (Commission on Accreditation for Law Enforcement Agencies) accreditation. During his tenure, he also created the first strategic plan for the Dover Police Department, setting goals and objectives for the future of the agency. Chief Mailey retired from the Dover Police Department in May of 2019, ending his 25-plus years of service to the City of Dover. Following Chief Mailey's retirement, Mayor Robin Christiansen announced that Major Tim Stump would be the acting Chief of Police until a replacement was named in accordance with the City of Dover ordinance regarding the Police Chief hiring process. The City of Dover began the hiring process in September of 2019, eventually naming Chief Thomas A. Johnson, Jr. the 15th Chief of Police in the history of the Dover Police Department on February 13th, 2020, the first police chief hired from outside the ranks of the department since Chief James E. Turner, Sr. in 1949. After Chief Johnson was sworn in, Major Stump helped with the transition of the Dover Police Department to Johnson's command and continued to run operations while Chief Johnson fulfilled the mandatory training and education required by Delaware law and Council on Police Training (C.O.P.T.) guidelines. The Dover Police Department swore in its 15th police chief, in its 95th year of serving the City of Dover, on February 13th, 2020, in the Chief James L. Hutchison Public Assembly Room at the Dover Police Department. Chief Thomas A. Johnson, Jr. took the Oath of Office, administered by Mayor Robin R. Christiansen, as his wife, Janice Johnson, held the same Bible that has been used to swear in each officer of the Dover Police Department. Chief Johnson is only the second police chief in department history to be hired from outside the ranks of the department, the last being on May 16, 1949, when Mayor William J. Storey recruited James E. Turner Sr., a Major with the Delaware State Police, to serve as Chief of Police, a position he held for over 18 years.
Sensors are used to measure a particular characteristic of an object or device, and the measurement may be made by contact or non-contact methods. Non-contact measurement eliminates the influence of the sensor's intervention on the measured object, improving accuracy and extending the service life of the sensor; its drawback is that the output can be affected by the medium or environment between the measured object and the sensor.

Often sensors incorporate more than one transduction principle, so they can be conveniently classified simply by their input energy form or signal domain of interest (acoustic and sound sensors, for example). One classification method divides physical quantities into two categories: basic quantities and derived quantities. Force, for instance, can be regarded as a basic physical quantity from which pressure, weight, stress, and moment are derived; when we need to measure those quantities, force sensors suffice. Typical sensors by measurand include: temperature (resistance temperature detector (RTD), thermistor, thermocouple); pressure (Bourdon tube, manometer, diaphragm, pressure gauge); and force/torque (strain gauge, load cell).

① The resistive sensor uses the principle of the varistor to convert the measured non-electrical quantity into a resistance signal. Electrochemical sensors are made on the basis of ion conductivity; according to the electrical characteristic formed, they can be divided into potentiometric, conductivity, electric quantity, polarographic, and electrolytic sensors. Amperometric gas sensors are a subgroup of electrochemical gas-sensing devices that can be used for environmental monitoring and clinical analysis of electro-active species in either a liquid or a gas phase. Calorimetric sensors are based on measurement of the heat produced by a molecular recognition reaction, the amount of heat being correlated with the reactant concentration. The displacement sensor, also known as a linear sensor, is a linear device based on metal induction, and knowledge has also accumulated on textile sensors, which now require an appropriate classification and structure. Inverse sensors are sensors capable of sensing a physical quantity, converting it to another form, and also sensing the output signal to recover the quantity in its original form. Finally, the need for accurate measurement of soil moisture at spatial and temporal scales for hydrologic, climatic, agricultural, and domestic applications has led to the development of different soil-moisture measurement methods (Minet et al., 2012).
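The measurand-based classification above lends itself to a simple lookup. The sketch below is illustrative only: the table contents come from the examples just listed, while the dictionary and helper function are assumptions of this sketch, not part of any sensor standard.

    # Minimal sketch: choosing candidate sensor types by measurand.
    # The table encodes the measurand examples listed in the text above.
    SENSORS_BY_MEASURAND = {
        "temperature": ["RTD", "thermistor", "thermocouple"],
        "pressure": ["Bourdon tube", "manometer", "diaphragm", "pressure gauge"],
        "force/torque": ["strain gauge", "load cell"],
    }

    def candidate_sensors(measurand):
        """Return example sensor types for a measurand, or an empty list."""
        return SENSORS_BY_MEASURAND.get(measurand.lower(), [])

    print(candidate_sensors("temperature"))  # ['RTD', 'thermistor', 'thermocouple']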
The classification introduced above covers the basic types of sensor; sensors can be further divided according to particularity: (1) by detection function, into sensors for temperature, pressure, humidity, flow, flow rate, acceleration, magnetic field, luminous flux, and so on; (2) by the physical basis of their operation, into mechanical, electrical, optical, and liquid types; (3) by the scope of the conversion phenomenon, into chemical, electromagnetic, mechanical, and optical sensors. More broadly, a sensor is a device that responds to any change in physical phenomena or environmental variables such as heat, pressure, humidity, or movement; the signal it produces is equivalent to the quantity being measured. Sensors are important not only in nuclear plants but in many other applications.

A further division is by energy relationship. Energy conversion (active) sensors, such as the thermocouple and the piezoelectric quartz crystal, generate their output signal directly from the measured quantity; because the output is itself energy, an active sensor can drive a display or recording instrument as long as it is equipped with the necessary amplifier. Energy control (passive, other-source, or parametric) sensors must first be supplied with auxiliary energy from outside, and the measured object controls the change in that supplied energy; examples include the resistance, capacitance, inductance, differential transformer, eddy current, thermistor, photocell, photoresistor, humidity-sensitive resistor, and magnetoresistive types. A passive sensor's parameter change must be converted into a voltage or current through a measurement circuit and then amplified to drive an indicating or recording instrument. ⑤ The eddy current sensor works on the principle that a metal conductor moving in a magnetic field cuts the field lines and forms eddy currents in the metal; such sensors are mainly used to measure parameters like flow, speed, and displacement. Linearity is a desirable characteristic in all of these: the output should change linearly with the input.

Pyroelectric sensors that detect levels of infrared radiation are, fundamentally, what PIR sensors are made from. Biosensors are based on electrochemical technology and are usually classified either by the method of signal transduction or by the biorecognition principle; the main transduction elements involved in chemical sensors and biosensors coincide. For example, Figure C-1 of a 1989 report (NRC, 1989a) depicts a classification in which sensors are grouped by the energy form in which signals are received and generated (mechanical, thermal, electrical, magnetic, radiant, or chemical) and by the nature of the transduction effect (self-generating or modulating).
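The active/passive split shows up directly in how a reading is obtained. The following sketch contrasts a self-generating thermocouple with a passive thermistor that needs an excitation voltage and a divider circuit. The 41 uV per degree figure is a commonly cited approximation for a Type K thermocouple near room temperature, and the excitation voltage and fixed resistance are assumptions of the example, not values from this article.

    # Sketch: active (self-generating) vs. passive (externally powered) sensing.

    def thermocouple_temp_c(v_measured_uv, seebeck_uv_per_c=41.0):
        # An active sensor generates its own signal: a Type K thermocouple
        # produces roughly 41 uV per degree C near room temperature
        # (approximate, and ignoring cold-junction compensation).
        return v_measured_uv / seebeck_uv_per_c

    def thermistor_resistance_ohm(v_out, v_excite=5.0, r_fixed=10_000.0):
        # A passive sensor only modulates externally supplied energy: here a
        # thermistor sits in the lower leg of a divider fed by v_excite, and
        # its resistance is recovered from the measured output v_out.
        return r_fixed * v_out / (v_excite - v_out)

    print(thermocouple_temp_c(820.0))        # ~20.0 degrees C
    print(thermistor_resistance_ohm(2.5))    # 10000.0 ohms at the divider midpoint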
Health sensors are classified on the basis of application into wellness monitoring, chronic illness and at-risk monitoring, patient admission triage, logistical tracking, in-hospital clinical monitoring, sensor therapeutics, and post-acute care monitoring. In the broadest definition, a sensor is a device, module, machine, or subsystem whose purpose is to detect events or changes in its environment and send the information to other electronics, frequently a computer processor.

Photoelectric sensors play an important role in non-electrical measurement and automatic control technology, and encoders are an example of digital sensors. Electromechanical LIDARs are traditional LIDAR systems, which can be considered first-generation LIDAR sensors for automotive applications. A piezoelectric crystal generates an electrical output (charge) when subjected to acceleration, and magnetic sensors are solid-state devices that generate electrical signals proportional to the magnetic field applied to them. According to the principle of variable resistance, there are corresponding sensors such as the potentiometer, strain gauge, and piezoresistive sensor; according to the principle of electromagnetic induction, there are inductive, differential-pressure, eddy current, electromagnetic, and magnetoresistive sensors. A passive sensor, as noted above, requires an external AC or DC electrical source to power the device, and its matching measurement circuit is usually a bridge circuit or a resonance circuit. Depending on the selected reference, sensors can also be classified into absolute and relative. A sensor with no movable structure is easy to miniaturize and is therefore also called a solid-state sensor. An ultrasonic ranging module, finally, is a product used to measure distance.
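Ranging with such a module reduces to timing an echo: the pulse travels to the target and back, so distance is half the round-trip time multiplied by the speed of sound. A minimal sketch follows, assuming dry air at roughly 20 degrees C; the echo time in the example is invented for illustration.

    # Sketch: distance from an ultrasonic ranging module's echo time.
    SPEED_OF_SOUND_M_S = 343.0  # dry air at about 20 degrees C (assumed)

    def distance_m(echo_time_s):
        # Halve the round trip: the pulse goes out to the target and back.
        return SPEED_OF_SOUND_M_S * echo_time_s / 2.0

    print(round(distance_m(0.00583), 2))  # a 5.83 ms round trip is about 1.0 m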
According to the formation of different electrical characteristics, that is, based on their electrical transduction modes, electrochemical sensors are divided into potentiometric, conductivity, electric quantity, polarographic, and electrolytic sensors, as noted earlier. Based on power requirement, sensors can be classified as active and passive. A sensor, also called a transducer, is a kind of detection device: it receives the measured information and converts it into a usable signal.

A good sensor should have certain characteristics: linearity, high resolution, and adequate sensitivity. Resolution is the smallest change in the input that the device can detect, and sensitivity relates output change to input change; for example, the output voltage of a temperature sensor might change by 1 mV for every 1 degree Celsius. All sensors need to be calibrated with respect to some reference value or standard for accurate measurement.

Semiconductor sensors are made using the semiconductor piezoresistive effect, the internal photoelectric effect, the magnetoelectric effect, and substance changes caused by contact between a semiconductor and a gas; they are solid-state devices that use semiconductors, dielectrics, ferroelectrics, and other sensitive materials. (1) A physical sensor keeps its structural parameters basically unchanged during signal conversion, relying instead on changes in the physical or chemical properties of its sensitive materials to realize the conversion. Potential sensors are made using the pyroelectric, photoelectric, and Hall effects; a magnetic field sensor, for example, can be made according to the Hall effect. Photoelectric devices include the photocell, photomultiplier tube, photoresistor, photodiode, and phototransistor. Sensors are also classified by the range of the electromagnetic region in which they operate, such as optical or microwave. Fiber-optic sensors generally work either by measuring an intensity change in one or more light beams or by looking at phase changes in the beams, caused by making them interact or interfere with one another; fiber-optic sensors can also be classified on the basis of their application.
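Sensitivity and resolution become concrete once a sensor is digitized. The sketch below uses the 1 mV per degree sensitivity figure from the example above, together with a hypothetical 10-bit ADC and 5 V reference; both ADC parameters are assumptions of this sketch, not values from the article.

    # Sketch: what 1 mV/degree-C sensitivity means behind a 10-bit, 5 V ADC.
    SENSITIVITY_V_PER_C = 0.001        # 1 mV per degree C, from the text
    ADC_BITS, V_REF = 10, 5.0          # assumed converter parameters
    LSB_V = V_REF / (2 ** ADC_BITS)    # smallest voltage step the ADC resolves

    def temp_c(adc_counts):
        return adc_counts * LSB_V / SENSITIVITY_V_PER_C

    # Effective resolution in degrees C, i.e. the smallest detectable change:
    print(LSB_V / SENSITIVITY_V_PER_C)  # ~4.88 degrees C per count, too coarse;
                                        # amplify the signal or use a finer ADC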
The types of sensors are very wide, and different criteria can be used to classify them: their conversion principles (the basic physical or chemical effects by which they work), their uses, their output signal types, and the materials and processes from which they are made. Classifying by use makes it easy to select a sensor for a given measurement object, but its disadvantage is that it groups sensors with different working principles into one category; classifying by conversion principle clarifies the underlying physics, but users may find it inconvenient when choosing sensors.

Let's take a closer look at several types of sensors with different working principles. Active sensors generate power within themselves to operate, hence the name self-generating type, and do not require an external power source for their functioning. Resistive sensors generally include the potentiometer type, the contact variable-resistance type, the resistance strain gauge type, and the piezoresistive sensor. Thermal and magnetic sensing elements are mainly used for the measurement of parameters such as temperature, magnetic flux, current, speed, light intensity, and thermal radiation, and magnetoresistive sensing or LR oscillation circuitry gives some of these sensors a competitive advantage in energy efficiency and low price. MOS technology is the basis for modern image sensors, including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras. Electrochemical sensors are mainly used to analyze gas, liquid, or solid components dissolved in liquid, as well as the pH, electrical conductivity, and redox potential of the liquid. Humidity-sensitive components offer a wide humidity-sensing range, high sensitivity, small hysteresis, and fast response speed. A pyroelectric element combined with a heat absorber that converts infrared radiation into heat forms an infrared radiation sensor, that is, a combined sensor; applying this combined sensor in infrared scanning equipment yields an application sensor. Thermocouples, RTDs, and strain gauges are called analog sensors.

Each sensor provides information about a different characteristic of an object. The problem of identifying and classifying an object on the basis of three signals from different sensors is therefore a problem of sensor fusion, the technique of unifying multiple data sources from multiple sensors to produce more consistent, accurate, and useful information than that provided by any individual data source.
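A simple way to realize such fusion is an inverse-variance weighted average, in which less noisy sensors are weighted more heavily. This is a minimal sketch of that one idea, not a full fusion pipeline, and the readings and variances below are invented for illustration.

    # Sketch: fusing redundant readings by inverse-variance weighting.
    def fuse(readings, variances):
        # Weight each reading by 1/variance; the weighted mean is more
        # consistent than any single input, as the fusion definition demands.
        weights = [1.0 / v for v in variances]
        return sum(w * r for w, r in zip(weights, readings)) / sum(weights)

    # Two temperature readings: 20.4 C from a noisy sensor, 20.1 C from a
    # precise one. The fused estimate lands much closer to the precise one.
    print(round(fuse([20.4, 20.1], [0.50, 0.10]), 2))  # 20.15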
Sometimes the use and the principle are combined in a sensor's name, as in inductive displacement sensor or piezoelectric force sensor, to avoid multiplying sensor names. If a sensor is located in the environment it is called ambient, and if it is attached to the user's body it is wearable. Resistance sensors are mainly used for the measurement of parameters such as displacement, pressure, force, strain, torque, air flow rate, liquid level, and liquid flow, while magnetic sensors, made using physical effects of ferromagnetic substances, are mainly used to measure displacement and torque. A thermocouple senses heat energy (temperature) at one of its junctions and produces an equivalent output voltage that can be read by a voltmeter. Any object in the universe produces infrared radiation as long as its temperature exceeds absolute zero.

According to the nature of the output signal, sensors can also be divided as follows. (1) An analog sensor converts the non-electrical quantity to be measured into a continuously changing voltage or current. (2) A combined sensor is composed of different single conversion devices. (3) An applied sensor is a basic sensor or combined sensor integrated with other mechanisms. The sensor is the heart of a measurement system and the first element that comes into contact with the environmental variable to generate an output, and it is easiest to select the required sensor according to the measurement object. The active sensor is similar to a micro-generator, converting the input non-electrical energy into an electrical energy output. Every car, truck, and motorcycle is equipped with a fuel level sensor to measure the amount of gasoline left in the fuel tank; incidents traceable to failed fluid level sensors show how important their proper functioning is.
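A fuel level sender is a convenient worked example of an analog sensor: a float varies a resistance that must be mapped back to a quantity. The sketch below assumes a hypothetical 240-to-33 ohm sender, a common automotive convention, but both endpoints and the linearity assumption are exactly that, assumptions.

    # Sketch: mapping an analog fuel-level sender's resistance to percent full.
    R_EMPTY_OHM, R_FULL_OHM = 240.0, 33.0  # assumed sender calibration points

    def fuel_percent(r_ohm):
        # Linear interpolation between the empty and full calibration points,
        # clamped to the 0..100 range to tolerate out-of-range readings.
        pct = 100.0 * (R_EMPTY_OHM - r_ohm) / (R_EMPTY_OHM - R_FULL_OHM)
        return max(0.0, min(100.0, pct))

    print(fuel_percent(136.5))  # 50.0, i.e. half a tank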
If the input quantities are temperature, pressure, displacement, speed, humidity, light, gas, and other non-electrical quantities, the corresponding sensors are named accordingly: temperature sensors, pressure sensors, weighing sensors, and so on. The output of a digital sensor is divided into three categories: pulse, frequency, and digital output. Ultrasonic sensors convert ultrasonic signals into other energy signals, usually electrical signals. Willard Boyle and George E. Smith developed the CCD in 1969. While some sensors can be completely hidden within a device (a microphone, for example) or can even sense through walls (radio tomography), other sensors require visual contact to function; the size and appearance of a sensor should therefore be considered when assessing its degree of visibility.
by Ali Shariati The term "martyr," derived from the Latin root "mort," implies "death and dying." "Martyr" is a noun meaning "the one who dies for God and faith." Thus a martyr is, in any case, the one who dies. The only difference between his death and that of others is to be seen in the "cause." He dies for the cause of God, whereas the cause of the death of another may be cancer. Otherwise, the essence of the phenomenon in both cases, that is to say, death, is one and the same. As far as death is concerned, it makes no difference whether the person is killed for God, for passion, or in an accident. In this sense, Christ and those killed for Christianity are "martyrs." In other words, they were "mortals," because in Christendom the term "martyr" refers to the person who has died [as such]. But a shahid is always alive and present. He is not absent. Thus the two terms, "shahid" and "martyr," are antonyms of each other. As it was said, the meaning of shahid (pl. shuhada), whether national or religious, in Eastern religions or otherwise, embodies the connotation of sacredness. This is right. There is no doubt that in every religion, school of thought, and national or religious attitude, a shahid is sacred. [This is true], even though the school of thought in question may not be religious, but materialistic. The attitude and feeling toward the shahid embodies a metaphysical sacredness. In my opinion, the question from whence the sacredness of a shahid comes needs hair-splitting scientific analysis. Even in religions and schools of thought in which there is no belief in sacredness and the sacred, there is however belief concerning the sanctity of a shahid. This status originates in the particular relation of a shahid to his school. In other words, he becomes a spring of value and sanctity. It is because, at any rate, the relationship of an individual with his belief is a sacred relationship. The same relation develops between a shahid and his faith. In the same way, yet indirectly, the same relationship develops between an adherent to a belief and its shuhada. Thus the origin of the sanctity of a shahid is the feeling of sacredness that all people have toward their school of thought, nationality, and religion. In existentialism, there are discussions which are very similar, in some parts, to our discussions concerning velayat and its effects. Man has a primary "essential" character and a secondary "shaping" character. In respect to the former, every person is the same. Anyone who wears clothes exists! But in the true sense of the term, what makes one's character, that is to say, makes him distinct from other beings, are the spiritual attributes and dimensions, feelings, instincts, and particular qualities, the things that, once a person considers them, he senses (himself) as a particular "I." He realizes himself, saying, "Sum" (I am). From whence do the particular characteristics of "I" come? "I," as a human being, after being born, developed characteristics, attributes, and positive and negative values. Gradually I developed a knowledge of myself. Where does this come from? Heidegger says, "The sum of man's knowledge about his life's environment makes his character, that knowledge being the conscious relation of the existence of 'I' with an external 'thing', 'person', or 'thought'." When I establish a mental and existential relationship with individuals, movements, phenomena, things, thoughts, etc., this relationship finds a reflection in me.
This reflection becomes a part of my essence and shapes my character. Thus man's character is the sum of all his relations with other characters. Consequently my virtue and vice are relative to the virtues and vices of the sum of the individuals, characters, and ideas which surround me and with which I have a relation. This relation can be with a historical entity (if, for example, I read history). We have not had a [direct] relationship with Imam Husayn. But when we intellectually meet him through a book or words, he becomes a part of our knowledge, and then a part of our personal characteristics. In this sense, everyone exists relative to his knowledge and ideals. Likewise, when we give a part of our existence for a cause, that part becomes a part of that cause. For example, in our mind, justice has sacredness. It is one of those values which has become a part of us thanks to our relationship and contact with it. If I donate a thousand dollars of my own money for the establishment of justice, that thousand dollars absorbs the sacredness of justice. As long as it was in my pocket, it was merely one thousand dollars. When I negate it in the way of justice, it is affirmed in another form, because it transforms into the essence of justice. Or, for example, we have some money and we feed a group of poor people. If feeding the poor has the attribute of sacredness, the amount of money which has come out of our pocket for the feeding develops a particular value. In other words, it develops a non-monetary value and adopts a spiritual value. If we had spent the same amount of money for the promulgation of spiritual food, [for example, for] the writing, translating, or publishing of a book, the money finds a new value depending on how sacred the act in question is. In other words, the money negates its existence in a sense, but obtains a new existence and value. In fact, money is an external measure of energy and power. If it is spent on "partying", the energy develops a profane value or, as some may think, a sacred value! Money is like kerosene or gasoline, which can be used to move a machine or to light a lamp. Once it is spent, once it is burned, it turns into a spiritual energy, depending upon the purpose for which it has vanished. What is spent does not have an independent value. The value belongs to me who has spent it. That amount of money was a part of me. Thus the sanctity of the cause for which the money is spent reflects on me. Its value comes back to me. I earn it, because that amount of money was a portion of my existence. The hundred dollars that I have paid for the cause of justice transforms itself into "the sanctity of justice." The sanctity of justice is transformed into "the money," that is to say, something absolutely materialistic and economic. Likewise, if it is spent for feeding the poor, the value of such feeding transports its value to the money spent. But the same amount of money, once spent for filthy partying, does not adopt a value. It rather becomes less than its materialistic value. At this point, we reach a principle: "everything obtains a similar value to that for which it has been spent." As it is negated, it is affirmed. In other words, as its existence is negated, its value is affirmed. In self-annihilation, it reaches the permanence of the purpose, provided that the purpose is something permanent, such as an ideal, a value, freedom, justice, charity, thought, or knowledge.
Money, once spent for the sake of knowledge, goes out of one's pocket and becomes zero; but at the same time it changes into the value of the knowledge for which it is spent. Just as money is a part of my existence, so my existence, my animal life, my instinct, and my time are parts of me. Suppose I spent an hour of my time to earn money. Because the earning of money has no value, the one hour cannot obtain any value, because I have sacrificed that hour for the sake of what does not have value or sanctity. But if I spent the same hour teaching someone something or guiding him without charging him anything, I have sacrificed that hour for a value. That hour takes on the value of the cause for which it was spent. A shahid is the one who negates his whole existence for the sacred ideal in which we all believe. It is natural, then, that all the sacredness of that ideal and goal transports itself to his existence. True, his existence has suddenly become non-existent, but he has absorbed the whole value of the idea for which he has negated himself. No wonder, then, that he, in the mind of the people, becomes sacredness itself. In this way, man becomes absolute man, because he is no longer a person, an individual. He is "thought." He had been an individual who sacrificed himself for "thought." Now he is "thought" itself. For this reason, we do not recognize Husayn as a particular person who is the son of Ali. Husayn is a name for Islam, justice, imamat, and divine unity. We do not praise him as an individual in order to evaluate him and rank him among the shuhada. This issue is not relevant. When we speak of Husayn, we do not mean Husayn as a person. Husayn was that individual who negated himself, with absolute sincerity and with the utmost magnificence within human power, for an absolute and sacred value. From him remains nothing but a name. His content is no longer an individual, but a thought. He has transformed himself into the very school [for which he negated himself]. An individual who becomes a shahid for the sake of a nation, and thus obtains sacredness, earns this status. In the opinion of those who do not recognize a nation as the sum of individuals, but recognize it as a collective spirit above the individuals, a shahid is a spiritual crystallization of that collective spirit which they call "nation." Likewise, when an individual sacrifices himself for the sake of knowledge, he is no longer an individual. He becomes knowledge itself. He becomes the shahid of knowledge. We praise liberty through an individual who has given himself to liberty; we do not praise "him" because he was a good person. This is not, of course, in contradiction with the fact that, from God's perspective, he is still an individual, and in the hereafter he will have a separate destiny and account. But in society, and by the criterion of our school, we do not praise him as an individual; we praise the thought, the sacred. At this point, the meaning of the word "shahid" becomes all the more clear. When the belief in a sacred school of thought is gradually eroding, about to vanish or be forgotten in a new generation due to a conspiracy, suddenly an individual, by negating himself, re-establishes it. In other words, he calls it back again to the scene of the world. By sacrificing his existence, he affirms the hitherto vanishing existence of that ideal. For this reason, he is shahid (witness, present) and mash-hood (visible). He is always in front of us.
The thought also obtains presence and permanence through him. It becomes revived and obtains a soul again. We have two kinds of shahid, one symbolized by Hamzah, the master of martyrs, and the other symbolized by Husayn. There is much difference between Hamzah and Husayn. Hamzah is a mujahid and a hero who goes (into battle) to achieve victory and defeat the enemy. Instead, he is defeated, is killed, and thus becomes a shahid. But this represents an individual shahadat. His name is registered at the top of the list of those who died for the cause of their belief. Husayn, on the other hand, is of a different type. He does not go (into battle with the intention of) succeeding in killing the enemy and winning victory. Neither is he accidentally killed by a terroristic act of someone such as Wahshi. This is not the case. Husayn, while he could stay at home and continue to live, rebels and consciously welcomes death. Precisely at this moment, he chooses self-negation. He takes this dangerous route, placing himself in the battlefield, in front of the contemplators of the world and in front of time, so that [the consequence of] his act might be widely spread and the cause for which he gives his life might be realized sooner. Husayn chooses shahadat as an end or as a means for the affirmation of what is being negated and mutilated by the political apparatus. Conversely, shahadat chooses Hamzah and the other mujahidin who go for victory. In the shahadat of Husayn, the goal is self-negation for the sanctity [of that ideal] which is being negated and is gradually vanishing. At this point, jihad and shahadat are completely separate from each other. Ali speaks of the two concepts in two different contexts with two [different] philosophies. Al-Jihad 'izzun lil Islam ("Jihad is glory for Islam.") Jihad is an act the philosophy of which is different from that of shahadat. Of course in jihad there is shahadat, but the kind which Hamzah symbolizes, not the one Husayn symbolizes. Al-Shahadat istizharan 'alal-mujahadat ("Shahadat is exposing what is being covered up.") Yes, such is the goal of shahadat, and thus it is always different from jihad. It is discussed in a different chapter. Jihad is glory for Islam, but shahadat is exposing what is being covered up. This is how I understand the matter. Once upon a time, a truth was an appealing precept. Everyone followed it and it was sacred. All powers surrounded it. But gradually, in time, because that truth did not serve the interests of a minority and was dangerous for a group, it was conspired against in order to erase it from the minds and lives of the people. In order to fill its empty place, some other issue was put in its stead. Gradually the original issue was completely lost, and in its place other issues were discussed. In this situation, the shahid, in order to revive the original issue, sacrifices his own life, and thus brings the démodé precept back to attention by repelling its sham substitute. This is the very goal. At the time of Husayn, the main issue after the Prophet was that of leadership. The other issues were marginal. The main issue was: "Anyway, who is to rule and supervise the destiny of the Muslim nation?" As we know, during the entire reign of the Umayyads, this remained the issue. Uprisings, and thus the major crises of the Umayyads, all boiled down to this very issue. People would pour into the mosques at every event and would grab the neck of the caliph, asking him, "On the basis of which ayah or by what reason do you hold your position?
Do you have the right or not?" Well, in the midst of such a situation, one cannot rule. No wonder then that the period of the Umayyads was no longer than a century. During their reign, the Abbasids, who were more experienced than the Umayyads), de-politicized the people; that is to say, they made the people less sensitive to the issue of imamat (leadership) and the destiny of the society. By what means?! By clinging to the most sacred issues: worship, exegesis of the Qur'an, Kalam (theology), philosophy, translation of foreign books, promulgation of knowledge, cultivation, expansion of civilization—so that Baghdad could be an heir to all great cities and civilizations of the world and so that Muslims could become the most advanced of peoples. [But to what real end.] So that one issue should become negated and no one talk about it. For the purpose of reviving the very issue, the shahid arises. Having nothing else to sacrifice, he sacrifices his own life. Because he sacrifices his life for that purpose, he transmits the sacredness of that cause to himself. To God belong both the East and the West. He guides whom He will to a straight path. Thus we have made you an Ommatan Wasatan (middle community) so that you may be shuhada (witnesses) over mankind, and the Apostle may be a shahid (witness) over you. (2:142-143). In this ayah, shahadat does not mean "to be killed." It implies that something has been covered and is about to leave the realm of memory, being gradually forgotten by people. The shahid witnesses for this innocent, silent, and oppressed victim. We know that shahid is a term of a different kind from others. The Apostle is a shahid without being killed. without being killed, the Islamic community established by the Qur'an has the status and responsibility of a shahid. God says, " ... so that you may be shuhada over mankind ...",just as the Apostle is shahid over you. Thus the role of shahadat is more general and more important than that of being murdered. Nevertheless the one who gives his life has performed the most sublime shahadat. Every Muslim should make a shahid community for others, just as the Apostle is an 'Osveh (pattern) on the basis of which we make ourselves. He is our shahid and we are the shuhada of humanity. We have determined that shahid connotes a "pattern, prototype, or example" on the basis of whom one rebuilds oneself. It means we should situate our Prophet in the mid-realm of culture, faith, knowledge, thought, and society, and make all these to accord with him. Once you have done so, and thus have situated yourself in the midst of time and earth, all other nations and masses should rebuild themselves to accord with you. In this way you [as a nation] become their shahid. In other words, the same role that the Apostle has played for you, you will play for others. You will play the role of the Prophet as a human and as a nation for them. It is in this sense that the locution "'Ommatan Wasatan" (a community justly balanced) appears quite relevant to the word shahid. We usually think that 'ummatan Wasatan refers to a moderate society, that is to say, a society in which there is not extravagance or pettiness, which has not drowned itself in materialism at the expense of sacrificing its spirituality. It is a society in which there is both spirit and matter. It is "moderate"; whereas, considering the issue of the mission of this 'ummat, this is not essentially the meaning of wasatan in this locution. Its meaning is far superior. 
It means that we, as an 'ummat, must be the axis of time; that is to say, we must not be a group cowering in a corner of the Middle East, turning around ourselves rather than becoming involved in the crucial and vital issues which form everything and make the present day of humanity and tomorrow's history. We should not neglect this responsibility by engaging in self-indulgent repetition. We must be in the middle of the field. We should not be a society which is ghaib (absent, the opposite of shahid), isolated, and pseudo-Mu'tazilite, but we should be an 'ummat in the middle of the East and the West, between Right and Left, between the two poles, and, in short, in the middle of the field. The shahid is such a person. He is present in all fields. An Ommatan Wasatan is a community that is in the midst of battles; it has a universal mission. It is not a self-isolated, closed, and distant community. It is a shahid community. The opinion I expressed last year concerning shahadat meant that, fundamentally, in Islam shahadat is an independent issue, as are prayer, fasting, and jihad. In the common opinion, by contrast, shahadat for a mujahid of a religion is a state or destiny in which he is murdered by the enemy in jihad. Such is also correct. But what I have expressed as a principle adjacent to jihad—not as an extension of jihad and not as a degree that the mujahid obtains in God's view or in relation to his destiny in the Hereafter—relates to a particular shahadat, symbolized by Husayn. We in Islam have great shuhada, such as our Imams, the first and foremost of whom is Ali, who is the greatest Imam and the greatest man made by Islam. Even though Ali is a shahid, we take Hamzah and Husayn as the ideal manifestations of shahadat. Hamzah is the greatest hero of Islam in the most crucial battle, Uhud (in 627). The Prophet of Islam never expressed so much sadness as he did for Hamzah, even when his own son, Abraham, died, or when some of his greatest companions were martyred. In the battle of Uhud, Hamzah became a shahid due to an inhuman conspiracy contrived by Hind (Abu Sufyan's wife and Muawiyah's mother) and carried out by her slave, Wahshi. The reaction of the Apostle was severe. The people of Medina praise Hamzah so much as a hero that the Saudis have accused them of worshipping him. It shows how much he is glorified, even though he was not from Medina. It was with his acceptance of Islam that the Muslims straightened their stature. At the beginning of the bi'sat (the Prophet's mission), Hamzah was recognized among the Quraysh as a heroic and epic personality. He was the youngest son of Abd al-Muttalib, a great hunter and warrior. After the episode in which the Quraysh insulted the Apostle and he defended him, Hamzah became inclined toward Islam. As he became Muslim, the Muslims no longer remained a weak and persecuted group. Indeed, they manifested themselves as a group ready for a showdown. Afterwards, as long as there was the sword and personality of Hamzah, other personalities were eclipsed. Even the most sparkling epic personality of Islam, that is to say, Ali, was under his influence. It is quite obvious that in the battle of Uhud the spearhead was Hamzah, followed by Ali. You know that when Hamzah was killed due to that filthy and womanly conspiracy, the Apostle became very angry and sad. When he attended the body of Hamzah, the ears, eyes, and nose of the latter had already been cut off.
Hind had made frightening ornaments of these for herself. A man who had taken an oath to drink the blood of Hamzah fulfilled his vow at Uhud. Muhammad, near the corpse of this great hero, this young and beloved son of Abdul Muttalib, his own young uncle, spoke so angrily and vengefully that he immediately felt sorry, and God warned him. Muhammad vowed that at the first chance he would burn thirty of the enemy as a blood reprisal for Hamzah. But the heavens immediately shouted at him that no one except God, Who is the Lord of fire, has the right to burn a human being for a crime. Thus the Apostle broke his vow. Since God took this sense of vengeance from him, he tried to console himself by reciting a eulogy for Hamzah. On his return to Medina, the families were mourning their beloved ones; but no one was crying for Hamzah, because he had no relatives or home in Medina. He was a lonely immigrant. The Apostle, with a tenderness of feeling unexpected from a heroic man like him, raised a wailing complaint as to why no one cried for Hamzah, the son of Abdul Muttalib, "the hero of our family." And behold this tender feeling: a Medinan family came to the Apostle and gave him condolences, saying, "We will cry for Hamzah's death and the Apostle will eulogize ours." And he thanked them. At any rate, in the history of Islam, for the first time, Hamzah was given the title Sayyid al-Shuhada (the Master of Shuhada). The same title was later primarily applied to Husayn. Both are Sayyid al-Shuhada, but there is a fundamental difference between their shahadat. They are of two different kinds which can hardly be compared. Hamzah is a mujahid who is killed in the midst of jihad, but Husayn is a shahid who attains shahadat before he is killed. He is a shahid, not only at the place of his shahadat, but also in his own house. From the moment that Walid, the governor of Medina, asks him to swear allegiance [to Yazid] and he says "No!"—the negation by which he accepts his own death—Husayn is a shahid, because shahid in this sense is not necessarily the title of the one killed as such; it is precisely the very witnessing aimed at negating an [innovative] affair. A shahid is a person who, from the beginning of his decision, chooses his own shahadat, even though months or even years may pass between his decision and his death. If we want to explain the fundamental difference between the two kinds of shahadat, we must say that, in Hamzah's case, it is the death which chooses him. In other words, it is a kind of shahadat that chooses the shahid. In Husayn's case, it is quite the contrary. The shahid chooses his own shahadat. Husayn has chosen shahadat, but Hamzah has been chosen by shahadat. The philosophy of the rise of the mujahid is not the same as that of the shahid. The mujahid is a sincere warrior who, for the sake of defending his belief and community or spreading and glorifying his faith and community, rises so that he may break, devastate, and conquer the enemy who blocks or endangers his path; whether in attack or in defense, this is jihad. He may be killed in this way. Since he dies in this way, we entitle him "shahid." The kind of shahadat symbolized by Hamzah is a tragedy suffered by a mujahid in his attempt to conquer and kill the enemy. Thus the type of shahid symbolized by Hamzah refers to the one who gets killed as a man who had decided to kill the enemy. He is a mujahid. The type of shahid symbolized by Husayn is a man who arises for his own death.
In the first case, shahadat is a negative incident. In the latter case, it is a decisive goal, chosen consciously. In the former, shahadat is an accident along the way; in the latter, it is the destination. There, death is a tragedy; here, death is an ideal. It is an ideology. There, the mujahid, who had decided to kill the enemy, gets killed. He is to be wailed over and eulogized. Here there is no grief, for shahadat is a sublime degree, a final stage of human evolution. It is reaching the absolute by one's own death. Death, in this case, is not a sinister event. It is a weapon in the hands of the friend, who with it hits the head of the enemy. Even when Husayn is completely powerless to defend the truth, he hits the head of the attacking enemy with his own death. Shahadat has such a unique radiance; it creates light and heat in the world and in cold and dark hearts. In paralyzed wills and thoughts, immersed in stagnation and darkness, and in memories which have forgotten all the truths and reminiscences, it creates movement, vision, and hope, and provides will, mission, and commitment. The thought "Nothing can be done" changes into "Something can be done," or even "Something must be done." Such a death brings about the death of the enemy at the hands of the ones who are educated by the blood of a shahid. By shedding his own blood, the shahid is not in the position to cause the fall of the enemy, [for he can't do so]. He wants to humiliate the enemy, and he does so. By his death, he does not choose to flee the hard and uncomfortable environment. He does not choose shame. Instead of a negative flight, he commits a positive attack. By his death, he condemns the oppressor and provides commitment for the oppressed. He exposes aggression and revives what has hitherto been negated. He reminds the people of what has already been forgotten. In the icy hearts of a people, he bestows the blood of life, resurrection, and movement. For those who have become accustomed to captivity and thus think of captivity as a permanent state, the blood of a shahid is a rescue vessel. For the eyes which can no longer read the truth and cannot see the face of the truth in the darkness of despotism and estehmar (stupefaction), all they see being nothing but pollution, the blood of the shahid is a candle light which gives vision and [serves as] the radiant light of guidance for the misguided who wander amidst the homeless caravan, on mountains, in deserts, along byways, and in ditches.
Length measurement is considered an important area of mathematics, as it is used constantly in our everyday lives. It develops mathematical thinking, reasoning and logic. Mathematics in early childhood includes two main domains, geometry and number, and length measurement connects the two. That being said, measuring is a complex skill that rests on a set of foundational concepts, such as: conservation, transitivity, iteration of a standard unit, understanding of the attribute, equal partitioning, accumulation of distance, origin, and relation to number. The literature indicates that, sadly, many children do not acquire basic measurement skills or use them properly. They measure in a faulty manner, do not apply measurement correctly in relevant situations, and have difficulty explaining measurement procedures. Measurement in early childhood must be taught in a procedural manner within authentic situations in order to be relevant to children's lives. In order for children to learn measurement in a significant fashion, they need to be active participants in structured yet authentic games. In this paper, several activities that demonstrate cognitively appropriate instruction of length measurement for early childhood educational settings will be introduced. This article is about developing length measurement skills in early childhood. One has to think about how children's spatial sense can be advanced by activities that begin with play and include key concepts of measurement. Different sciences measure different quantities, and to give phenomena an objective character, different units of measure are used. The history of mathematics in general, and the history of geometry in particular, started from measurement. The word "geometry" itself means "earth measurement" in Greek: geo = earth, metro = measurement (Nitabach & Lehrer, 1996). In fact, measuring any physical quantity requires choosing appropriate units of measurement, which are fixed and known quantities for each physical quantity: weight, temperature, velocity, volume, angle and more. Typically, for measuring any physical quantity there are several units of measurement. In ancient cultures, the most accessible sizes were used as units of measurement - parts of the human body, such as the cubit or the handbreadth, or sizes related to the human body, such as a step. Units for measuring volume are also known from these cultures. When you measure something, you actually define its properties in relation to agreed units of measure. Over the centuries, humans have developed methods of measurement that are common in most parts of the world (Morris & Langari, 2012). Humans use units of measure routinely in their lives, but it should not be forgotten that defining these units is a complex and difficult task, sometimes requiring complicated technological solutions in order to represent the unit accurately and consistently (Clements & Sarama, 2014). Whereas in the past units of measurement were determined by social agreement on a certain size that was available to all those involved in the measurement, today the goal is to link units of measurement to natural, independent phenomena that exist in nature.

Measurement in early childhood

It is commonly known that the understanding of measurement already develops in early childhood, i.e., in the preschool years.
Children in preschool are aware of continuous attributes such as mass and length, but they cannot measure or quantify them accurately. Preschool children have been found to struggle with reliable judgements about which amount of water or clay, for example, is more. Children know that if they have a little water and are given more, the overall amount is bigger, because they use perceptual clues. However, towards the age of five, children can move beyond perceptual clues and reason about measured quantities. Seo and Ginsburg (2004) claim that children in early childhood encounter quantities and discuss them. At the beginning they learn to use words that describe the quantity or magnitude of attributes. Afterwards they learn to compare two objects directly. Then they recognize equality or inequality (Boulton-Lewis et al., 1996), which advances them towards measurement. This will be examined in detail in the case of length. As a preliminary definition, it can be said that length is a characteristic of an object that may be given a numeric value by quantifying the span between the object's two endpoints. Distance refers to the empty space between two points. Learning about length and measuring length and distance is more complex than that. Measuring consists of two aspects: one is the identification of a unit of measure, and the second is subdividing the object by that unit and placing the unit along the object (end to end). Identification and subdivision are two hard and complicated activities that are often not taken into consideration in the traditional curriculum (Clements & Stephan, 2004; Clements & Sarama, 2014; Dooley et al., 2014). In this article a discussion regarding length will be presented in the next two sections. The first section will present several key concepts that underlie measurement. The second section will present research-based teaching methods that help children develop skills and concepts in length measurement.

Concepts in linear measurement

Some important concepts underlie children's learning about length measurement. These concepts can be used, as seen in Table 1, to understand how children think about space as they go through the physical exercise of measurement (Clements, 1999; Lehrer et al., 2003). It is agreed that the ideas presented form the basis of measurement and should be taken into account in the teaching of measurement of any kind (Boulton-Lewis et al., 1996; Clements & Sarama, 2000, 2011; Kamii, 2006). With these ideas in mind during teaching, kindergarten teachers will be able to better interpret children's understanding and ask them leading questions that help the children build these ideas themselves. As said, traditional teaching does not advance the building of these ideas among children, and therefore the question "what are the activities that can build these ideas?" should be asked. The concept of length is critically important both in everyday life and in formal geometry. In everyday life, one uses length to describe the size of objects or the distance one has covered. In geometry, length adds accuracy to descriptions of shapes and bodies (Battista, 2006). Despite its importance, and although it seems simple, the concept of length can be very difficult for children to understand. The critical question is: how do children develop a concept of length?
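The unit-iteration idea described above can be stated compactly; the following formalization and the numbers in it are our own illustration, not the article's. If a unit of length $u$ is laid end to end along an object of length $L$, with no gaps and no overlaps, the resulting count is

\[
n = \frac{L}{u}, \qquad \text{e.g. } L = 30~\text{cm}: \quad u = 5~\text{cm} \Rightarrow n = 6, \qquad u = 10~\text{cm} \Rightarrow n = 3.
\]

The same object thus receives a larger number when a smaller unit is used. This inverse relation between unit size and count is precisely what children must grasp before a numeric answer to "how long?" becomes meaningful, and it underlies several of the misconceptions discussed below.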
Length concept and its measurement

The mathematics curriculum at different ages allocates a significant place to the learning of measurement in general, and of length measurement in particular. In spite of this repeated instruction in kindergarten and elementary school, research shows only limited understanding of measurement principles; for example, many children do not know what a length unit is, what the measured length is if a subdivided repeated unit is used, and what the rules are for using subdivided repeated measurement units. These difficulties are explained by unsuitable teaching that does not allow meaningful experimentation. Usually children are asked to measure different objects, at times with a very simple direct comparison or a measurement using mediators, and the expected answer is a numeric one. The children do not take part in the investigation, the comparison and the logical understanding of the measurement process. Length measurement is part of the kindergarten mathematics curriculum because, unlike measurement of other quantities such as area, volume or weight, these skills can be taught and developed at this age, since length measurement is linear. Many researchers (Clements & Stephan, 2004; Clements & Sarama, 2014; Nunes et al., 1993; Sarama et al., 2011) support the use of versatile means that allow measurement using a mediator, which makes the measurement easier and contributes to the development of early abilities and the building of new knowledge.

Direct and indirect comparison of length

In the first stages of acquiring length-measurement skills, children use direct comparison. Direct comparison is a comparison in which two objects are compared on a common measurable attribute (length, weight, etc.) without using a mediator. In this case children decide whether the measured objects are equal by examining whether they share common beginning and end points. In the next developmental stage, children measure using a mediator. They are essentially comparing two objects using a third object (such as a string or a stick). Usually, the comparison with a mediator is done in the following manner: one of the objects is compared to the mediator, then the mediator is compared to the other object, and conclusions are drawn accordingly (for example, if the door is as tall as the stick, and the stick is longer than the table, the door must be taller than the table). In later developmental stages a measurement unit is used, by examining how many times the unit "covers" the measured object. In that way a numeric answer regarding the size of the object is given, and it depends on the size of the unit being used: foot, fist, cube, etc. (Zacharos & Kassara, 2012). This learning is challenging and occurs over many years. Here is the place to describe just a few common misconceptions and difficulties that children have, which the activities tailored to preschool children attempt to address (Clements & Sarama, 2014).
- Children compare two objects only at one of their ends in order to determine which of the two is longer.
- At times children leave spaces between the measurement units, or overlap the units, during the measurement.
- Some children believe it is necessary to have many copies of a unit to "fill up" the length of the object and will not iterate one copy of a unit (laying it down, marking where it ends, moving it, and so on).
- Some children will "fill up" the length of the object with a unit such as a ruler but will not extend the unit past the endpoint of the object they are measuring. Therefore, they always leave out any fractional part of the unit.
- Children do not understand that the units need to be of equal size in order to give a meaningful numeric answer.
- Children may mix units of different sizes and give a numeric answer that is simply the count of the units they used (for example, reporting "five" after laying three long rods and two short cubes along a strip, a number that corresponds to no single unit).
Children naturally measure almost anything. Early and repeated experiences with cultural tools such as rulers, and with the general epistemology of quantification that characterises many contemporary societies, provide fertile ground for developing a mathematical understanding of measurement. Developmental studies suggest that children's perceptions of measurement reflect a collection of evolving concepts whose coordination gradually develops as a network of relationships, rather than as a unified concept of measurement. This article seeks to promote children's spatial understanding through activities that begin with play and include key concepts of measurement.

Purpose of the Study

The importance of this article lies in the fact that it brings to the forefront the uniqueness of early childhood learning and the appropriate stages of measurement at this age, accompanied by proactive activities. It is important to note again that traditional teaching treats measurement as an empirical procedure only, and not necessarily as a process that requires reasoning and justification. This is a theoretical article, based on the research literature, that discusses the unique ways preschoolers measure length. It should be noted that various curricula (Ministry of Education, 2010; NCTM, 1989) suggest that the subject of measurement in kindergarten and primary school should include concrete experiences, in which students use measurement processes to interact with their environment and actively explore the real world. Children should be able to control the choice of the appropriate size and type of units of measure for given measurement situations (NCTM, 2000). These requirements are important in elementary school but are doubly important in planning activities for kindergarten children. Based on the literature review, which presented the complexity of acquiring the skills required to measure length, it is time to shed light on the unique early childhood learning path and to offer several activities that advance the acquisition of this ability. People at any age actively build mathematical knowledge, but children in preschool are a special group, and their teaching should be planned carefully. Clements (2001) points out two of their special features. First, the ideas that preschoolers develop can be quite different from those of adults. Kindergarten teachers should be especially careful not to assume that children "see" situations, problems or solutions as adults do. Second, young children do not perceive their world, or act within it, as if it were divided into separate disciplines. Well-planned educational practice helps children develop pre-mathematical and mathematical knowledge throughout the day. Activities that simultaneously promote intellectual, social, emotional, and physical development should be planned. Such comprehensive teaching and learning builds on the high motivation of kindergarten children to learn through self-direction.
Such teaching advances a positive view of mathematics as a problem-solving, self-motivated, and self-directed activity at a time when children are first developing beliefs, habits, and emotions about mathematics (Clements, 2001). A child's advancement in mathematics is related to the opportunities he has to engage with mathematics, the way the kindergarten teacher exposes him to it, the type of activities and assignments she presents to him, her ability to track his advancement, and the mediation she performs in order to advance him. Engagement with mathematics in kindergarten should be both incidental and planned. During activity in the kindergarten, the children and the teacher encounter various incidental situations that involve mathematics, or that offer an opportunity to engage with mathematics. It is important that the teacher use such situations and steer them so that they advance the children. Using incidental situations to engage with mathematics is important but not sufficient. Incidental situations do not allow all mathematical subjects to be addressed, and do not always allow them to be addressed gradually; therefore they constitute only some of the intentional mathematical activities in a kindergarten. In parallel, the teacher must plan activities in advance and involve the children in them. A unique project was carried out in one of the famous kindergartens in Reggio Emilia, Italy. Clements (2001) describes the project "Shoe and Meter" from the writings of Malaguzzi (1997), in which a group of children wants another desk identical to one they already have. A local carpenter says he will build the table, but he needs measurements. The children first try to measure with their fingers. They then try to measure with the help of heads, fists, arm widths, and legs. After that the children turn to objects, such as books. They seem to be beginning to realise that using objects will be easier than using body parts. Finally, after working on other measurement tasks suggested by the kindergarten teachers, the children come to realize that they need some kind of standard measure. They start by creating their own rulers. They use a shoe, stepping with it along a strip of paper they have put on the table to mark the distances. In the kindergarten garden, the children compare the height of two plants that grew next to each other (of the same type in the same flower bed, or plants in adjacent beds), using a direct comparison or a comparison with a mediator, such as a pole on which the height of each plant is marked (Ministry of Education, 2010). (1) The children throw a ball from a common starting point. Each child in turn measures the distance from the starting point to where the ball reached. (2) The children throw a ball from a common starting point towards a defined goal and measure the distance between the ball and the goal. This game encourages the use of repeated units, standard or non-standard. It is suitable for children who feel confident in using direct comparison (Ministry of Education, 2010). (A rabbit with a tail – Figure 1): It is possible to tell the story "The King of the Tails" by Ronit Hacham-Herson (1987). The story tells of rabbits living in the woods who hold a contest among themselves every year: the rabbits sit together in a clearing in the forest and measure their tails, and whoever has the longest tail is chosen to be the king. How to measure? Where do you start?
The story is about measuring length, and through it the children come to know different ways of measuring, among them measuring with measuring tools. Inside a shoe box with openings for arms at each end, lay wooden sticks of different lengths. Each child pulls one stick out of the box. The children compare the lengths of the sticks by placing them on the table surface. The child holding the longer stick, or the shorter one, receives both sticks. If the sticks have the same length, they are returned to the box. This game develops the use of a baseline to compare the lengths of two objects (Schwartz, 1995). Accepted units of measure allow us to speak a common language in almost every area of our lives. Through measurement we give quantitative meaning to the environment in which we live. When we measure, we use mathematical skills. Children engage in measurement and in the perception of different quantities and orders of scale even before they develop understanding and measurement skills. As children acquire more and more mathematical concepts, they also develop an understanding of size relations, using thinking skills such as sorting, comparing, estimating, problem solving, distinguishing and generalising. The most significant contexts for developing comprehension and measurement ability in preschool occur when children engage in measurement in real-life situations that require it. In planning the teaching and learning processes in the kindergarten, the kindergarten teacher must combine the subject of measurement with the various topics included in the kindergarten curriculum. In conclusion, this article paves the way for future research aimed at examining the use of strategies and measuring devices among compulsory kindergarten children, through play, in performing a variety of tasks related to measuring length. A variety of materials that allow measurement will be offered, and the children will be asked to explain their measurement procedures.

References

Battista, M. T. (2006). Understanding the development of students' thinking about length. Teaching Children Mathematics, 13(3), 140-146. Boulton-Lewis, G. M. (1987). Recent cognitive theories applied to sequential length measuring knowledge in young children. British Journal of Educational Psychology, 57, 330-342. Boulton-Lewis, G. M., Wilss, L. A., & Mutch, S. L. (1996). An analysis of young children's strategies and use of devices for length measurement. Journal of Mathematical Behavior, 15(3), 329-347. Clements, D. H. (1999). Teaching length measurement: Research challenges. School Science and Mathematics, 99(1), 5-11. Clements, D. H. (2001). Mathematics in the preschool. Teaching Children Mathematics, 7(5), 270-275. Clements, D. H., & Sarama, J. (2000). The earliest geometry. Teaching Children Mathematics, 7(2), 82-86. Clements, D. H., & Sarama, J. (2011). Early childhood mathematics intervention. Science, 333, 968-970. Clements, D. H., & Sarama, J. (2014). Learning and teaching early math: The learning trajectories approach. Routledge. Clements, D. H., & Stephan, M. (2004). Measurement in pre-K to grade 2 mathematics. In D. H. Clements, J. Sarama, & A. M. DiBiase (Eds.), Engaging young children in mathematics: Standards for early childhood mathematics education (pp. 299-317). Lawrence Erlbaum Associates. Dooley, T., Dunphy, E., Shiel, G., O'Connor, M., & Travers, J. (2014). Mathematics in early childhood and primary education (3-8 years): Teaching and learning (Report No. 18). Hacham-Herson, R. (1987).
The King of Tails, or Who Has the Longest Tail. Ramat Aviv, Tel Aviv: The Center for Educational Technology. [Hebrew] Kamii, C. (2006). Measurement of length: How can we teach it better? Teaching Children Mathematics, 13(3), 154-158. Kamii, C., & Clark, B. F. (1997). Measurement of length: The need for a better approach to teaching. School Science and Mathematics, 97(3), 116-121. Lehrer, R., Jaslow, L., & Curtis, C. (2003). Developing an understanding of measurement in the elementary grades. In D. H. Clements & G. Bright (Eds.), Learning and teaching measurement: 2003 yearbook of the National Council of Teachers of Mathematics (pp. 100-121). Reston, VA: NCTM. Malaguzzi, L. (1997). Shoe and meter. Reggio Emilia, Italy: Reggio Children. Ministry of Education. (2010). Israel National Mathematics Preschool Curriculum. http://meyda.education.gov.il/files/Mazkirut_Pedagogit/Matematika/TochnitKdamYesodiHeb.pdf Morris, A. S., & Langari, R. (2012). Measurement and instrumentation: Theory and application. Academic Press. National Council of Teachers of Mathematics. (1989). The curriculum and evaluation standards for school mathematics. Reston, VA: NCTM. National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics. Reston, VA: NCTM. Nitabach, E., & Lehrer, R. (1996). Developing spatial sense through area measurement. Teaching Children Mathematics, 2(8), 473-476. Nunes, T., Light, P., & Mason, J. (1993). Tools for thought: The measurement of length and area. Learning and Instruction, 3(1), 39-54. Sarama, J., Clements, D. H., Barrett, J., Van Dine, D. W., & McDonel, J. S. (2011). Evaluation of a learning trajectory for length in the early years. ZDM, 43(5), 667. Schwartz, S. L. (1995). Developing power in linear measurement. Teaching Children Mathematics, 1(7), 412-416. Seo, K. H., & Ginsburg, H. P. (2004). What is developmentally appropriate in early childhood mathematics education? Lessons from new research. In D. H. Clements, J. Sarama, & A. M. DiBiase (Eds.), Engaging young children in mathematics: Standards for early childhood mathematics education (pp. 91-104). Mahwah, NJ: Lawrence Erlbaum Associates. Zacharos, K., & Kassara, G. (2012). The development of practices for measuring length in preschool education. Skholê, 17, 97-103. Zacharos, K., Antonopoulos, K., & Ravanis, K. (2011). Activities in mathematics education and teaching interactions: The construction of the measurement of capacity in preschoolers. European Early Childhood Education Research Journal, 19(4), 451-468.

Cite this article as: Yaish Tahal, D. B., & Chiș, O. (2022). Measure with pleasure: Length measurement teaching framework in early childhood. In I. Albulescu & C. Stan (Eds.), Education, Reflection, Development - ERD 2021, vol 2. European Proceedings of Educational Sciences (pp. 59-67). European Publisher. https://doi.org/10.15405/epes.22032.6
What is traditional Chinese medicine? How does it work? What's the main difference between Chinese medicine and Western medicine? And how effective is traditional Chinese medicine for women's health? We delve into this and more below.

What is traditional Chinese medicine?

Traditional Chinese medicine (TCM) is an ancient system of treatment that combines philosophical and physical methods to treat disease and illness. Through a combination of physical and conceptual practices, traditional Chinese medicine treats the mind, body and spirit as a whole to achieve a physiological balance of yin and yang.

How old is traditional Chinese medicine?

As one of the world's oldest medical systems, traditional Chinese medicine has a long and rich history of use, and has changed little over the centuries. While some regard Chinese medicine as having first emerged in the shamanistic era of the Shang Dynasty (1600 to 1046 BC), others claim to be able to trace its origins back as far as 5,000 years. Founded on the concept of treating the body holistically and on appreciation of the body's natural ability to return to its balanced state of health, the ancient Chinese practice is now widely used throughout Asia and, increasingly, throughout the rest of the world.

How does traditional Chinese medicine work?

TCM is based on the belief that one's "Qi" ("vital life force") must flow freely and unobstructed through a network of invisible channels or "meridians" in order to maintain good health, both physically and mentally. Correspondingly, it is believed that any blockage or imbalance of Qi in the body could well lead to illness. As such, maintaining a healthy balance of yin-yang and Qi is fundamental to TCM practice, and can be achieved by incorporating various self-care practices such as healthy eating, meditation and acupuncture. The principal goal of TCM is to restore balance in the body by treating the root cause, not just the symptoms. Though slower in action, TCM takes into account not just the ailment itself but the body as a whole, from the physical to the emotional and psychological. TCM practitioners will also take lifestyle and environmental factors into consideration before administering treatment.

What does traditional Chinese medicine treat?

TCM treats a variety of issues ranging from chronic illnesses to acute problems. Some use cases for TCM include:
- Pain (such as joint pain or migraines)
- Stress, anxiety and depression
As TCM takes a holistic approach to treatment, it recognises the body's self-repairing mechanism and uses natural healing methods to help encourage the body's intrinsic self-healing ability.

What about traditional Chinese medicine for women's health?

A woman's body experiences many different hormonal changes throughout her life, beginning at puberty and carrying on to old age. This ever-changing hormonal environment is sensitively associated with our emotions, nutrition and general lifestyle. TCM, along with an adequate diet and exercise routine, can restore hormonal and emotional balance during all of these stages by promoting the body's innate self-healing system. Some examples of traditional Chinese medicine for women's health include:
- Pregnancy-related symptoms
- Menstrual disorders
- Menopause-related symptoms

What techniques are used in traditional Chinese medicine?

Traditional Chinese medicine incorporates several methods designed to rebalance the body and maintain health.
Through these methods, TCM targets the body's acupuncture points to dispel illness.

Acupuncture

Acupuncture involves inserting metallic needles into the surface of the skin; the needles are then manipulated through gentle movements. This method restores the balance between yin and yang, enabling the flow of Qi throughout the body. Acupuncture is commonly used to treat chronic pain, menstrual cramps and morning sickness.

Tui na massage

During a tui na massage, the practitioner applies pressure techniques that vary in force and speed. Some techniques are more "yin": gentle, passive, and meditative. The "yang" approach is more active, dynamic, and physical, creating a more intense sensation. A tui na massage can be used to treat arthritis, stress and digestive conditions.

Moxibustion

Moxibustion is performed by burning the moxa plant close to the body, giving off a strong smell and smoke to facilitate healing. Practitioners use moxibustion to expel dampness from the body and to treat infertility and breech pregnancy.

Cupping and scraping

Cupping is a type of massage that involves placing glass "cups" on the body to create suction. Scraping, or "gua sha," uses a massage tool to scrape along the skin to release toxins. Both methods result in redness and mild bruising around the treatment area and are used for blood disorders such as anaemia, skin problems like cystic acne and eczema, as well as high blood pressure.

Chinese herbal medicine

Chinese herbs range from leaves, roots, stems and flowers to seeds, administered in various ingestible forms such as traditional teas or powders. People take Chinese herbs to combat everyday illnesses as well as long-term disease, ranging from the common cold and diarrhoea to diabetes, cancer treatment and menopause.

Traditional Chinese medicine vs. Western medicine

There's much interest in the topic of traditional Chinese medicine versus Western medicine. Both approaches offer unique benefits and disadvantages, and integrating the two can be useful in treatment. But first…

How effective is Chinese herbal medicine?

Although there is still much to be understood about TCM, there is growing evidence that traditional methods are a safe and effective form of treatment for a large number of conditions. Nearly 200 types of Western medicine have been developed from the 7,300 species of plants and herbs used in Chinese medicine. A common example is ephedrine, an alkaloid used to treat patients suffering from asthma, which was originally derived from the Chinese ma huang plant. Today, scientists continue to source compounds and herbs traditionally used in Chinese medicine to inform the development of new treatments in Western medicine. In fact, TCM has even been used as an adjunct treatment for Covid-19, and is reported to help relieve symptoms and aid recovery. Additionally, acupuncture (a key component of TCM) has long been recognised by the World Health Organisation and the NHS as an effective form of treatment for chronic pain, as well as milder conditions such as nausea and migraines.

What is the difference between Chinese medicine and Western medicine?

At the heart of Western medicine is pathology: the study of the causes and effects of diseases. This foundation allows us to understand the nature of diseases, but can cause us to become overly focused on the disease rather than the patient.
To put it simply, in Western medicine, two patients who suffer from the same illness would – more often than not – be administered identical treatments, irrespective of their individual circumstances. From the TCM perspective, this is too simplistic. Unlike Western medicine's reactive "one-size-fits-all" approach to patient care, Chinese medicine believes in treating the mind, body and spirit as a whole, taking into consideration personal aspects such as living conditions, emotional state and lifestyle habits. To use the same example, two patients exhibiting the same symptoms would usually not receive the same course of treatment, as each would be given a unique diagnosis. Additionally, TCM methods tend to be less invasive, which contributes to a slower speed of treatment compared to Western medication, which offers rapid recovery but is more prone to negative side effects. Although the two methods differ vastly from each other, the growing conversation between the two sides means a sharing of methods and a willingness to learn from one another. In turn, this improves the standard of treatment that we receive!

Traditional Chinese medicine and nutrition

TCM practitioners view food as preventative medicine, with each individual food item falling into the category of either yin or yang. To put it simply, the food you eat has the power to either nourish or diminish your body, meaning that proper nutrition in TCM is key to living a healthy, balanced life.

Using a traditional Chinese medicine food chart

In Chinese nutrition, a balanced diet is one that is free from chemicals and preservatives, and one that includes all five tastes: sour, salty, bitter, spicy and sweet. There are no banned foods according to TCM practice, but following a traditional Chinese medicine food chart can help you easily identify which kinds of food are hot (yang) or cold (yin) and the impact they have on specific parts of the body. Not sure how to structure your daily diet? Head to Pinterest and get inspired by its diverse range of traditional Chinese medicine food charts.

Traditional Chinese medicine ingredients

Where can I buy TCM ingredients in Hong Kong?

Now that you know a little about the history of TCM and its use cases, let us delve into how you can introduce this traditional practice into your own home. In Hong Kong, we're lucky to have access to a whole litany of TCM experts, from Quality Chinese Medical Centre, the first-ever certified TCM clinic in Hong Kong, to Cinci Leung, the founder of CheckCheckCin, a TCM-based multi-channel healthcare brand that provides takeaway tonics and herbal teas, all of which are accessible via its online store. Here are some of our favourite shops for buying TCM ingredients in Hong Kong:
- CheckCheckCin: www.checkcheckcin.com Shop L216, 2/F, Star Annex, Star House, 3 Salisbury Road, Tsim Sha Tsui (multiple locations)
- Eu Yan Sang: shop.euyansang.com.hk Shop 2, G/F, V Heun Building, 138 Queen's Road Central, Central (multiple locations)
- Beijing Tong Ren Tang: cm.tongrentang.com Shop 4, G/F, Ying Kong Mansion, 2-6 Yee Wo Street, Causeway Bay (multiple locations)
- Chin Men Co. Shop C, G/F, 76 Wing Lok Street, Sheung Wan
- Tai Sang G/F, Yu Chu Lam Building, Des Voeux Road West, Sheung Wan
If you're looking for a licensed TCM practitioner in Hong Kong, check out the comprehensive list maintained by the Chinese Medicine Council of Hong Kong.

How do you use Chinese herbs and ingredients?
Once you have sourced all your ingredients, it's time to prepare and use them! Authentic Chinese medicinal dishes are created according to how the body functions. Each kind of meat, grain, herb or vegetable targets a specific area of the human body to enhance the body's natural harmony and self-healing ability. Here are a few of our favourite Chinese herbs and ingredients:

Goji berry

High in vitamin A and zeaxanthin (a carotenoid that protects the eyes from oxidation and light-induced damage) and internationally hailed as a "superfood," goji berries can be eaten raw, cooked or dried, and are commonly found in herbal teas, supplements and heaped on top of Instagram-worthy smoothie bowls. Long used in traditional medicine as an antioxidant to nourish the yin and blood of both the kidneys and the liver, goji berries naturally enhance the immune system and can help with sleep, weight loss and overall wellbeing. Here's a recipe for Goji Berry and Ginger Tea, courtesy of the What to Cook Today blog.
- 1/4 cup goji berries
- 3 cups hot water
- 2cm fresh ginger (thinly sliced)
- 1/4 cup rock sugar
- Wash and drain goji berries in cold water
- Bring 3 cups of water to a rolling boil, turn off the heat and remove from the stove
- Add goji berries, ginger and rock sugar. Cover with the lid and let them steep for 1 hour for maximum flavour

Ginger

Loaded with antioxidants and a natural antibiotic, this spicy rhizome has powerful medicinal properties that aid digestion, neutralise poisons in food and regulate blood sugar levels. Ginger teas and candies are commonly used to counteract nausea, combat the common cold or flu, ease morning sickness and relieve menstrual cramps, all whilst promoting the movement of Qi and keeping the body's yin and yang in balance. Try out this recipe for Lemon and Ginger Tea, as seen in HWC Magazine.
- 5cm ginger root knob, peeled and chopped into 4-5 slices
- 1/2 lemon, cut into wedges
- boiled water for tea
- honey to taste (optional)
- Boil water for tea.
- Place the ginger and lemon slices in a teapot, cover with hot water and steep for about 10-15 minutes.
- Pour your tranquil lemon ginger tea into teacups and serve with a little drizzle of honey if desired.
- You can keep adding more hot water to your ginger and lemon teapot and steep as desired. Depending on how fresh your ginger is, you may be able to steep up to 3-4 times and still have a lovely flavour and aroma.
- Relax and feel the heartwarming effects of the Tranquil Lemon Ginger Tea.

Ginseng

Translating literally to "human root," ginseng is probably the most famous ingredient associated with traditional Chinese medicine. As its shape is thought to resemble a human body, it symbolises the root's potent ability to boost immunity and replenish Qi. Customarily served in the form of tea, ginseng's anti-inflammatory properties are used as a treatment to counteract fatigue, prevent the flu and remedy erectile dysfunction in men. Interestingly, American ginseng is listed as an ingredient in many soft drinks and cosmetic products. A recipe for Chinese Ginseng Chicken Soup by Yang's Nourishing Kitchen:
- 1 whole silkie chicken (substitute a white chicken if silkie is not available)
- 2 medium ginseng roots
- 5cm ginger root, sliced
- 20 dry jujubes (Chinese red dates)
- 2 tbsp goji berries
- 1 tsp sea salt, or to taste
- 1/2 cup rice cooking wine
- water (for general cooking)
- Cut the whole chicken into small pieces by separating between the bones.
- Submerge the chicken in a pot of cold water, then bring the pot to a boil;
Heating the water and chicken together will draw out the most impurities.
- Let the water boil for a couple of minutes. There should be foam and scum floating on the surface of the water. Turn off the heat. Discard this batch of water.
- Rinse the chicken pieces to remove any scum that may be stuck on the chicken. If reusing the same pot to make the soup, rinse the pot to remove any stuck-on scum as well.
- Fill a clean soup pot (I use a traditional clay pot) with 10 cups of clean water. Add the clean chicken pieces, sliced ginger and 2 medium ginseng roots. Bring the soup pot to a boil and simmer with the lid on for 1 hour.
- Remove the ginseng roots from the soup; they should be softened now. Cut the ginseng roots into slices, then add them back into the soup pot. Add 20 clean jujubes (red dates), then simmer for another 30 minutes.
- Add 1/2 cup of rice cooking wine and 2 tbsp of goji berries to the soup pot. Season with sea salt to taste, about 1 tsp. The soup shouldn't taste salty; it should be slightly sweet. Simmer for another 15 minutes, then remove from the heat.
Jujube (Chinese Red Date)
As a highly popular ingredient in Chinese medicine, the jujube or Chinese red date is often prescribed to target the stomach and spleen. TCM holds that blood is formed through good digestion and absorption of food, so if the stomach and spleen Qi are weakened, the blood supply and its function will suffer. An easy Nourishing Red Date Tea recipe, also from the What to Cook Today blog:
- 50g dried red dates (uncored)
- 800ml water
- 1 tsp sugar (optional)
- Rinse the dates with water
- Take each date and use pointed scissors or a small paring knife to create a few slits around the edge on one end of the date. This helps release the flavour into the tea
- Place the red dates in a saucepan. Pour in the water. Bring it to a boil, then lower the heat, cover with a lid and simmer for one hour
- Let it cool down; it is then ready to drink. Sweeten to taste.
Not fancying the traditional route? No problem. Test out one of these alternative recipes instead.
Dragon Fruit and Goji Berry Smoothie recipe, by Jar of Lemons:
- 2 cups frozen dragon fruit (cubed)
- 2 bananas
- 2 tbsp dried goji berries
- 1/4 cup milk of choice
- 1/2 cup frozen raspberries
- 1/2 cup vanilla yoghurt (optional)
- Blend all ingredients.
- Serve and enjoy!
Bon Appétit magazine's Ginger Spritz recipe:
- Thinly sliced peeled ginger
- 1 oz. Lillet
- 1 oz. cava
- Splash of ginger beer
- Place ginger slices against the inside of a rocks glass, fill with ice and add the Lillet and cava.
- Top off with ginger beer and stir gently to combine.
Chinese medicine herbs and women's health
Can Chinese medicine help with fertility?
For thousands of years, Chinese herbs have been used to improve fertility, correct hormonal imbalance and prevent miscarriage. If you are having difficulty conceiving, it may be due to deficiency, excess or stagnation of energy in the body.
What herbs increase fertility?
- Vitex (chaste tree berry)
- Maca (Lepidium meyenii)
- Tribulus terrestris
Read more: Fertility and Chinese herbs
Working with a licensed professional can ensure that you source high-quality, pure herbs specific to your case. It is important to note that overly diluted or contaminated products that contain additives or chemicals can be ineffective and detrimental to your health.
Pregnancy and Traditional Chinese Medicine
Being safe and easy to administer makes TCM an ideal form of treatment during pregnancy and childbirth, and many women turn to it in a bid to avoid the potentially nasty side effects often found with Western medicine. For a smoother pregnancy experience, some choose to incorporate TCM throughout their journey to alleviate common pregnancy discomforts such as swollen feet and nausea.
Some common pregnancy discomforts and TCM remedies:
Swelling and water retention
- Remedy: try this Black Wood Ear and Winter Melon Skin Soup to reduce water retention and promote blood circulation
Abdominal pain
- Remedy: burning moxa around the abdomen can ease the pain, triggering the movement of energy to the area
Breast swelling
- Remedy: done lightly, scraping or gua sha can alleviate swelling around the breast area without bruising
Nausea and vomiting
- Remedy: using acupuncture on pressure points on the wrist can relieve feelings of nausea and morning sickness
Back pain
- Remedy: a gentle tui na massage can greatly reduce the intensity of the pain and allow for increased movement
*Cupping is not recommended for pregnant women.
Traditional Chinese medicine and confinement
What is Chinese confinement?
In Chinese culture, confinement or 坐月子 (zuo yue zi), translating literally to "sitting the moon", refers to a period of time during which a new mother remains confined to the house with her baby so that she can rest and heal. Typically lasting a month, sometimes longer, the period involves the prohibition of certain ordinary activities such as working out, bathing, intercourse and household chores. The tradition is based on the belief that, after giving birth, a woman's body becomes more fragile and more susceptible to illness. If yin (cold) comes into contact with the new mum during this crucial period, she is unlikely to heal properly, hence rules like no cold water or cold foods.
Confinement is a tradition deeply ingrained in Chinese society. New mothers adopt a routine different from day-to-day living for an entire month to aid recovery. As they are restricted from doing any sort of household chores, some will even hire a confinement nanny to make TCM-approved soups and recipes, care for the baby and ensure that both are well supported throughout the month.
What is the purpose of confinement?
- Full rest and recuperation for both mother and baby
- Allowing the woman's reproductive organs to recover
- Special "confinement foods" are made to provide nourishment and facilitate the production of breast milk
- Protection against common ailments associated with post-delivery recovery
Read more: Chinese Confinement
How does TCM help mothers during confinement?
During the confinement period, TCM is used to boost post-natal recovery and replenish blood, fluids and Qi. Specially selected ingredients and herbs are also used to promote the supply of breast milk and iron in the blood. These include the following:
- Dang Gui: used to replenish the blood supply
- Chuan Xiong: improves blood circulation around the body
- Dang Shen: increases bodily fluids and Qi
- Huang Qi: supports the body's immune system and strengthens the Spleen and Stomach for a stronger digestive system
TO WRAP UP
Through the use of acupuncture, lifestyle advice and herbal formulas, Traditional Chinese Medicine can assist women through every stage of their life to ensure prolonged health and wellness.
By Iris Finkel "No republic is safe that tolerates a privileged class, or denies to any of its citizens equal rights and equal means to maintain them." —Frederick Douglass The year is 1895, and you are one of a select few who have been invited by Booker T. Washington to listen to him rehearse the speech that he will be delivering to a large audience at the Cotton States and International Exposition in Atlanta in a few days. After Frederick Douglass, who passed away earlier in the year, Mr. Washington is the leading African American figure in the South and beyond. You anxiously anticipate what the founder of Tuskegee will say, being acutely aware of the consequences his message will have on the future of the "Negro race." Your parents, former slaves, support your progressive views, but they remain comfortable in the roles they were left with after abolition and they accept Mr. Washington's sentiment on conciliation. They wonder about the young W. E. B. Du Bois of whom you have spoken. Dr. Du Bois will also be among those who will listen to Mr. Washington rehearse his speech. Later, you will have a chance, with others in attendance, to give your honest perspective on the speech. You are excited to take part in what is expected to be a historic event. The "you" in the previous paragraph is a role in a classroom game. In this game, students participate in one of the most critical incidents in African American history, one that some believe was responsible for securing Jim Crow laws in the South for another 60 years. The people involved in these incidents were larger-than-life historical figures who held strong views of what was best for African Americans and who had the recognition and support of those in power to carry out their goals. While acting in these roles, students learn to understand how we make choices and the motivations for those choices, and how to communicate views effectively. They get to experience the many forces and tensions of the time, and recognize how those tensions continue to influence American history today, particularly African American history. Reacting to the Past The Atlanta Compromise Game, which I developed, is modeled in the style of "Reacting to the Past" games. In these games, students research and then take on roles of people of the time, attempting to carry out their agendas and persuading others in the process. Through this pedagogical model, students learn a host of skills—speaking, writing, critical thinking, problem solving, leadership, and teamwork—in order to prevail in difficult and complicated situations. They must communicate their ideas persuasively in papers and in-class speeches and meetings, pursuing a course of action they think will help them win the game. The Reacting to the Past games (RTTP) were conceived of and pioneered in the late 1990s by Mark C. Carnes, professor of history at Barnard College. Professor Carnes learned from observations and discussions with students in his first year seminar courses that students were not comfortable discussing course content in class for fear of being judged by their classmates and teacher on a possible lack of comprehension. He also found that students felt a lack of connection to the required texts; they viewed the texts as "intellectual hurdles to be cleared" (Carnes 2004). Carnes concluded that if students were inhibited by their insecurities and not connecting, they might engage more if they assumed the identity of a participant in an event and if he took a more passive role in the process.
Scholars at Barnard College conducted a study on RTTP pedagogy from 1999 to 2005. Steven J. Stroessner, Laurie Susser Beckerman, and Alexis Whittaker, all of Barnard's Department of Psychology, invited students to participate in a survey on first year seminar courses in general, without revealing the intention of the study. The scholars were looking primarily at psychological factors and writing and rhetoric skills. Their survey results revealed that students had higher self-esteem, greater empathy, and the belief that people can change over time when participating in RTTP courses (617).
The Atlanta Compromise Game
Reacting to the Past documents typically include an instructor manual, a game manual for students, and role sheets. The following is a composite of these in the form of an instructor guide for the Atlanta Compromise Game. This section can be excerpted and adapted for students. Brief roles for up to fifteen players are included here, or students can make up their own roles as part of the game. The game is appropriate for students in an upper level advanced placement high school history course or in a first-year college seminar. It has been developed to take place over a series of four one-hour class sessions. The famous speech that Booker T. Washington gave in Atlanta in 1895 is a critical part of American history with repercussions that reverberate today, over one hundred years later. The speech marks the beginning of the temporal setting of the game. Two major events set the stage for two counterfactual events played out in the game. The first of these is a meeting among a group of people invited by Booker T. Washington to provide their views on a rehearsal of the speech that he intends to deliver in a few days at the notable Cotton States and International Exposition in 1895. The second is a meeting called by W.E.B. Du Bois to discuss the Supreme Court's Plessy v. Ferguson ruling less than one year later. Students, acting in the roles of personalities of the time (most historical, some fictional), interact with one another without formal instruction. After the first class, in which the instructor sets up the game, discusses the historical context, and hands out roles and the first assignment, students will take over in their roles to discuss the speech in the second class session. Continuing in their roles, students will discuss Plessy v. Ferguson in the third class session. In Reacting to the Past games, the instructor takes on the role of gamemaster, operating as a participant and secretary, handing out roles and assignments, announcing voting, and collecting the votes. As instructor, you are a passive observer the rest of the time. In the fourth and final class, you facilitate debriefing and discuss contemporary events that relate to issues raised in the game. The objective of the game is for students, acting in their roles, to defend the faction they are aligned with persuasively enough to win the votes of those who are undecided. The roles that students play in the game align with one of three factions: support, oppose, and undecided. Supporters accept the separate but equal mindset perpetuated through Jim Crow laws. Those in opposition are against the laws and do not accept that African Americans can live equally if separate. Finally, the undecided are torn between accepting Booker T. Washington's message of conciliation and acceptance and joining the opposition.
The support faction and the oppose faction work to persuade the undecided faction to join them. The roles of those in the two decisive factions are historical figures. Personas of fictional students attending Tuskegee Institute, like the "you" introduced at the beginning of the chapter, represent the undecided. One student character is based on a young man, William F. Fonvielle, whose account of travelling through the South in 1892, "The South as I Saw It," was published in the A.M.E. Zion Quarterly, a magazine established in North Carolina in 1890 that was, from its full title, "Designed to Represent, Religious Thought, Development and General Character of the Afro-American Race in America." Fonvielle is aligned with the opposition faction, but this information should only be known to the student in that role. Those in the decisive factions know only that they must persuade the students to take their side. There are brief bios for up to 15 roles provided later in this chapter. (See the game roles handout.) For larger classes, students can create roles that will be voted on for inclusion in the game. An odd number of players is needed so that voting cannot end in a tie. Primary sources serve as the research materials. A suggested reading list is included after the game play section of this guide.
Materials:
- Blank index cards
- Pens or pencils
- Class blog or Google document for students to post their opinion pieces for peer review
Prepare to play
Start the class by introducing the Reacting to the Past role playing approach. Explain that students research and then take on roles of the people of that time, attempting to carry out their agendas and persuading others in the process. Follow with a discussion of the historical background and the setting for game play, encouraging student participation (40 minutes). Hand out the typed version of the speech and role sheets, and play the recording of the beginning of Booker T. Washington reading the speech (10 minutes). Transition to the role of secretary to hand out roles and give the assignment for the upcoming meeting.
Historical background
In 1863, Abraham Lincoln issued the Emancipation Proclamation as an executive order, declaring freedom for over three million enslaved people. The 13th Amendment to the Constitution formally abolished slavery: "Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction." The 14th Amendment granted citizenship to "all persons born or naturalized in the United States," which included former slaves recently freed. In addition, it forbade states from denying any person "life, liberty or property, without due process of law" or "deny[ing] to any person within its jurisdiction the equal protection of the laws." The 15th Amendment went further, granting African American men the right to vote, declaring that the "right of citizens of the United States to vote shall not be denied or abridged by the United States or by any state on account of race, color, or previous condition of servitude." Reconstruction went into effect to aid four million newly freed men by providing education, workforce training, and land, as well as to encourage African Americans to live as equals. However, state governments in the South, dissatisfied with the change of status for all, established their own legal "black codes" that enforced separation of the races. This was under the guise of equal rights for all, separately.
These codes, known as Jim Crow laws, disenfranchised blacks. Booker T. Washington, born a slave in 1856, persevered through financial hardship to attend the Hampton Institute (today Hampton University). While there, he befriended the school's white founder, General Samuel C. Armstrong. Recognizing Washington's potential, General Armstrong recommended him to head a new school in Alabama. In 1881, Washington became the head of the Tuskegee Normal and Industrial Institute, adding to his developing reputation as an African American leader. By 1895, Washington was an established representative for African Americans in the South. He was writing and speaking widely about the evolution and progress of his community, but less on the unceasing oppression that southern African Americans still suffered. In the spring of 1895, he was invited to accompany a committee of nearly all White men from Atlanta to appear before a committee of all White members of Congress to appeal for help for the upcoming Exposition. During his time speaking, as he recounted in Up From Slavery, he announced that "the Atlanta Exposition would present an opportunity for both races to show what advance they had made since freedom, and would at the same time afford encouragement to them to make still greater progress" (101). Two years after an 1890 Louisiana statute that provided for segregated "separate but equal" railroad accommodations, Homer Plessy, a fair-skinned African American, was arrested for violating the statute; he had deliberately tested the law, convinced that it was unconstitutional. He was found guilty on the grounds that the law was a reasonable exercise of the state's police powers based on custom, usage, and tradition. Presiding at the trial was John H. Ferguson. After the verdict, Plessy filed a petition for writs of prohibition and certiorari in the Supreme Court of Louisiana against Ferguson, asserting that segregation stigmatized blacks and was in violation of the 13th and 14th Amendments. When Booker T. Washington delivered his speech, this important case was still pending.
Develop your role using primary source materials, when available. In your role, think about the motivations behind your alignment to your faction. Read and reflect on Booker T. Washington's speech. Write a position from the perspective of your role. Respond to the following prompts:
- Do you support the speech? Why or why not?
- What would you suggest that Mr. Washington add to or remove from the speech? Why?
- If you are playing Booker T. Washington: Why are you committed to the belief that conciliation is the answer? Is there anything you would like to add or remove? If so, why?
- Do you support Jim Crow laws? Develop a persuasive argument for why you do or do not support segregation.
Day two will take place in a meeting room at the Tuskegee Institute. Arrange desks/tables and chairs to simulate a meeting room. Have name cards available for students to pick up as they enter class. They will seat themselves, placing the cards in front of them. The group convenes to discuss their views on the speech. Each person must state their position. Others can enter discussion to counter the position and then state their own. Booker T. Washington will state his position last. As gamemaster, you should take note of time and urge each person to speak, particularly if one person attempts to control the discussion. Allow most of the class time for group discussion.
Fifteen minutes before the end of class, in your role as secretary, stand up to say that the meeting will be ending in five minutes. Call the end of the meeting. Hand out index cards for voting. Everyone casts a (mandatory) vote based on the persuasiveness of the positions presented on separate but equal. Those aligned with a decisive faction are expected not to betray their faction. Undecided votes will determine which faction wins this round. Collect the cards, count the votes, and call out the winning faction. Hand out the assignment for the next session.
"Wilberforce, 24, Sept., '95 My Dear Mr Washington: Let me heartily congratulate you upon your phenomenal success at Atlanta -- it was a word fitly spoken. W. E. B. Du Bois"
The time is now eight months later. After the meeting discussing the speech that Booker T. Washington was to give at the Exposition, Washington thanked you in a personal note for your attendance and for your opinion. He added that the words he initially wrote were the ones he felt most deeply and that inspired him to deliver those same words at the Exposition on September 18, 1895. The ruling in Plessy v. Ferguson is handed down on May 18, 1896. Angered, W. E. B. Du Bois decides to invite all those in attendance at the meeting before Booker T. Washington's speech to discuss the ruling. Look for an article on the ruling in a historical news source. In the role you are playing, write an opinion piece about the ruling for a newspaper, persuasively stating why you support or oppose it. In this alternate version of 1896, a class blog will serve as the newspaper publishing your opinion. Each person will read the others' opinion pieces before class. Additionally, prepare notes on what you will discuss at the meeting. Write down the comments on the ruling that you would like to raise at the meeting.
Background on the case: After Louisiana passed the Separate Car Act in 1890, enacted to allow rail carriers to segregate train cars, the Comité des Citoyens (Committee of Citizens) of New Orleans decided to challenge the law in the courts. On June 7, 1892, Homer Plessy, a fair-skinned "octoroon" (a person of one-eighth Black ancestry), purchased a first-class ticket and sat in a white-only car. He was arrested and jailed for remaining in the car. The case was brought to trial in a New Orleans court, and Plessy was convicted of violating the law. He then filed a petition against the judge in that trial, Hon. John H. Ferguson, at the Louisiana Supreme Court, arguing that the segregation law violated the Equal Protection Clause of the Fourteenth Amendment, which forbids states from denying "to any person within their jurisdiction the equal protection of the laws," as well as the Thirteenth Amendment, which banned slavery. The Court upheld the doctrine of "Separate but Equal" and ruled against Plessy. Read the case here.
Day three will take place in a meeting room at Wilberforce University in Ohio. Your responsibilities as gamemaster will be the same as they were during day two, but you are now W. E. B. Du Bois' secretary. End the game with a quote from Du Bois' essay "Of Booker T. Washington and Others," published in Souls of Black Folk: "In the history of nearly all other races and peoples the doctrine preached at such crises has been that manly self-respect is worth more than lands and houses, and that a people who voluntarily surrender such respect, or cease striving for it, are not worth civilizing.
In answer to this, it has been claimed that the Negro can survive only through submission. Mr. Washington distinctly asks that black people give up, at least for the present, three things,— First, political power, Second, insistence on civil rights, Third, higher education of Negro youth,— and concentrate all their energies on industrial education, the accumulation of wealth, and the conciliation of the South. This policy has been courageously and insistently advocated for over fifteen years, and has been triumphant for perhaps ten years. As a result of this tender of the palm-branch, what has been the return? In these years there have occurred: The disfranchisement of the Negro. The legal creation of a distinct status of civil inferiority for the Negro. The steady withdrawal of aid from institutions for the higher training of the Negro" (53).
Write a reflection on your participation in the game, including:
- Your general feelings about portraying your role.
- Your connection to the motivations behind the person you portrayed.
- Your connection to the motivations of your faction.
- Your perception of the historical events viewed through the role you played.
Read the following:
- Du Bois, W. E. B. 1903. "Of Booker T. Washington and Others." Souls of Black Folk. Project Gutenberg.
- Brown v. Board of Education ruling. 1954. Topeka 347 U.S. 483.
- Coates, Ta-Nehisi. 2009. "The Tragedy and Betrayal of Booker T. Washington." The Atlantic. 31 Mar 2009.
Debriefing and discussion of readings: Discuss reflections. Encourage everyone to participate in this discussion. Those who do not participate will turn in their reflections. Discuss the readings. Suggestion for discussion: Consider a potential catalyst that could have rid the South of Jim Crow before 1954, when the separate but equal doctrine was overturned by Brown v. Board of Education.
Optional: Timeline game
For a class of fifteen students, create three identical timelines for three teams of five players. Use the example below for increments. Write one event from the list on each sticky note. Make three sets. Hang the timelines around the room and give each group a set of events. Each player takes two to three events to place on the timeline. The first team to place its events correctly along the timeline wins.
- Booker T. Washington's speech at the Cotton Exposition (1895)
- Plessy v. Ferguson ruling (1896)
- Souls of Black Folk published (1903)
- Harlem Renaissance (1920s–1930s)
- Brown v. Board of Education (1954)
- Montgomery Bus Boycott (1956)
- Mississippi civil rights workers' murders (June 21–22, 1964)
- Civil Rights Act (enacted July 2, 1964)
- Rodney King's beating by the LAPD and the subsequent LA riots (1991–1992)
- Barack Obama's first term as President of the United States (2008)
- Use of the #blacklivesmatter hashtag on social media (2013)
- Ferguson protests after Michael Brown was killed by a white police officer (2014)
Suggested Reading List
"The Atlanta Exposition: President Cleveland Starts the Machinery in Motion." 1895. The New York Times: 19 Sept. 1895.
"Separate Coach Law Upheld: The Supreme Court Decides a Case from Louisiana." 1896. The Washington Post 19 May 1896: 6. Available in ProQuest Historical Newspapers.
Douglass, Frederick. 1866. "Reconstruction." The Atlantic Monthly.
Du Bois, W. E. B. 1899. "A Negro Schoolmaster in the South." The Atlantic Monthly.
---. 1903. "Of Booker T. Washington and Others." Souls of Black Folk. Project Gutenberg.
Fonvielle, W. F. "The South As I Saw It." A.M.E. Zion Church Quarterly (1894): 149-58. African American Historical Serials Collection. Web.
Wells-Barnett, Ida B. A Red Record Tabulated Statistics and Alleged Causes of Lynchings in the United States, 1892-1893-1894. Chicago: Donohue & Henneberry, 1895. Project Gutenberg. Washington, Booker T. 1896. "The Awakening of the Negro." The Atlantic Monthly. ---. 1896. "Aims to Uplift a Race: Booker T. Washington as The Negro's Industrial Benefactor." The Washington Post 21 June 1896: 25. ProQuest Historical Newspapers. ---. 1901. Up From Slavery. Project Gutenberg. Washington, Booker T, and Du Bois, W. E. B. 1907. The Negro in the South, His Economic Progress in Relation to His Moral and Religious Development: Being the William Levi Bull Lectures for the Year 1907. Philadelphia: G.W. Jacobs & Co. Washington, Booker T, Louis R. Harlan, and Raymond Smock. 1889. The Booker T. Washington Papers. Volume 3 1889-95. p 567-589. Urbana: University of Illinois Press, 1972. Carnes, Mark. “Setting Students’ Minds on Fire.” The Chronicle of Higher Education, October 8, 2004. Web. Reacting to the Past. Barnard College. 2016. https://reacting.barnard.edu/curriculum Stroessner, Steven J., Laurie Susser Beckerman, and Alexis Whittaker. “All the World’s a Stage? Consequences of a Role-Playing Pedagogy on Psychological Factors and Writing and Rhetorical Skill in College Undergraduates.” Journal of Educational Psychology 101, no. 3 (2009): 605–620.
John II of France
- King of France
- Reign: 22 August 1350 – 8 April 1364
- Coronation: 26 September 1350
- Regent: Dauphin Charles (1356–1360)
- Born: 26 April 1319, Le Mans, France
- Died: 8 April 1364 (aged 44), Savoy Palace, London, England
- Burial: 7 May 1364
- Issue: Charles V of France; Louis I, Duke of Anjou; John, Duke of Berry; Philip II, Duke of Burgundy; Joan, Queen of Navarre; Marie, Duchess of Bar; Isabella, Countess of Vertus
- Father: Philip VI of France
- Mother: Joan of Burgundy

John II (French: Jean II; 26 April 1319 – 8 April 1364), called John the Good (French: Jean le Bon), was King of France from 1350 until his death in 1364. When he came to power, France faced several disasters: the Black Death, which killed nearly 40% of its population; popular revolts known as Jacqueries; free companies (Grandes Compagnies) of routiers who plundered the country; and English aggression that resulted in catastrophic military losses, including the Battle of Poitiers of 1356, in which John was captured.

While John was a prisoner in London, his son Charles became regent and faced several rebellions, which he overcame. To liberate his father, he concluded the Treaty of Brétigny (1360), by which France lost many territories and paid an enormous ransom. In an exchange of hostages, which included his second son Louis, Duke of Anjou, John was released from captivity to raise funds for his ransom. Upon his return to France, he created the franc to stabilize the currency and tried to rid the country of the free companies by sending them on a crusade, but Pope Innocent VI died shortly before their meeting in Avignon. When John was informed that Louis had escaped from captivity, he voluntarily returned to England, where he died in 1364. He was succeeded by his son Charles V.

John was nine years old when his father was crowned as Philip VI of France. Philip VI's ascent to the throne was unexpected: previously, several of Philip IV's sons and successors had died without sons and heirs themselves, and so, because of Salic law, all female descendants of John's great-uncle Philip the Fair were passed over. The succession was also disputed because it bypassed the claim of a closer relative of Philip the Fair: Edward III of England, his grandson through his daughter Isabella. Thus, as the new King of France, John's father Philip VI had to consolidate his power in order to protect his throne from rival claimants; he therefore decided to marry off his son John quickly, at the age of thirteen, to form a strong matrimonial alliance, at the same time conferring upon him the title of Duke of Normandy.

Search for a wife and first marriage
Initially a marriage with Eleanor of Woodstock, sister of King Edward III of England, was considered, but instead Philip invited John of Luxembourg, King of Bohemia, to Fontainebleau. Bohemia had aspirations to control Lombardy and needed French diplomatic support, and a treaty was drawn up. Its military clauses stipulated that, in the event of war, Bohemia would support the French army with four hundred infantrymen. Its political clauses ensured that the Lombard crown would not be disputed if the King of Bohemia managed to obtain it. Philip selected Bonne of Bohemia as a wife for his son, as she was closer to child-bearing age (16 years), and the dowry was fixed at 120,000 florins. John reached the age of majority, 13 years and one day, on 27 April 1332, and received overlordship of the duchy of Normandy, as well as the counties of Anjou and Maine.
The wedding was celebrated on 28 July at the church of Notre-Dame in Melun in the presence of six thousand guests. The festivities were prolonged by a further two months when the young groom was finally knighted at the cathedral of Notre-Dame in Paris. As the new Duke of Normandy, John was solemnly granted the arms of a knight in front of a prestigious assembly bringing together the kings of Bohemia and Navarre and the dukes of Burgundy, Lorraine and Brabant.

Duke of Normandy
Accession and rise of the English and the royalty
Upon his accession as Duke of Normandy in 1332, John had to deal with the reality that most of the Norman nobility was already allied with the English camp. In practice, Normandy depended economically more on maritime trade across the English Channel than on river trade along the Seine. The duchy had not been English for 150 years, but many landowners had holdings across the Channel, and lining up behind one or the other sovereign therefore risked confiscation. The Norman nobility consequently organised itself into interdependent clans, which allowed it to obtain and maintain charters guaranteeing the duchy a measure of autonomy. The nobility was split into two key camps, the counts of Tancarville and the counts of Harcourt, which had been in conflict for generations. Tension arose again in 1341. King Philip, worried about the richest area of the kingdom breaking into bloodshed, ordered the bailiffs of Bayeux and Cotentin to quell the dispute. Geoffroy d'Harcourt raised troops against the king, rallying a number of nobles protective of their autonomy and against royal interference. The rebels demanded that Geoffroy be made duke, thus guaranteeing the autonomy granted by the charter. Royal troops took the castle at Saint-Sauveur-le-Vicomte, and Geoffroy was exiled to Brabant. Three of his companions were decapitated in Paris on 3 April 1344.

Meeting with the Avignon Papacy and the King of England
In 1342, John was in Avignon, then the seat of the papacy, at the coronation of Pope Clement VI, and in the latter part of 1343, he was a member of a peace parley with Edward III of England's chancery clerk. Clement VI was the fourth of seven Avignon popes whose papacy was not contested, although the supreme pontiffs would ultimately return to Rome in 1378.

Relations with the Normans and rising tensions
By 1345, increasing numbers of Norman rebels had begun to pay homage to Edward III, constituting a major threat to the legitimacy of the Valois kings. The defeat at the Battle of Crécy on 26 August 1346 and the capitulation of Calais on 3 August 1347, after an eleven-month siege, further damaged royal prestige. Defections by the nobility, whose lands fell within the broad economic influence of England, particularly in the north and west, increased. Consequently, King Philip VI decided to seek a truce. Duke John met Geoffroy d'Harcourt, to whom the king agreed to return all confiscated goods, even appointing him sovereign captain in Normandy. John then approached the Tancarville family, whose loyalty could ultimately ensure his authority in Normandy. The marriage of John, Viscount of Melun, to Jeanne, the only heiress of the county of Tancarville, ensured that the Melun-Tancarville party remained loyal to John, while Geoffroy d'Harcourt continued to act as defender of Norman freedoms and thus of the reforming party.
Black Death and second marriage
On 11 September 1349, John's wife, Bonne of Bohemia (Bonne de Luxembourg), died at Maubuisson Abbey near Paris of the Black Death, which was devastating Europe. To escape the pandemic, John, who had been living in the Parisian royal residence, the Palais de la Cité, left Paris. On 19 February 1350, five months after the death of his first wife, John married Joan I, Countess of Auvergne, at the royal Château de Sainte-Gemme (which no longer exists) at Feucherolles, near Saint-Germain-en-Laye.

King of France
Philip VI, John's father, died on 22 August 1350, and John's coronation as John II, King of France, took place in Reims the following 26 September. Joanna, his second wife, was crowned Queen of France at the same time. In November 1350, King John had Raoul II of Brienne, Count of Eu, seized and summarily executed, for reasons that remain unclear, although it was rumoured that he had pledged the English the County of Guînes for his release.

In 1354, John's son-in-law and cousin Charles II of Navarre, who, in addition to his kingdom of Navarre in the Pyrenees on the border between France and Spain, also held extensive lands in Normandy, was implicated in the assassination of the Constable of France, Charles de la Cerda, King John's favourite. Nevertheless, in order to have a strategic ally against the English in Gascony, John signed the Treaty of Mantes with Charles on 22 February 1354. The peace between the two did not last, and Charles eventually struck up an alliance with Henry of Grosmont, the first Duke of Lancaster. The following year, on 10 September 1355, John and Charles signed the Treaty of Valognes, but this second peace lasted hardly any longer than the first. Matters culminated in a highly dramatic scene: during a banquet on 5 April 1356 at the royal castle in Rouen, attended by the King's son Charles, Charles II of Navarre and a number of Norman magnates and notables, the French king burst through the door in full armour, sword in hand, accompanied by an entourage that included his brother Philip, his younger son Louis and his cousins, with over a hundred fully armed knights waiting outside. He lunged over and grabbed Charles of Navarre, shouting, "Let no one move if he does not want to be dead with this sword." With his son the Dauphin Charles, the banquet's host, on his knees pleading for him to stop, the King seized Navarre by the throat and pulled him out of his chair, yelling in his face, "Traitor, you are not worthy to sit at my son's table!" He then ordered the arrest of all the guests, including Navarre, and, in what many considered a rash move as well as a political mistake, he had John, Count of Harcourt, and several other Norman lords and notables summarily executed later that night in a nearby yard while he stood watching. This act, driven largely by revenge for the premeditated plot of Charles of Navarre and John of Harcourt that had killed John's favourite, Charles de la Cerda, pushed much of the King's remaining support among the Norman lords over to King Edward and the English camp, setting the stage for the English invasion and the resulting Battle of Poitiers in the months to come.

Battle of Poitiers
In 1355, the Hundred Years' War flared up again, and in July 1356, Edward, the Black Prince, son of Edward III of England, took an army on a great chevauchée through France. John pursued him with an army of his own. In September the two forces met a few miles southeast of Poitiers.
John was confident of victory—his army was probably twice the size of his opponent's—but he did not immediately attack. While he waited, the papal legate went back and forth, trying to negotiate a truce between the leaders. There is some debate over whether the Black Prince wanted to fight at all: he offered his wagon train, which was heavily loaded with loot, and he promised not to fight against France for seven years. Some sources claim that he even offered to return Calais to the French crown. John countered by demanding that 100 of the Prince's best knights surrender themselves to him as hostages, along with the Prince himself. No agreement could be reached, negotiations broke down, and both sides prepared for combat. On the day of the battle, John and 17 knights from his personal guard dressed identically. This was done to confuse the enemy, who would do everything possible to capture the sovereign on the field. In spite of this precaution, after the massive force of French knights had been destroyed and routed by the ceaseless English longbow volleys, John was captured as the English force charged to complete its victory. Though he fought with valour, wielding a large battle-axe, his helmet was knocked off. Surrounded, he fought on until Denis de Morbecque, a French exile who fought for England, approached him. "Sire," Morbecque said, "I am a knight of Artois. Yield yourself to me and I will lead you to the Prince of Wales."

Surrender and capture
King John surrendered by handing him his glove. That night King John dined in the red silk tent of his enemy, and the Black Prince attended to him personally. He was then taken to Bordeaux, and from there to England. The Battle of Poitiers was one of the major military disasters not just for France but of the entire Middle Ages. While a peace accord was being negotiated, John was at first held in the Savoy Palace, then at a variety of locations, including Windsor, Hertford, Somerton Castle in Lincolnshire, Berkhamsted Castle in Hertfordshire, and briefly at King John's Lodge, formerly known as Shortridges, in East Sussex. Eventually, John was taken to the Tower of London.

Prisoner of the English
As a prisoner of the English, John was granted royal privileges that permitted him to travel about and enjoy a regal lifestyle. At a time when law and order were breaking down in France and the government was struggling to raise money for the defence of the realm, his account books from his captivity show that he was purchasing horses, pets and clothes while maintaining an astrologer and a court band.

Treaty of Brétigny
The Treaty of Brétigny (drafted in May 1360) set his ransom at an astounding 3 million crowns, roughly two or three years' worth of revenue for the French Crown, then the largest national budget in Europe. On 30 June 1360, John left the Tower of London and proceeded to Eltham Palace, where Queen Philippa had prepared a great farewell entertainment. Passing the night at Dartford, he continued towards Dover, stopping at the Maison Dieu of St Mary at Ospringe and paying homage at the shrine of St Thomas Becket at Canterbury on 4 July. He dined with the Black Prince—who had negotiated the Treaty of Brétigny—at Dover Castle, and reached English-held Calais on 8 July. Leaving his son Louis of Anjou in Calais as a replacement hostage, John was allowed to return to France to raise the funds. The Treaty of Brétigny was ratified in October 1360.
Louis' escape and return to England
On 1 July 1363, King John was informed that Louis had escaped. Troubled by the dishonour of this action, and by the arrears in his ransom, John did something that shocked and dismayed his people: he announced that he would voluntarily return to captivity in England. His council tried to dissuade him, but he persisted, citing reasons of "good faith and honour." He sailed for England that winter and left the impoverished citizens of France again without a king. John was greeted in London in 1364 with parades and feasts. A few months after his arrival, however, he fell ill with an unknown malady. He died at the Savoy Palace in April 1364. His body was returned to France, where he was interred in the royal chambers at Saint Denis Basilica.

John suffered from fragile health. He engaged little in physical activity, practised jousting rarely, and only occasionally hunted. Contemporaries report that he was quick to anger and to resort to violence, leading to frequent political and diplomatic confrontations. He enjoyed literature and was a patron to painters and musicians. The image of a "warrior king" probably emerged from the courage in battle he showed at the Battle of Poitiers and from his creation of the Order of the Star. This was guided by political need, as John was determined to prove the legitimacy of his crown, particularly as his reign, like that of his father, was marked by continuing disputes over the Valois claim from both Charles II of Navarre and Edward III of England. From a young age, John was called on to resist the decentralising forces affecting the cities and the nobility, each attracted either by English economic influence or by the reforming party. He grew up among intrigue and treason, and in consequence he governed in secrecy, only with a close circle of trusted advisers.

He took as his wife Bonne of Bohemia and fathered eleven children in eleven years. Due to his close relationship with Charles de la Cerda, rumours were spread by Charles II of Navarre of a romantic attachment between the two. La Cerda was given various honours and appointed to the high position of connétable when John became king; he accompanied the king on all his official journeys to the provinces. La Cerda's rise at court excited the jealousy of the French barons, several of whom stabbed him to death in 1354. La Cerda's fate paralleled that of Edward II of England's Piers Gaveston and John II of Castile's Álvaro de Luna; the position of a royal favourite was a dangerous one. John's grief at La Cerda's death was overt and public.

Children with his first wife, Bonne of Bohemia (House of Valois):
- Charles V of France (21 January 1338 – 16 September 1380)
- Catherine (1338–1338), died young
- Louis I, Duke of Anjou (23 July 1339 – 20 September 1384), married Marie of Blois
- John, Duke of Berry (30 November 1340 – 15 June 1416), married Jeanne of Auvergne
- Philip II, Duke of Burgundy (17 January 1342 – 27 April 1404), married Margaret of Flanders
- Joan (24 June 1343 – 3 November 1373), married Charles II (the Bad) of Navarre
- Marie (12 September 1344 – October 1404), married Robert I, Duke of Bar
- Agnes (9 December 1345 – April 1350)
- Margaret (20 September 1347 – 25 April 1352)
- Isabelle (1 October 1348 – 11 September 1372), married Gian Galeazzo I, Duke of Milan

On 19 February 1350, at the royal Château de Sainte-Gemme, John married Joanna I of Auvergne (d. 1361), Countess of Auvergne and Boulogne.
Joanna was the widow of Philip of Burgundy, the deceased heir of that duchy, and the mother of the young Philip I, Duke of Burgundy (1344–61), who became John's stepson and ward. John and Joanna had three children, all of whom died shortly after birth:
- Blanche (b. November 1350)
- Catherine (b. early 1352)
- a son (b. early 1353)

John II was succeeded by his son, Charles, who reigned as Charles V of France, known as The Wise.
Intel 486
- Discontinued: September 28, 2007
- Designed by: Intel, with Pat Gelsinger as lead architect
- Max. CPU clock rate: 16 to 100 MHz
- FSB speeds: 16 MHz to 50 MHz
- Data width: 32 bits
- Address width: 32 bits
- Virtual address width: 32 bits (linear); 46 bits (logical)
- L1 cache: 8 KB to 16 KB
- Technology node: 1 µm to 0.6 µm
- Instruction set: x86 including x87 (except for "SX" models)

The Intel 486, officially named i486 and also known as 80486, is a microprocessor. It is a higher-performance follow-up to the Intel 386. The i486 was introduced in 1989. It represents the fourth generation of binary-compatible CPUs, following the 8086 of 1978, the Intel 80286 of 1982, and 1985's i386.

A typical 50 MHz i486 executes around 40 million instructions per second (MIPS), reaching 50 MIPS peak performance, and it is approximately twice as fast as the i386 or 80286 per clock cycle. The i486's improved performance comes from its five-stage pipeline, with all stages bound to a single cycle. The enhanced on-chip FPU was also significantly faster per cycle than the i387. (The Intel 80387 FPU, or "i387", was a separate, optional math coprocessor installed in a motherboard socket alongside the i386.) The i486 was succeeded by the original Pentium.

The i486 was announced at Spring Comdex in April 1989. At the announcement, Intel stated that samples would be available in the third quarter and production quantities would ship in the fourth quarter. The first i486-based PCs were announced in late 1989.

The first major update to the i486 design came in March 1992 with the release of the clock-doubled 486DX2 series. It was the first time that the CPU core clock frequency was separated from the system bus clock frequency by using a dual clock multiplier, supporting 486DX2 chips at 40 and 50 MHz. The faster 66 MHz 486DX2-66 was released that August. The fifth-generation Pentium processor launched in 1993, while Intel continued to produce i486 processors, including the triple-clock-rate 486DX4-100 with a 100 MHz clock speed and an L1 cache doubled to 16 KB.

Earlier, Intel had decided not to share its 80386 and 80486 technologies with AMD. However, AMD believed that the companies' technology-sharing agreement covered the 80386 as a derivative of the 80286. AMD reverse-engineered the Intel 386 chip and produced the 40 MHz Am386DX-40, which was cheaper and had lower power consumption than Intel's best 33 MHz version. Intel attempted to prevent AMD from selling the processor, but AMD won in court, which allowed it to establish itself as a competitor. AMD continued to create clones, releasing the first-generation Am486 chip in April 1993 with clock frequencies of 25, 33 and 40 MHz. Second-generation Am486DX2 chips with 50, 66 and 80 MHz clock frequencies were released the following year. The Am486 series was completed with a 120 MHz DX4 chip in 1995.

AMD's long-running arbitration case against Intel, begun in 1987, was settled in 1995, and AMD gained access to Intel's 80486 microcode. This led to two versions of AMD's 486 processor: one based on Intel's microcode and one based on AMD's own microcode, developed in a cleanroom process. However, the settlement also established that the 80486 would be AMD's last Intel clone.

Another 486 clone manufacturer was Cyrix, a fabless maker of coprocessor chips for 80286/386 systems. The first Cyrix 486 processors, the 486SLC and 486DLC, were released in 1992 and used the 80386 package.
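The clock-multiplier arithmetic behind the DX2 and DX4 parts mentioned above is simple: the core frequency is the front-side bus frequency times the internal multiplier, while memory and I/O still run at the bus speed. Here is a minimal sketch in C; the part names and figures are the ones quoted in this article, and the program itself is purely illustrative.

```c
#include <stdio.h>

/* Core clock = front-side bus clock x internal multiplier.
 * The bus (and therefore memory and I/O) still runs at the lower
 * speed, which is why a DX2-50 on a 25 MHz bus could end up slower
 * overall than a plain DX-50 on a 50 MHz bus. */
struct part {
    const char *name;
    double bus_mhz;  /* front-side bus clock */
    int mult;        /* internal multiplier  */
};

int main(void)
{
    struct part parts[] = {
        { "486DX-50",   50.0, 1 },  /* no multiplier             */
        { "486DX2-50",  25.0, 2 },  /* clock-doubled             */
        { "486DX2-66",  33.0, 2 },
        { "486DX4-100", 33.0, 3 },  /* a tripler, despite "DX4"  */
    };

    for (int i = 0; i < 4; i++)
        printf("%-11s core %5.1f MHz = bus %4.1f MHz x %d\n",
               parts[i].name,
               parts[i].bus_mhz * parts[i].mult,
               parts[i].bus_mhz, parts[i].mult);
    return 0;
}
```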
Both Texas Instruments-manufactured Cyrix processors were pin-compatible with 386SX/DX systems, which allowed them to become an upgrade option. However, these chips could not match the Intel 486 processors, having only 1 KB of cache memory and no built-in math coprocessor. In 1993, Cyrix released its own Cx486DX and DX2 processors, which were closer in performance to Intel's counterparts. Intel and Cyrix sued each other, Intel alleging patent infringement and Cyrix pressing antitrust claims. In 1994, Cyrix won the patent case and dropped its antitrust claim.

In 1995, both Cyrix and AMD began eyeing a ready market of users wanting to upgrade their processors. Cyrix released a derivative 486 processor called the 5x86, based on the Cyrix M1 core, which was clocked up to 120 MHz and was an option for 486 Socket 3 motherboards. AMD released a 133 MHz Am5x86 upgrade chip, essentially an improved 80486 with double the cache and a quad multiplier, which also worked with original 486DX motherboards. The Am5x86 was the first processor to use AMD's performance rating and was marketed as the Am5x86-P75, with claims that it was equivalent to a Pentium 75. Kingston Technology launched a "TurboChip" 486 system upgrade that used a 133 MHz Am5x86. Intel responded with a Pentium OverDrive upgrade chip for 486 motherboards, a modified Pentium core that ran at up to 83 MHz on boards with a 25 or 33 MHz front-side bus clock. The OverDrive was not popular, owing to its speed and price.

The 486 was declared obsolete as early as 1996, with a Florida school district's purchase of a fleet of 486DX4 machines that year sparking controversy. New computers equipped with 486 processors became scarce even in discount warehouses, and an IBM spokesperson called the chip a "dinosaur". Even after the Pentium series of processors gained a foothold in the market, however, Intel continued to produce 486 cores for industrial embedded applications. Intel discontinued production of i486 processors in late 2007.

The instruction set of the i486 is very similar to that of the i386, with the addition of a few extra instructions, such as CMPXCHG, a compare-and-swap atomic operation, and XADD, a fetch-and-add atomic operation that returns the original value (unlike a standard ADD, which overwrites its destination and returns only flags).

The i486's performance architecture is a vast improvement over the i386. It has an on-chip unified instruction and data cache, an on-chip floating-point unit (FPU) and an enhanced bus interface unit. Due to the tight pipelining, sequences of simple instructions (such as ALU reg,reg and ALU reg,im) could sustain single-clock-cycle throughput (one instruction completed every clock). These improvements yielded a rough doubling in integer ALU performance over the i386 at the same clock rate. A 16 MHz i486 therefore had performance similar to a 33 MHz i386, and the older design had to reach 50 MHz to be comparable with a 25 MHz i486 part.

Differences between i386 and i486
- An 8 KB on-chip (level 1) SRAM cache stores the most recently used instructions and data (16 KB and/or write-back on some later models). The i386 had no internal cache but supported a slower off-chip cache (not officially a level 2 cache, because the i386 had no internal level 1 cache).
- An enhanced external bus protocol to enable cache coherency and a new burst mode for memory accesses to fill a cache line of 16 bytes within five bus cycles. The 386 needed eight bus cycles to transfer the same amount of data.
- Tightly coupled[b] pipelining completes a simple instruction like ALU reg,reg or ALU reg,im every clock cycle (after a latency of several cycles). The i386 needed two clock cycles.
- Integrated FPU (disabled or absent in SX models) with a dedicated local bus; together with faster algorithms on more extensive hardware than in the i387, this performed floating-point calculations faster than the i386/i387 combination.
- Improved MMU performance.
- New instructions: XADD, BSWAP, CMPXCHG, INVD, WBINVD, INVLPG (the semantics of the atomic XADD and CMPXCHG are sketched below).

Just as in the i386, a flat 4 GB memory model could be implemented: all "segment selector" registers could be set to a neutral value in protected mode, or to zero in real mode, with only the 32-bit "offset registers" (x86 terminology for general CPU registers used as address registers) used as a linear 32-bit virtual address, bypassing the segmentation logic. Virtual addresses were then normally mapped onto physical addresses by the paging system, except when it was disabled. (Real mode had no virtual addresses.) Just as with the i386, circumventing memory segmentation could substantially improve performance for some operating systems and applications.

On a typical PC motherboard, either four matched 30-pin (8-bit) SIMMs or one 72-pin (32-bit) SIMM per bank were required to fit the i486's 32-bit data bus. The address bus used 30 bits (A31..A2), complemented by four byte-select pins (instead of A0, A1) to allow any 8/16/32-bit selection. This meant that the limit of directly addressable physical memory was 4 gigabytes as well (2^30 32-bit words = 2^32 8-bit words).

Intel offered several suffixes and variants (see table). Variants include:

- Intel RapidCAD: a specially packaged Intel 486DX and a dummy floating-point unit (FPU) designed as pin-compatible replacements for an i386 processor and 80387 FPU.
- i486SL-NM: i486SL based on the i486SX.
- i487SX (P23N): i486DX with one extra pin, sold as an FPU upgrade to i486SX systems; when the i487SX was installed, it required that an i486SX be present on the motherboard but disabled it, taking over all of its functions.
- i486 OverDrive (P23T/P24T): i486SX, i486SX2, i486DX2 or i486DX4. Marketed as upgrade processors, some models had different pinouts or voltage-handling abilities from "standard" chips of the same speed. Fitted to a coprocessor or "OverDrive" socket on the motherboard, they worked the same as the i487SX.

The maximal internal clock frequency (on Intel's versions) ranged from 16 to 100 MHz. The 16 MHz i486SX model was used by Dell Computer. One of the few i486 models specified for a 50 MHz bus, the 486DX-50, initially had overheating problems and was moved to the 0.8-micrometer fabrication process. However, problems continued when the 486DX-50 was installed in local-bus systems due to the high bus speed, making it unpopular with mainstream consumers, since local-bus video was considered a requirement at the time; it nevertheless remained popular with users of EISA systems. The 486DX-50 was soon eclipsed by the clock-doubled i486DX2, which, although it ran the internal CPU logic at twice the external bus speed (50 MHz), was nevertheless slower in bus-bound work because its external bus ran at only 25 MHz. The i486DX2 at 66 MHz (with a 33 MHz external bus) was faster than the 486DX-50 overall. More powerful i486 iterations such as the OverDrive and DX4 were less popular (the latter was available as an OEM part only), as they came out after Intel had released the next-generation Pentium processor family.
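Returning to the new atomic instructions listed above: their semantics are easiest to see through a higher-level wrapper. The hedged sketch below uses Java's AtomicInteger, whose compareAndSet and getAndAdd methods mirror CMPXCHG (compare-and-swap) and XADD (fetch-and-add, returning the old value); on x86 JVMs these typically compile down to LOCK CMPXCHG and LOCK XADD, though that mapping is a JVM implementation detail, not something the i486 documentation promises.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of CMPXCHG- and XADD-style semantics via java.util.concurrent.atomic.
public class I486Atomics {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);

        // CMPXCHG semantics: replace 10 with 20 only if the current value is still 10.
        boolean swapped = value.compareAndSet(10, 20);
        System.out.println("swapped=" + swapped + ", value=" + value.get()); // swapped=true, value=20

        // XADD semantics: add 5 and get the *original* value back
        // (a plain ADD instruction only updates the flags register).
        int old = value.getAndAdd(5);
        System.out.println("old=" + old + ", value=" + value.get()); // old=20, value=25
    }
}
```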
Certain steppings of the DX4 also officially supported 50 MHz bus operation, but it was a seldom-used feature.

| Model | Clock (internal/bus) | Voltage | L1 cache* | Introduced | Notes |
|---|---|---|---|---|---|
| i486DX (P4) | 20, 25 MHz | 5 V | 8 KB WT | April 1989 | The original chip, without a clock multiplier |
| i486SL | 20, 25, 33 MHz | 5 V or 3.3 V | 8 KB WT | November 1992 | Low-power version of the i486DX with reduced VCore, SMM (System Management Mode), stop clock, and power-saving features; mainly for use in portable computers |
| i486SX (P23) | 16, 20, 25 MHz | 5 V | 8 KB WT | September 1991 | An i486DX with the FPU part disabled; later versions had the FPU removed from the die to reduce area and hence cost |
| i486DX2 (P24) | 40/20, 50/25 MHz | 5 V | 8 KB WT | March 1992 | The internal processor clock runs at twice the external bus clock |
| i486DX-S (P4S) | 33 MHz; 50 MHz | 5 V or 3.3 V | 8 KB WT | June 1993 | SL Enhanced 486DX |
| i486DX2-S (P24S) | 40/20 MHz | 5 V or 3.3 V | 8 KB WT | June 1993 | SL Enhanced 486DX2 |
| i486SX-S (P23S) | 25, 33 MHz | 5 V or 3.3 V | 8 KB WT | June 1993 | SL Enhanced 486SX |
| i486SX2 | 50/25, 66/33 MHz | 5 V | 8 KB WT | March 1994 | i486DX2 with the FPU disabled |
| IntelDX4 (P24C) | 75/25, 100/33 MHz | 3.3 V | 16 KB WT | March 1994 | Designed to run at triple clock rate (not quadruple, as often believed; the DX3, meant to run at 2.5× the clock speed, was never released). DX4 models featuring write-back cache were identified by an "&EW" laser-etched into their top surface; write-through models were identified by "&E" |
| i486DX2WB (P24D) | 50/25 MHz | 5 V | 8 KB WB | October 1994 | Enabled write-back cache |
| IntelDX4WB | 100/33 MHz | 3.3 V | 16 KB WB | October 1994 | |
| i486DX2 (P24LM) | 90/30 MHz | 2.5–2.9 V | 8 KB WT | 1994 | |
| i486GX | up to 33 MHz | 3.3 V | 8 KB WT | | Embedded ultra-low-power CPU with all features of the i486SX and a 16-bit external data bus; for embedded battery-operated and hand-held applications |

*WT = write-through cache strategy, WB = write-back cache strategy

Other makers of 486-like CPUs

Processors compatible with the i486 were produced by companies such as IBM, Texas Instruments, AMD, Cyrix, UMC, and STMicroelectronics (formerly SGS-Thomson). Some were clones (identical at the microarchitectural level); others were clean-room implementations of the Intel instruction set. (IBM's multiple-source requirement was one of the reasons behind its x86 manufacturing since the 80286.) The i486 was, however, covered by many Intel patents, including some inherited from the prior i386. Intel and IBM had broad cross-licenses of these patents, and AMD was granted rights to the relevant patents in the 1995 settlement of a lawsuit between the companies.

AMD produced several clones using a 40 MHz bus (486DX-40, 486DX/2-80, and 486DX/4-120) which had no Intel equivalent, as well as a part specified for 90 MHz, using a 30 MHz external clock, that was sold only to OEMs. The fastest-running i486-compatible CPU, the Am5x86, ran at 133 MHz and was released by AMD in 1995. 150 MHz and 160 MHz parts were planned but never officially released.

Cyrix made a variety of i486-compatible processors, positioned at the cost-sensitive desktop and low-power (laptop) markets. Unlike AMD's 486 clones, the Cyrix processors were the result of clean-room reverse engineering. Cyrix's early offerings included the 486DLC and 486SLC, two hybrid chips that plugged into 386DX or SX sockets respectively and offered 1 KB of cache (versus 8 KB for the then-current Intel/AMD parts). Cyrix also made "real" 486 processors, which plugged into the i486's socket and offered 2 or 8 KB of cache.
Clock-for-clock, the Cyrix-made chips were generally slower than their Intel/AMD equivalents, though later products with 8 KB caches were more competitive, albeit late to market.

The Motorola 68040, while not i486-compatible, was often positioned as its equivalent in features and performance. On a clock-for-clock basis, the Motorola 68040 could significantly outperform the Intel chip. However, the i486 could be clocked significantly faster without overheating, and 68040 performance lagged behind later production i486 systems.

Motherboards and buses

Early i486-based computers were equipped with several ISA slots (using an emulated PC/AT-bus) and sometimes one or two 8-bit-only slots (compatible with the PC/XT-bus).[e] Many motherboards enabled overclocking of these from the default 6 or 8 MHz to perhaps 16.7 or 20 MHz (half the i486 bus clock) in several steps, often from within the BIOS setup. Older peripheral cards in particular normally worked well at such speeds, as they often used standard MSI chips instead of slower (at the time) custom VLSI designs. This could give significant performance gains (such as for old video cards moved from a 386 or 286 computer). However, operation beyond 8 or 10 MHz could sometimes lead to stability problems, at least in systems equipped with SCSI or sound cards.

Some motherboards came equipped with a 32-bit EISA bus that was backward compatible with the ISA standard. EISA offered attractive features such as increased bandwidth, extended addressing, IRQ sharing, and card configuration through software (rather than through jumpers, DIP switches, etc.). However, EISA cards were expensive and therefore mostly employed in servers and workstations. Consumer desktops often used the simpler but faster VESA Local Bus (VLB), which was unfortunately prone to electrical and timing-based instability; typical consumer desktops had ISA slots combined with a single VLB slot for a video card. VLB was gradually replaced by PCI during the final years of the i486 period. Few Pentium-class motherboards had VLB support, as VLB was based directly on the i486 bus, which is much different from the P5 Pentium bus. ISA persisted through the P5 Pentium generation and was not completely displaced by PCI until the Pentium III era.

Late i486 boards were normally equipped with both PCI and ISA slots, and sometimes a single VLB slot. In this configuration, VLB or PCI throughput suffered depending on how the buses were bridged. Initially, the VLB slot in these systems was usually fully compatible only with video cards (fittingly, as "VESA" stands for Video Electronics Standards Association); VLB-IDE, multi-I/O, or SCSI cards could have problems on motherboards with PCI slots. The VL-Bus operated at the same clock speed as the i486 bus (it was basically a local bus), while the PCI bus also usually depended on the i486 clock but sometimes had a divider setting available via the BIOS. This could be set to 1/1 or 1/2, sometimes even 2/3 (for 50 MHz CPU clocks); the arithmetic is sketched below. Some motherboards limited the PCI clock to the specified maximum of 33 MHz, and certain network cards depended on this frequency for correct bit rates. The ISA clock was typically generated by a divider of the CPU/VLB/PCI clock.

One of the earliest complete systems to use the i486 chip was the Apricot VX FT, produced by British hardware manufacturer Apricot Computers. Even overseas in the United States it was promoted as "The World's First 486".
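To make the clocking relationships above concrete, the sketch below derives the VLB, PCI, and ISA clocks from the CPU clock. It is illustrative only: the divider values and the 33 MHz PCI cap come from the text, while the ISA divisor shown is a hypothetical example, since boards differed.

```java
// Illustrative sketch of i486-era bus clock derivation (not from any datasheet).
public class BusClocks {
    public static void main(String[] args) {
        double cpuMHz = 50.0;          // i486 bus clock (e.g., a 486DX-50 system)
        double vlbMHz = cpuMHz;        // VLB ran at the CPU/local-bus clock
        double pciDivider = 2.0 / 3.0; // BIOS divider: 1/1, 1/2, or 2/3 per the text
        double pciMHz = Math.min(cpuMHz * pciDivider, 33.0); // some boards capped PCI at 33 MHz
        double isaMHz = pciMHz / 4.0;  // hypothetical divisor; ISA was derived by a further divider
        System.out.printf("VLB: %.1f MHz, PCI: %.1f MHz, ISA: %.2f MHz%n",
                vlbMHz, pciMHz, isaMHz);
    }
}
```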
The AMD Am5x86 and Cyrix Cx5x86 were the last i486 processors often used in late-generation i486 motherboards. Such boards came with PCI slots and 72-pin SIMMs and were designed to run Windows 95; the chips were also used to upgrade older 80486 motherboards. While the Cyrix Cx5x86 faded when the Cyrix 6x86 took over, the AMD Am5x86 remained important given the delays to the AMD K5.

Computers based on the i486 remained popular through the late 1990s, serving as low-end processors for entry-level PCs. Production for traditional desktop and laptop systems ceased in 1998, when Intel introduced the Celeron brand, though the i486 continued to be produced for embedded systems through the late 2000s. In the general-purpose desktop computer role, i486-based machines remained in use into the early 2000s, especially as Windows 95 through 98 and Windows NT 4.0 were the last Microsoft operating systems to officially support i486-based systems. Windows 2000 could run on an i486-based machine, although with less than optimal performance, since its minimum hardware requirement was a Pentium processor. As they were overtaken by newer operating systems, i486 systems fell out of use except for backward compatibility with older programs (most notably games) that had problems running on newer operating systems. DOSBox, however, is available for later operating systems and provides emulation of the i486 instruction set, as well as full compatibility with most DOS-based programs.

The i486 was eventually overtaken by the Pentium for personal computer applications, although Intel continued production for use in embedded systems. In May 2006, Intel announced that production of the i486 would stop at the end of September 2007.

- List of Intel microprocessors
- Motorola 68040, although not compatible, was often positioned as the Motorola equivalent to the Intel 486 in terms of performance and features.
- VL86C020, the ARM3 core of a similar time frame, with comparable MIPS performance on integer code (25 MHz for both) and 310,000 transistors (in a 1.5 µm process) instead of 1 million
- AMD versions up to 120 and 160 MHz

- The 386, 286, and even the 8086 all had overlapping fetch, decode, execution (calculation), and write-back; however, "tightly pipelined" usually means that all stages perform their respective duties within the same-length time slot. In contrast, "loosely pipelined" implies that some kind of buffering is used to decouple the units and allow them to work more independently. Both the original 8086 and the x86 chips of today are loosely pipelined in this sense, while the i486 and the original Pentium worked in a tightly pipelined manner for typical instructions. This included most "CISC"-type instructions as well as the simple load/store-free "RISC-like" ones, although the most complex instructions also used some dedicated microcode control.
- Simple instructions spend only a single clock cycle at each pipeline stage.[b]
- The pre-DX2 i486 parts did not use a clock multiplier and are therefore comparable to a twice-higher-clocked 386/286.
- In general, 8-bit ISA slots in these systems were implemented just by leaving off the shorter "C"/"D" connector of the slot, though the copper traces for a 16-bit slot were still there on the motherboard; the computer could tell no difference between an 8-bit ISA adapter in such a slot and the same adapter in a 16-bit slot, and there were still enough 8-bit adapters in circulation that vendors figured they could save money on a few connectors this way.
Also, leaving off the 16-bit extension to the ISA connector allowed the use of some early 8-bit ISA cards that otherwise could not be used because of the PCB "skirt" hanging down into that 16-bit extension space. IBM was the first to do this, in the IBM AT.

- Intel (July 1997). Embedded Intel486 Processor Hardware Reference Manual (273025-001).
- Pryce, Dave (May 11, 1989). "486 32-bit CPU breaks new ground in chip density and operating performance" (Intel Corp. product announcement). EDN.
- Lewis, Peter H. (October 22, 1989). "THE EXECUTIVE COMPUTER; The Race to Market a 486 Machine". The New York Times. Retrieved May 5, 2010.
- Yates, Darren (November 2020). "Four. Eight. Six". APC. No. 486. Future Publishing. pp. 52–55. ISSN 0725-4415.
- Lilly, Paul (April 14, 2009). "A Brief History of CPUs: 31 Awesome Years of x86". PC Gamer. Retrieved August 7, 2021.
- Chauvet, Berenice D. (July 15, 1996). "School buys outdated computer model". Sun Sentinel. Tribune Publishing. Archived from the original on July 2, 2021.
- "AMD-Intel Litigation History". yannalaw.com.
- "CISC: The Intel 80486 vs. The Motorola MC68040" (July 1992). Retrieved May 20, 2013.
- "68040 Microprocessor". Archived February 16, 2012, at the Wayback Machine.
- Lavin, Paul; Nadeau, Michael E. (September 1989). "The 486s Are Here". Byte. pp. 95–98. Retrieved April 30, 2022.
- "Minimum Hardware Requirements for a Windows 98 Installation" (January 24, 2001). Archived from the original on December 5, 2004.
- "Windows NT 4.0 Workstation" (in German). WinHistory.de.
- "WORLD RECORD*: Windows 2000 running on Intel i486 SX 25 MHz" (July 29, 2013).
- "System Requirements". DOSBox.com.
- Smith, Tony (May 18, 2006). "Intel cashes in ancient chips. i386, i486, i960 finally for the chop". HARDWARE. Archived from the original on August 13, 2011. Retrieved May 20, 2012.
- Hermit and Doctor of the Church
- Born: Stridon (possibly Strido Dalmatiae), on the border of Dalmatia and Pannonia (located in modern Croatia)
- Died: 30 September 420 (aged approximately 73–78), Bethlehem, Palaestina Prima
- Education: Catechetical School of Alexandria
- Notable works: Most of the Vulgate; De viris illustribus
- Tradition or movement: Trinitarianism
- Notable ideas: Perpetual virginity of Mary
- Venerated in: Catholic Church; Eastern Orthodox Church
- Major shrine: Basilica of Saint Mary Major, Rome, Italy
- Feast: 30 September (Latin Catholic Church); 15 June (Eastern Orthodox Church)
- Attributes: Lion, cardinal attire, cross, skull, trumpet, owl, books and writing material
- Patronage: Archaeologists; archivists; Bible scholars; librarians; libraries; school children; students; translators; Morong, Rizal; Dalmatia
- Influences: Paula of Rome
- Influenced: Virtually all subsequent Christian theology, including Catholic, Eastern Orthodox and some Protestant

Jerome (Latin: Eusebius Sophronius Hieronymus; Greek: Εὐσέβιος Σωφρόνιος Ἱερώνυμος; c. 342–347 – 30 September 420), also known as Jerome of Stridon, was a Christian priest, confessor, theologian, and historian; he is commonly known as Saint Jerome.

Jerome was born at Stridon, a village near Emona on the border of Dalmatia and Pannonia. He is best known for his translation of the Bible into Latin (the translation that became known as the Vulgate) and his commentaries on the whole Bible. Jerome attempted to create a translation of the Old Testament based on a Hebrew version rather than the Septuagint, as earlier Latin Bible translations had been. His list of writings is extensive, and besides his Biblical works he wrote polemical and historical essays, always from a theologian's perspective.

Jerome was known for his teachings on Christian moral life, especially for those living in cosmopolitan centers such as Rome. In many cases, he focused his attention on the lives of women and identified how a woman devoted to Jesus should live her life. This focus stemmed from his close patron relationships with several prominent female ascetics who were members of affluent senatorial families.

For his work, Jerome is recognised as a saint and Doctor of the Church by the Catholic Church, and as a saint in the Eastern Orthodox Church,[a] the Lutheran Church, and the Anglican Communion. His feast day is 30 September (Gregorian calendar).

Eusebius Sophronius Hieronymus was born at Stridon around 342–347 AD. He was of Illyrian ancestry, although whether he was able to speak the Illyrian language is a subject of controversy. He was not baptized until about 360–369 in Rome, where he had gone with his friend Bonosus of Sardica to pursue rhetorical and philosophical studies. (This Bonosus may or may not have been the same Bonosus whom Jerome identifies as his friend who went to live as a hermit on an island in the Adriatic.) Jerome studied under the grammarian Aelius Donatus. There he learned Latin and at least some Greek, though he probably had not yet acquired the familiarity with Greek literature that he later claimed to have gained as a schoolboy.

As a student, Jerome engaged in the superficial escapades and sexual experimentation of students in Rome; he indulged himself quite casually but suffered terrible bouts of guilt afterwards. To appease his conscience, on Sundays he visited the sepulchers of the martyrs and the Apostles in the catacombs.
This experience reminded him of the terrors of hell:

Often I would find myself entering those crypts, deep dug in the earth, with their walls on either side lined with the bodies of the dead, where everything was so dark that almost it seemed as though the Psalmist's words were fulfilled, Let them go down quick into Hell.[Psalm 55:15] Here and there the light, not entering in through windows, but filtering down from above through shafts, relieved the horror of the darkness. But again, as soon as you found yourself cautiously moving forward, the black night closed around and there came to my mind the line of Virgil, "Horror ubique animos, simul ipsa silentia terrent".[b]

Conversion to Christianity

Seized with a desire for a life of ascetic penance, Jerome went for a time to the desert of Chalcis, to the southeast of Antioch, known as the "Syrian Thebaid" from the number of eremites inhabiting it. During this period, he seems to have found time for studying and writing. He made his first attempt to learn Hebrew under the guidance of a converted Jew, and he seems to have been in correspondence with Jewish Christians in Antioch. Around this time he had a Hebrew Gospel copied for him, fragments of which are preserved in his notes. It is known today as the Gospel of the Hebrews, which the Nazarenes considered to be the true Gospel of Matthew. Jerome translated parts of this Hebrew Gospel into Greek.

As a protégé of Pope Damasus, Jerome was given duties in Rome, and he undertook a revision of the Vetus Latina Gospels based on Greek manuscripts. He also updated the Psalter containing the Book of Psalms then in use in Rome, based on the Septuagint.

In Rome, Jerome was surrounded by a circle of well-born and well-educated women, including some from the noblest patrician families, such as the widows Lea, Marcella and Paula, with Paula's daughters Blaesilla and Eustochium. The resulting inclination of these women towards the monastic life, away from the indulgent lasciviousness of Rome, and his unsparing criticism of the secular clergy of Rome, brought a growing hostility against him among the Roman clergy and their supporters. Soon after the death of his patron Pope Damasus I on 10 December 384, Jerome was forced to leave his position at Rome after the Roman clergy opened an inquiry into allegations that he had an improper relationship with the widow Paula. Still, his writings were highly regarded by women who were attempting to maintain a vow of becoming a consecrated virgin. His letters were widely read and distributed throughout the Christian empire, and it is clear through his writing that he knew these virgin women were not his only audience.

Additionally, Jerome's condemnation of Blaesilla's hedonistic lifestyle in Rome had led her to adopt ascetic practices, but this affected her health and worsened her physical weakness to the point that she died just four months after starting to follow his instructions. Much of the Roman populace was outraged at Jerome for causing the premature death of such a lively young woman. His insistence to Paula that Blaesilla should not be mourned, and his complaints that her grief was excessive, were seen as heartless, polarising Roman opinion against him.

Translation of the Bible (382–405)

Jerome was a scholar at a time when that statement implied fluency in Greek. He knew some Hebrew when he started his translation project, but moved to Jerusalem to strengthen his grip on Jewish scripture commentary.
A wealthy Roman aristocrat, Paula, funded his stay in a monastery in Bethlehem, and he completed his translation there. He began in 382 by correcting the existing Latin-language version of the New Testament, commonly referred to as the Vetus Latina. By 390 he turned to translating the Hebrew Bible from the original Hebrew, having previously translated portions from the Septuagint, which came from Alexandria. He believed that mainstream Rabbinical Judaism had rejected the Septuagint as invalid Jewish scriptural text because of what were judged to be mistranslations, along with its Hellenistic heretical elements.[c] He completed this work by 405. Prior to Jerome's Vulgate, all Latin translations of the Old Testament were based on the Septuagint, not the Hebrew. Jerome's decision to use a Hebrew text instead of the previously translated Septuagint went against the advice of most other Christians, including Augustine, who thought the Septuagint inspired. Modern scholarship, however, has sometimes cast doubt on the actual quality of Jerome's Hebrew knowledge. Many modern scholars believe that the Greek Hexapla is the main source for Jerome's "iuxta Hebraeos" (i.e. "close to the Hebrews", "immediately following the Hebrews") translation of the Old Testament. However, detailed studies have shown that to a considerable degree Jerome was a competent Hebraist.

For the next 15 years, until he died, Jerome produced a number of commentaries on Scripture, often explaining his translation choices in using the original Hebrew rather than suspect translations. His patristic commentaries align closely with Jewish tradition, and he indulges in allegorical and mystical subtleties after the manner of Philo and the Alexandrian school. Unlike his contemporaries, he emphasizes the difference between the Hebrew Bible "Apocrypha" and the Hebraica veritas of the protocanonical books. In his Vulgate's prologues, he describes some portions of books in the Septuagint that were not found in the Hebrew as being non-canonical (he called them apocrypha); for Baruch, he mentions it by name in his Prologue to Jeremiah and notes that it is neither read nor held among the Hebrews, but does not explicitly call it apocryphal or "not in the canon". His Preface to the Books of Samuel and Kings (commonly called the Helmeted Preface) includes the following statement:

This preface to the Scriptures may serve as a "helmeted" introduction to all the books which we turn from Hebrew into Latin, so that we may be assured that what is not found in our list must be placed amongst the Apocryphal writings. Wisdom, therefore, which generally bears the name of Solomon, and the book of Jesus, the Son of Sirach, and Judith, and Tobias, and the Shepherd are not in the canon. The first book of Maccabees I have found to be Hebrew, the second is Greek, as can be proved from the very style.

Jerome's commentaries fall into three groups:

Historical and hagiographic writings

Description of vitamin A deficiency

The following passage, taken from Saint Jerome's "Life of St. Hilarion", written about 392, appears to be the earliest account of the etiology, symptoms and cure of severe vitamin A deficiency:

From his thirty-first to his thirty-fifth year he had for food six ounces of barley bread, and vegetables slightly cooked without oil.
But finding that his eyes were growing dim, and that his whole body was shrivelled with an eruption and a sort of stony roughness (impetigine et pumicea quadam scabredine), he added oil to his former food, and up to the sixty-third year of his life followed this temperate course, tasting neither fruit nor pulse, nor anything whatsoever besides.

Jerome's letters or epistles, both by the great variety of their subjects and by their qualities of style, form an important portion of his literary remains. Whether he is discussing problems of scholarship or reasoning on cases of conscience, comforting the afflicted or saying pleasant things to his friends, scourging the vices and corruptions of the time and sexual immorality among the clergy, exhorting to the ascetic life and renunciation of the world, or debating his theological opponents, he gives a vivid picture not only of his own mind, but of the age and its peculiar characteristics. Because there was no distinct line between personal documents and those meant for publication, we frequently find in his letters both confidential messages and treatises meant for others besides the one to whom he was writing.

Due to the time he spent in Rome among wealthy families belonging to the Roman upper class, Jerome was frequently commissioned by women who had taken a vow of virginity to write to them in guidance of how to live their lives. As a result, he spent a great deal of his life corresponding with these women about certain abstentions and lifestyle practices.

Jerome warned that those substituting false interpretations for the actual meaning of Scripture belonged to the "synagogue of the Antichrist". "He that is not of Christ is of Antichrist," he wrote to Pope Damasus I. He believed that "the mystery of iniquity" written about by Paul in 2 Thessalonians 2:7 was already in action when "every one chatters about his views." To Jerome, the power restraining this mystery of iniquity was the Roman Empire, but as it fell this restraining force was removed. He warned a noble woman of Gaul:

He that letteth is taken out of the way, and yet we do not realize that Antichrist is near. Yes, Antichrist is near whom the Lord Jesus Christ "shall consume with the spirit of his mouth." "Woe unto them," he cries, "that are with child, and to them that give suck in those days."... Savage tribes in countless numbers have overrun all parts of Gaul. The whole country between the Alps and the Pyrenees, between the Rhine and the Ocean, has been laid waste by hordes of Quadi, Vandals, Sarmatians, Alans, Gepids, Herules, Saxons, Burgundians, Allemanni, and—alas! for the commonweal!—even Pannonians.

His Commentary on Daniel was expressly written to offset the criticisms of Porphyry, who taught that Daniel related entirely to the time of Antiochus IV Epiphanes and was written by an unknown individual living in the second century BC. Against Porphyry, Jerome identified Rome as the fourth kingdom of chapters two and seven, but his view of chapters eight and eleven was more complex. Jerome held that chapter eight describes the activity of Antiochus Epiphanes, who is understood as a "type" of a future antichrist, while 11:24 onwards applies primarily to a future antichrist but was partially fulfilled by Antiochus.
Instead, he advocated that the "little horn" was the Antichrist:

We should therefore concur with the traditional interpretation of all the commentators of the Christian Church, that at the end of the world, when the Roman Empire is to be destroyed, there shall be ten kings who will partition the Roman world amongst themselves. Then an insignificant eleventh king will arise, who will overcome three of the ten kings. ...After they have been slain, the seven other kings also will bow their necks to the victor.

In his Commentary on Daniel, he noted, "Let us not follow the opinion of some commentators and suppose him to be either the Devil or some demon, but rather, one of the human race, in whom Satan will wholly take up his residence in bodily form." Instead of rebuilding the Jewish Temple to reign from, Jerome thought the Antichrist sat in God's Temple inasmuch as he made "himself out to be like God."

Jerome identified the four prophetic kingdoms symbolized in Daniel 2 as the Neo-Babylonian Empire, the Medes and Persians, Macedon, and Rome, and identified the stone cut out without hands as "namely, the Lord and Savior". Jerome refuted Porphyry's application of the little horn of chapter seven to Antiochus. He expected that at the end of the world, Rome would be destroyed and partitioned among ten kingdoms before the little horn appeared.

Reception by later Christianity

Jerome is the second-most voluminous writer in ancient Latin Christianity, after Augustine of Hippo (354–430). The Catholic Church recognizes him as the patron saint of translators, librarians and encyclopedists.

Jerome translated many Biblical texts from Hebrew, Aramaic and Greek into Latin. His translations formed part of the Vulgate, which eventually superseded the preceding Latin translations of the Bible (the Vetus Latina). The Council of Trent in 1546 declared the Vulgate authoritative "in public lectures, disputations, sermons and expositions".

Jerome showed more zeal and interest in the ascetic ideal than in abstract speculation. He lived as an ascetic for four or five years in the Syrian desert and later, for 34 years, near Bethlehem. Nevertheless, his writings show outstanding scholarship, and his correspondence has great historical importance.

Jerome is also often depicted with a lion, in reference to the popular hagiographical belief that he had tamed a lion in the wilderness by healing its paw. The source for the story may actually have been the second-century Roman tale of Androcles, or confusion with the exploits of Saint Gerasimus (Jerome in later Latin is "Geronimus");[d] it is "a figment" found in the thirteenth-century Golden Legend by Jacobus de Voragine.

Hagiographies of Jerome speak of his having spent many years in the Syrian desert, and artists often depict him in a "wilderness", which for West European painters can take the form of a wood. From the late Middle Ages, depictions of Jerome in a wider setting became popular. He is either shown in his study, surrounded by books and the equipment of a scholar, or in a rocky desert, or in a setting that combines both aspects, with him studying a book under the shelter of a rock-face or cave mouth. His study is often shown as large and well provided for; he is often clean-shaven and well-dressed, and a cardinal's hat may appear. These images derive from the tradition of the evangelist portrait, though Jerome is often given the library and desk of a serious scholar.
His attribute of the lion, often shown at a smaller scale, may be beside him in either setting. The subject of "Jerome Penitent" first appears in the later 15th century in Italy; he is usually in the desert, wearing ragged clothes, and often naked above the waist. His gaze is usually fixed on a crucifix, and he may beat himself with his fist or a rock.

Jerome is often depicted in connection with the vanitas motif, the reflection on the meaninglessness of earthly life and the transient nature of all earthly goods and pursuits. In the 16th-century Saint Jerome in his study by Pieter Coecke van Aelst and workshop, the saint is depicted with a skull. Behind him on the wall is pinned an admonition, Cogita Mori ("Think upon death"). Further reminders of the vanitas motif of the passage of time and the imminence of death are the image of the Last Judgment visible in the saint's Bible, the candle and the hourglass.

Gallery:
- Saint Jerome in the Wilderness, Leonardo da Vinci, 1480–1490, Vatican Museums
- Jerome Penitent in the Wilderness, copper engraving, Albrecht Dürer, 1494–1498
- Hieronymus im Gehäus, copper engraving, Albrecht Dürer, 1514
- Saint Jerome in the Wilderness, Lucas Cranach the Elder, c. 1515

- Bible translations
- Church Fathers
- Eusebius of Cremona
- Ferdinand Cavallera
- Genesius of Arles
- International Translation Day
- Letter of Jerome to Pope Damasus
- Order of St. Jerome
- Prologus Galeatus

- In the Eastern Orthodox Church he is known as Saint Jerome of Stridonium or Blessed Jerome. "Blessed" in this context does not have the sense of being less than a saint, as it does in the West.
- Patrologia Latina 25, 373: Crebroque cryptas ingredi, quae in terrarum profunda defossae, ex utraque parte ingredientium per parietes habent corpora sepultorum, et ita obscura sunt omnia, ut propemodum illud propheticum compleatur: Descendant ad infernum viventes (Ps. LIV,16): et raro desuper lumen admissum, horrorem temperet tenebrarum, ut non tam fenestram, quam foramen demissi luminis putes: rursumque pedetentim acceditur, et caeca nocte circumdatis illud Virgilianum proponitur (Aeneid. lib. II): "Horror ubique animos, simul ipsa silentia terrent."
- "(...) die griechische Bibelübersetzung, die einem innerjüdischen Bedürfnis entsprang (...) [von den] Rabbinen zuerst gerühmt (...) Später jedoch, als manche ungenaue Übertragung des hebräischen Textes in der Septuaginta und Übersetzungsfehler die Grundlage für hellenistische Irrlehren abgaben, lehnte man die Septuaginta ab." ["(...) the Greek translation of the Bible, which arose from an inner-Jewish need (...) [was] at first praised by the rabbis (...) Later, however, when some inaccurate renderings of the Hebrew text in the Septuagint and translation errors provided the basis for Hellenistic heresies, the Septuagint was rejected."] (Homolka 1999, pp. 43–)
- Eugene Rice has suggested that in all probability the story of Gerasimus's lion became attached to the figure of Jerome some time during the seventh century, after the military invasions of the Arabs had forced many Greek monks who were living in the deserts of the Middle East to seek refuge in Rome. Rice 1985, pp. 44–45 conjectures that because of the similarity between the names Gerasimus and Geronimus – the late Latin form of Jerome's name – "a Latin-speaking cleric … made St Geronimus the hero of a story he had heard about St Gerasimus; and that the author of Plerosque nimirum, attracted by a story at once so picturesque, so apparently appropriate, and so resonant in suggestion and meaning, and under the impression that its source was pilgrims who had been told it in Bethlehem, included it in his life of a favourite saint otherwise bereft of miracles." (Salter 2001, p. 12)
- Kurian & Smith 2010, p. 389: Jerome ("Hieronymus" in Latin) was born into a Christian family in Stridon, modern-day Strigova in northern Croatia.
389: Jerome ("Hieronymus" in Latin), was born into a Christian family in Stridon, modern-day Strigova in northern Croatia - "St. Jerome (Christian scholar)". Britannica Encyclopedia. 2 February 2017. Archived from the original on 24 March 2017. Retrieved 23 March 2017. - Scheck 2008, p. 5. - Ward 1950, p. 7: "It may be taken as certain that Jerome was an Italian, coming from that wedge of Italy which seems on the old maps to be driven between Dalmatia and Pannonia." - Streeter 2006, p. 102: "Jerome was born around 330 AD at Stridon, a town in northeast Italy at the head of the Adriatic Ocean." - Schaff, Philip, ed. (1893). A Select Library of Nicene and Post-Nicene Fathers of the Christian Church. 2nd series. Vol. VI. Henry Wace. New York: The Christian Literature Company. Archived from the original on 11 July 2014. Retrieved 7 June 2010. - Williams 2006. - Pevarello 2013, p. 1. - Walsh 1992, p. 307. - Kelly 1975, pp. 13–14. - Payne 1951, pp. 90–92. - Jerome, Commentarius in Ezzechielem, c. 40, v. 5 - P. Vergilius Maro, Aeneid Theodore C. Williams, Ed. Perseus Project Archived 11 November 2013 at the Wayback Machine (retrieved 23 August 2013) - Payne 1951, p. 91. - Rebenich 2002, p. 211: Further, he began to study Hebrew: 'I betook myself to a brother who before his conversion had been a Hebrew and...' - Pritz, Ray (1988), Nazarene Jewish Christianity: from the end of the New Testament, p. 50, In his accounts of his desert sojourn, Jerome never mentions leaving Chalcis, and there is no pressing reason to think... - "Saint Jerome in His Study". The Walters Art Museum. Archived from the original on 16 May 2013. Retrieved 18 September 2012. - Salisbury & Lefkowitz 2001, pp. 32–33. - Pierre Nautin, article "Hieronymus", in: Theologische Realenzyklopädie, Vol. 15, Walter de Gruyter, Berlin & New York 1986, pp. 304–315, [309–310]. - Michael Graves, Jerome's Hebrew Philology: A Study Based on his Commentary on Jeremiah, Brill, 2007: 196–198 : "In his discussion he gives clear evidence of having consulted the Hebrew himself, providing details about the Hebrew that could not have been learned from the Greek translations."[ISBN missing] - "The Bible". Archived from the original on 13 January 2016. Retrieved 14 December 2015. - Edgecomb, Kevin P., Jerome's Prologue to Jeremiah, archived from the original on 31 December 2013, retrieved 14 December 2015 - "Jerome's Preface to Samuel and Kings". Archived from the original on 2 December 2015. Retrieved 14 December 2015. - Taylor, F. Sherwood (23 December 1944). "St. Jerome and Vitamin A". Nature. 154 (3921): 802. Bibcode:1944Natur.154Q.802T. doi:10.1038/154802a0. S2CID 4097517. - "regulae sancti pachomii 84 rule 104. - W. H. Fremantle, "Prolegomena to Jerome", V. - "Hiëronymus in zijn studeervertrek". lib.ugent.be. Retrieved 2 October 2020. - See Jerome's The Dialogue against the Luciferians Archived 1 January 2014 at the Wayback Machine, p. 334 in A Select Library of Nicene and Post-Nicene Fathers of the Christian Church : St. Jerome: Letters and select works, 1893. Second Series By Philip Schaff, Henry Wace. - See Jerome's Letter to Pope Damasus Archived 13 March 2017 at the Wayback Machine, p. 19 in A Select Library of Nicene and Post-Nicene Fathers of the Christian Church : St. Jerome: Letters and select works, 1893. Second Series By Philip Schaff, Henry Wace. - See Jerome's Against the Pelagians, Book I Archived 1 January 2014 at the Wayback Machine, p. 449 in A Select Library of Nicene and Post-Nicene Fathers of the Christian Church : St. 
- See Jerome's Letter to Ageruchia. Archived 2014-01-01 at the Wayback Machine. Pp. 236–237 in A Select Library of Nicene and Post-Nicene Fathers of the Christian Church: St. Jerome: Letters and select works, 1893. Second Series. By Philip Schaff, Henry Wace.
- Fremantle, note on Jerome's commentary on Daniel, in NPNF, 2d series, Vol. 6, p. 500.
- "Jerome, Commentario in Danielem". Archived from the original on 26 May 2010. Retrieved 6 May 2008.
- "Jerome, Commentaria in Danielem, chap. 2, verses 31–40". Archived from the original on 26 May 2010. Retrieved 6 May 2008.
- "Jerome, Commentaria in Danielem, chap. 2, verse 40". Archived from the original on 26 May 2010. Retrieved 6 May 2008.
- "Jerome, Commentario in Danielem, chap. 7, verse 8". Archived from the original on 26 May 2010. Retrieved 6 May 2008.
- "Jerome, Commentaria in Danielem, chap. 8, verse 5". Archived from the original on 26 May 2010. Retrieved 6 May 2008.
- "St. Jerome: Patron Saint of Librarians | Luther College Library and Information Services". Lis.luther.edu. Archived from the original on 4 July 2013. Retrieved 2 June 2014.
- "Is the Vulgate the Catholic Church's Official Bible?". NCR. Retrieved 8 December 2021. "[This] sacred and holy Synod—considering that no small utility may accrue to the Church of God, if it be made known which out of all the Latin editions, now in circulation, of the sacred books, is to be held as authentic—ordains and declares, that the said old and vulgate edition, which, by the lengthened usage of so many years, has been approved of in the Church, be, in public lectures, disputations, sermons and expositions, held as authentic; and that no one is to dare, or presume to reject it under any pretext whatever" [Decree Concerning the Edition and Use of the Sacred Books, 1546].
- The Oxford Dictionary of the Christian Church. Oxford University Press; 2005. ISBN 978-0-19-280290-3. "Vulgate", pp. 1722–1723.
- Power, Edward J. (1991). A Legacy of Learning: A History of Western Education. SUNY Press. p. 102. ISBN 978-0-7914-0610-6. "his exceptional scholarship produced ..."
- Louth, Andrew (2022). "Jerome". The Oxford Dictionary of the Christian Church. Oxford University Press. pp. 872–873. ISBN 978-0-19-263815-1. "His correspondence is of great interest and historical importance."
- "The Calendar". The Church of England. Retrieved 8 April 2021.
- Hope Werness, Continuum encyclopaedia of animal symbolism in art, 2006.
- Williams 2006, p. 1.
- "Saint Jerome in Catholic Saint info". Catholic-saints.info. Archived from the original on 29 April 2014. Retrieved 2 June 2014.
- Herzog, Sadja. "Gossart, Italy, and the National Gallery's Saint Jerome Penitent". Report and Studies in the History of Art, vol. 3, 1969, pp. 67–70. JSTOR. Accessed 29 Dec. 2020.
- "Saint Jerome in His Study". The Walters Art Museum. Archived from the original on 18 September 2012. Retrieved 6 September 2012.
- The Collection: Saint Jerome. Archived 22 October 2012 at the Wayback Machine. Gallery of the religious art collection of New Mexico State University, with explanations. Retrieved 10 August 2007.
- Andrew Cain and Josef Lössl, Jerome of Stridon: His Life, Writings and Legacy (London and New York, 2009).
- Homolka, W. (1999). Die Lehren des Judentums nach den Quellen [The Teachings of Judaism According to the Sources] (in German). Vol. 3. Munich: Knesebeck. ISBN 978-3-89660-058-5 – via Verband der Deutschen Juden.
- Kelly, J.N.D. (1975). Jerome: His Life, Writings, and Controversies. New York: Harper & Row.
- Kurian, G.T.; Smith, J.D. (2010). The Encyclopedia of Christian Literature. Scarecrow Press. ISBN 978-0-8108-7283-7.
- Payne, Robert (1951). The Fathers of the Western Church. New York: Viking Press.
- Pevarello, Daniele (2013). The Sentences of Sextus and the Origins of Christian Asceticism. Tübingen: Mohr Siebeck. ISBN 978-3-16-152579-7.
- Rebenich, Stefan (2002). Jerome. ISBN 978-0415199063.
- Rice, E.F. (1985). Saint Jerome in the Renaissance. Johns Hopkins Symposia in Comparative History. Johns Hopkins University Press. ISBN 978-0-8018-2381-7.
- Salisbury, J.E.; Lefkowitz, M.R. (2001). "Blaesilla". Encyclopedia of Women in the Ancient World. ABC-CLIO. ISBN 978-1-57607-092-5.
- Salter, David (2001). Holy and Noble Beasts: Encounters With Animals in Medieval Literature. D. S. Brewer. ISBN 978-0-85991-624-0.
- Scheck, Thomas P. (2008). Commentary on Matthew. The Fathers of the Church. Vol. 117. ISBN 978-0-8132-0117-7.
- Streeter, Tom (2006). The Church and Western Culture: An Introduction to Church History. AuthorHouse.
- Walsh, Michael, ed. (1992). Butler's Lives of the Saints. New York: HarperCollins.
- Ward, Maisie (1950). Saint Jerome. London: Sheed & Ward.
- Williams, Megan Hale (2006). The Monk and the Book: Jerome and the Making of Christian Scholarship. Chicago: University of Chicago Press. ISBN 978-0-226-89900-8.
- Biblia Sacra Vulgata (e.g. edition published Stuttgart, 1994, ISBN 3-438-05303-9).
- This article uses material from the Schaff–Herzog Encyclopedia of Religious Knowledge.
- St. Jerome (pdf) from Fr. Alban Butler's Lives of the Saints
- The Life of St. Jerome, Priest, Confessor and Doctor of the Church
- Herbermann, Charles, ed. (1913). Catholic Encyclopedia. New York: Robert Appleton Company.
- Jewish Encyclopedia: Jerome
- St. Jerome – Catholic Online
- St Jerome (Hieronymus) of Stridonium Orthodox synaxarion
- Further reading of depictions of Saint Jerome in art
- Saint Jerome, Doctor of the Church at the Christian Iconography web site
- Here Followeth the Life of Jerome from Caxton's translation of the Golden Legend
- Works of Saint Jerome at Somni
- Beati Hyeronimi Epistolarum liber, digitized codex (1464)
- Epistole de santo Geronimo traducte di latino, digitized codex (1475–1490)
- Hieronymi in Danielem, digitized codex (1490)
- Sancti Hieronymi ad Pammachium in duodecim prophetas, digitized codex (1470–1480)
- Colonnade Statue in St Peter's Square
- Works by Jerome at LibriVox (public domain audiobooks)
- Chronological list of Jerome's Works with modern editions and translations cited
- Opera Omnia (Complete Works) from the Migne edition (Patrologia Latina, 1844–1855), with analytical indexes; almost complete online edition
- Lewis E 82 Vitae patrum (Lives of the Fathers) at OPenn
- Lewis E 47 Bible Commentary at OPenn
- Migne volume 23 part 1 (1883 edition)
- Migne volume 23 part 2 (1883 edition)
- Migne volume 24 (1845 edition)
- Migne volume 25 part 1 (1884 edition)
- Migne volume 25 part 2 (1884 edition)
- Migne volume 28 (1890 edition?)
- Migne volume 30 (1865 edition)
- Jerome (1887). The pilgrimage of the holy Paula. Palestine Pilgrims' Text Society.
- English translations of Biblical Prefaces, Commentary on Daniel, Chronicle, and Letter 120 (tertullian.org) - Jerome's Letter to Pope Damasus: Preface to the Gospels - English translation of Jerome's De Viris Illustribus - Translations of various works (letters, biblical prefaces, life of St. Hilarion, others) (under "Jerome") - Lives of Famous Men (CCEL) - Apology Against Rufinus (CCEL) - Letters, The Life of Paulus the First Hermit, The Life of S. Hilarion, The Life of Malchus, the Captive Monk, The Dialogue Against the Luciferians, The Perpetual Virginity of Blessed Mary, Against Jovinianus, Against Vigilantius, To Pammachius against John of Jerusalem, Against the Pelagians, Prefaces (CCEL) - Audiobook of some of the letters
The JDK and the JRE have minimum processor, disk space, and memory requirements for the 64-bit Windows platform. Before installing the JDK or the JRE on your 64-bit Windows platform, you must verify that it meets the following minimum processor, disk space, and memory requirements. Both the JDK and the JRE require at minimum a Pentium 2 266 MHz processor.

Disk Space Requirements

For JDK 10, you are given the option of installing the following features: Development Tools, Source Code, and the Public Java Runtime Environment. When you install the 64-bit JDK, the 64-bit public JRE also gets installed. The following table provides the disk requirements for the installed features:

| Feature | Disk space |
|---|---|
| Development Tools: 64-bit platform | 500 MB |
| Source Code | 54.2 MB |
| Public Java Runtime Environment | 200 MB |
| Java Update | 2 MB |

Memory Requirements

On Windows 64-bit operating systems, the Java runtime requires a minimum of 128 MB of memory. The minimum physical RAM is required to run graphically based applications. More RAM is recommended for applets running within a browser using the Java Plug-in. Running with less memory may cause disk swapping, which has a severe effect on performance. Very large programs may require more RAM for adequate performance. For supported processors and browsers, see Oracle JDK Certified Systems Configurations.

For any text in this document that contains the following notation, you must substitute the appropriate update version number: jdk-10.interim.update.patch. If you are downloading the JDK installer for 64-bit systems for update 10 Interim 0, Update 2, and Patch 1, then the file name is jdk-10.0.2.1_windows-x64_bin.exe. If you are downloading the JRE installer for 64-bit systems for update 10 Interim 0, Update 2, and Patch 1, then the file name is jre-10.0.2.1_windows-x64_bin.exe.

You run a self-installing executable file to unpack and install the JDK on Windows computers. In a browser, go to the Java SE Development Kit 10 Downloads page and click Accept License Agreement. Under the Download menu, click the Download link that corresponds to the .exe for your version of Windows, and download the file. Note: Verify the successful completion of the download by comparing the file size shown on the download page with the size of the file on your local drive.

Instead of double-clicking or opening the JDK installer, you can perform a silent, noninteractive JDK installation by using command-line arguments. The following table lists example installation scenarios and the commands required to perform them. The notation jdk.exe stands for the downloaded installer file base name, such as jdk-10.0.2.1_windows-x64_bin.exe.

| Installation scenario | Command |
|---|---|
| Install the JDK and public JRE in silent mode | jdk.exe /s |
| Install development tools and source code in silent mode, but not the public JRE | jdk.exe /s ADDLOCAL="ToolsFeature,SourceFeature" |
| Install development tools, source code, and the public JRE in silent mode | jdk.exe /s ADDLOCAL="ToolsFeature,SourceFeature,PublicjreFeature" |
| Install the public JRE in the specified directory C:\test in silent mode | jdk.exe /s /INSTALLDIRPUBJRE=C:\test |

It is useful to set the PATH variable permanently for JDK 10 so that it is persistent after rebooting. The PATH variable is set automatically for the JRE; this topic applies only to the JDK. If you do not set the PATH variable, then you must specify the full path to the executable file every time that you run it. For example:

C:\> "C:\Program Files\Java\jdk-10\bin\javac" MyClass.java

To set the PATH variable permanently, add the full path of the jdk-10\bin directory to the PATH variable. Typically, the full path is C:\Program Files\Java\jdk-10\bin. To set the PATH variable on Microsoft Windows, add the bin folder of the JDK installation to the PATH variable in System Variables.
The PATH environment variable is a series of directories separated by semicolons (;) and is not case-sensitive. Microsoft Windows looks for programs in the PATH directories in order, from left to right. You should have only one bin directory for a JDK in the path at a time; those following the first instance are ignored. If you are not sure where to add the JDK path, append it. The new path takes effect in each new command window that you open after setting the PATH variable. The following is a typical value for the PATH variable:

C:\WINDOWS\system32;C:\WINDOWS;C:\Program Files\Java\jdk-10\bin

When installing the JRE on Windows computers, you must select the JRE installer that is appropriate for your Windows system. The 64-bit Windows operating systems come with a 64-bit Internet Explorer (IE) browser as the standard (default) for viewing web pages.

Install the JRE on Windows computers by performing the actions described in the following topics:

To use the Windows Online Installer, you must be connected to the internet. If you are running behind a proxy server, then you must have your proxy settings correctly configured. If they are not configured, or are incorrectly configured, then the installer will terminate with the following message:

The installer cannot proceed with the current Internet Connection settings. Please visit the following website for more information.

If you see this message, check your proxy settings: in the Control Panel, double-click Internet Options, select the Connections tab, and click LAN Settings. If you do not know what the correct settings should be, check with your internet provider or system administrator.

The JRE installer is located on the Java SE Runtime Environment 10 Downloads page. The following JRE installers are available for you to download: the Windows Offline installer and the Windows installer, each of which contains everything that is required to install the JRE. The Microsoft Windows Installer (MSI) Enterprise JRE Installer is also available, which enables you to install the JRE across your enterprise; it requires a commercial license for use in production.

The private JRE installed with the JDK is not registered. To register the JRE, you must set the PATH environment variable to point to JAVA_HOME\bin, where JAVA_HOME is the location where you installed the private JRE. See Setting the PATH Environment Variable.

By default, the Java Access Bridge is disabled. To enable it, see Enabling and Testing Java Access Bridge in the Java Platform, Standard Edition Java Accessibility Guide.

To access essential Java information and functions on Microsoft Windows 7 and Windows 10 machines, after installation, click the Start menu and then select Java. The Java directory provides access to Help, Check for Updates, and Configure Java. Microsoft Windows 8 and Windows 8.1 do not have a Start menu; however, the Java information is available in the following Start directory:

The installation program for the Microsoft Windows version of the Java SE Runtime Environment uses the registry to store path and version information. It creates the following registry keys:

- A key that contains the string CurrentVersion, with a value that is the highest installed version on the system, along with:
  - JavaHome: the full path name of the directory in which the JRE is installed
  - RuntimeLib: the full path name of the Java runtime DLL
- HKEY_LOCAL_MACHINE\Software\JavaSoft\Java Web Start\ : this key is created for Java Web Start.
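Once the PATH variable is set and these registry keys are in place, you can sanity-check which Java installation a new command window actually resolves. The sketch below is illustrative rather than part of the installer documentation: it prints the running JVM's own version properties, then shells out to the standard Windows reg query command to read the CurrentVersion value described above (using the JDK 10 key layout shown in the next section).

```java
// Illustrative check of the active Java installation on Windows.
public class WhichJava {
    public static void main(String[] args) throws Exception {
        // Properties reported by the JVM that PATH resolved to.
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));

        // Read the registry key the installer creates (JDK 10 layout).
        Process p = new ProcessBuilder(
                "reg", "query",
                "HKLM\\SOFTWARE\\JavaSoft\\JDK",
                "/v", "CurrentVersion")
                .inheritIO()
                .start();
        p.waitFor();
    }
}
```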
If there are two versions of the JDK or JRE installed on a system, one with the new version-string format introduced in JDK 10 and the other with the older version format, then there will be two different CurrentVersion registry key values. For example, if JDK 1.8.0 and JDK 10 are installed, then the key "HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Development Kit" is created for JDK 1.8.0 and "HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\JDK" for JDK 10. The registry layout for this example is:

"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\JDK\10"
"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\JDK"
    "@CurrentVersion" = 10
"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Development Kit\1.8"
"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Development Kit\1.8.0"
"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Development Kit"
    "@CurrentVersion" = 1.8

@CurrentVersion is a registry string in the "JDK" or "Java Development Kit" key. For the same example, if the JRE is installed, then the registry layout is:

"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\JRE\10"
"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\JRE"
    "@CurrentVersion" = 10
"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Runtime Environment\1.8"
"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Runtime Environment\1.8.0"
"HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Runtime Environment"
    "@CurrentVersion" = 1.8

@CurrentVersion is a registry string in the "JRE" or "Java Runtime Environment" key.

Use the Java item in the Windows Start menu to access essential Java information and functions, including Help, API documentation, the Java Control Panel, checking for updates, and Java Mission Control.

During JDK installation, Java menu items are added to the Windows Start menu to provide easy access to Java resources, and a Java Development Kit folder is created in the Windows Start menu, which contains the following items:

- Reference Documentation: opens the online API documentation web page.
- Java Mission Control: opens the Java Mission Control profiling and diagnostics tools suite. Java Mission Control is a commercial feature available to users with a Java SE Advanced license.

During JDK installation and uninstallation processes, the appropriate Start menu items are updated so that they are associated with the latest JDK version on the system. Note: Windows 7 and Windows 10 have a Start menu; however, the menu is not available in Windows 8 and Windows 8.1. The JDK and Java information in Windows 8 and Windows 8.1 is available in the following Start directory:

During JRE installation, Java menu items are added to the Windows Start menu to provide easy access to Java resources, and a Java folder is created in the Windows Start menu, which contains the following items:

- About Java: opens the Java Control Panel with focus on the General tab. The tab displays the latest JRE version installed on the system.
- Check for Updates: opens the Java Control Panel with focus on the Update tab.
- Configure Java: opens the Java Control Panel with focus on the General tab.
- Get Help: opens the Java Help Center.
- Visit Java.com: opens the Java Download page.

During JRE installation and uninstallation processes, the appropriate Start menu items are updated so that they are associated with the latest JRE version on the system. Note: Windows 7 and Windows 10 have a Start menu; however, the menu is not available in Windows 8 and Windows 8.1.
Use the Java item in the Windows Start menu to access essential Java information and functions, including Help, API documentation, the Java Control Panel, checking for updates, and Java Mission Control. During JDK installation, Java menu items are added to the Windows Start menu to provide easy access to Java resources, and a Java Development Kit folder is created in the Windows Start menu, which contains the following items:
- Reference Documentation: opens the online API documentation web page.
- Java Mission Control: opens the Java Mission Control profiling and diagnostics tools suite. Java Mission Control is a commercial feature available to users with a Java SE Advanced license.

During the JDK installation and uninstallation processes, the appropriate Start menu items are updated so that they are associated with the latest JDK version on the system. Note: Windows 7 and Windows 10 have a Start menu; however, the menu is not available in Windows 8 and Windows 8.1. The JDK and Java information in Windows 8 and Windows 8.1 is available in the following Start directory:

During JRE installation, Java menu items are added to the Windows Start menu to provide easy access to Java resources, and a Java folder is created in the Windows Start menu, which contains the following items:
- About Java: opens the Java Control Panel with focus on the General tab. The tab displays the latest JRE version installed on the system.
- Check for Updates: opens the Java Control Panel with focus on the Update tab.
- Configure Java: opens the Java Control Panel with focus on the General tab.
- Get Help: opens the Java Help Center.
- Visit Java.com: opens the Java Download page.

During the JRE installation and uninstallation processes, the appropriate Start menu items are updated so that they are associated with the latest JRE version on the system. Note: Windows 7 and Windows 10 have a Start menu; however, the menu is not available in Windows 8 and Windows 8.1. The JRE and Java information in Windows 8 and Windows 8.1 is available in the following Start directory:

Java Web Start is an application-deployment technology that gives you the power to run full-featured applications with a single click from your web browser. With Java Web Start, you can download and run applications, such as a complete spreadsheet program or an internet chat client, without going through complicated installation procedures. You run applications simply by clicking a web page link. If the application is not present on your computer, Java Web Start automatically downloads all necessary files. It then caches the files on your computer so that the application is always ready to be run anytime that you want, either from an icon on your desktop or from the browser link. No matter which method you use to run the application, the most current available version of the application is always presented to you.

Upgrading from Previous Versions
If you have a previous version of Java Web Start, do not uninstall it. Uninstalling it will cause the download cache to be cleared, and all previously installed Java Web Start application data will have to be downloaded again. The new version will write over previous installations and automatically update browsers to use the new version. The configuration files and the program files folder used by Java Web Start have changed, but all your settings will remain intact after the upgrade because Java Web Start will translate your settings to the new form. The only way to uninstall Java Web Start is to uninstall the JDK or JRE. Uninstalling the JDK or JRE will not, however, remove the cache for previous versions of Java Web Start. Previous releases have separate uninstallation instructions for Java Web Start.

You may see a misleading message if you do the following:
1. Download and cache a Java Web Start application with the JDK or JRE.
2. Remove the JDK or JRE using Add or Remove Programs from the Windows Control Panel.
3. Remove the Java Web Start application using Add or Remove Programs.

When you remove the application, you see an Uninstaller Error dialog box saying: An error occurred while trying to remove Java-Application: name App. It may have already been uninstalled. Would you like to remove Java-Application: name App from the Add or Remove program list? If you say Yes to this, then you will see another Uninstaller Error dialog box saying: You do not have sufficient access to remove Java-Application: name App from the Add or Remove Program list. Please contact your system administrator. The message is displayed when you have removed the Java Web Start application while uninstalling the JDK or JRE, but this is not reflected in the Add or Remove Programs list. To avoid the misleading message, refresh the list by pressing F5 or reopen the panel. Any Java Web Start application that was downloaded and cached with the JDK or JRE will no longer appear in the list of currently installed programs.
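To make the launch-and-cache behavior concrete: a Java Web Start application is described by a .jnlp file, and the javaws tool shipped with the JRE can launch one directly from the command line. A small sketch (the URL is a hypothetical placeholder, not a real application):

    rem Launch, download, and cache a Java Web Start application from its JNLP link
    javaws https://example.com/app.jnlp
    rem Open the Java Cache Viewer to inspect applications already cached locally
    javaws -viewer

Launching the same JNLP link again starts the cached copy and checks for a newer version, which is how the most current available version is kept in front of the user.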
Java Plug-in technology, included as part of the JRE, establishes a connection between popular browsers and the Java platform. This connection enables applets on websites to be run within a browser on the desktop. The Java Plug-in is automatically enabled for supported web browsers during installation of the JRE. No user intervention is necessary. In Java SE 10, the version of the Java Plug-in that is available in versions of the JRE prior to Java SE 6 Update 10 has been deprecated. However, this earlier version of the Java Plug-in is still shipped with Java SE 10 for compatibility purposes but is no longer fully supported. It will be removed in a future release.

When the installed JRE falls below the security baseline or passes its built-in expiration date, an additional warning is shown to users to update their installed JRE to the latest version. In businesses that manage the update process centrally, users who attempt to update their JRE individually may cause problems. A deployment property, deployment.expiration.check.enabled, is available that can be used to disable the JRE out-of-date warning. To suppress this specific warning message, add the following entry in the deployment properties file: deployment.expiration.check.enabled=false. To disable automatic updates, on the Update tab of the Java Control Panel, deselect the Check for Updates Automatically check box.

Use either of the following ways to uninstall the JRE:
- Go to the Add/Remove Programs utility in the Microsoft Windows Control Panel and uninstall the older versions of the JRE.
- Remove the JRE using the online Java Uninstall Tool.

The Java Removal Tool is integrated with the JRE installer. After JRE 10 is installed, the Java Removal Tool provides the list of outdated Java versions on the system and helps you to remove them. The Java Uninstall tool will not run if your system administrator specified a deployment rule set in your organization. A deployment rule set enables enterprises to manage their Java desktop environment directly and continue using legacy business applications in an environment of ever-tightening Java applet and Java Web Start application security policies. A deployment rule set enables administrators to specify rules for applets and Java Web Start applications; these rules may specify that a specific JRE version must be used. Consequently, the Java Uninstall tool will not run if it detects a deployment rule set, to ensure that no required JREs are uninstalled. See Deployment Rule Set in the Java Platform, Standard Edition Deployment Guide.

The following sections provide tips for working around problems that are sometimes seen during installation or while following the installation instructions.

System Error During Decompression
If you see the error message system error during decompression, then you might not have enough space on the disk that contains your temporary (TEMP) directory.

Program Cannot Be Run in DOS Mode
If you see the error message This program cannot be run in DOS mode, then do the following:
1. Open the MS-DOS shell or command prompt window.
2. Right-click the title bar.
3. Select the Program tab.
4. Ensure that the item Prevent MS-DOS-based programs from detecting Windows is not selected.
5. Select OK to close the dialog boxes.
6. Exit the MS-DOS shell.
7. Restart your computer.

Source Files in Notepad
In Microsoft Windows, when you create a new file in Microsoft Notepad and then save it for the first time, Notepad usually adds the .txt extension to the file name. Therefore, a file that you name Test.java is actually saved as Test.java.txt. Note that you cannot see the .txt extension unless you turn on the viewing of file extensions (in Microsoft Windows Explorer, deselect Hide file extensions for known file types under Folder Options). To prevent the .txt extension, enclose the file name in quotation marks, such as "Test.java", when entering information in the Save As dialog box.
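A quick way to verify from a Command Prompt what Notepad actually saved (the file names follow the Test.java example above):

    rem List everything that starts with Test; Test.java.txt means the extension was appended
    dir Test*
    rem If needed, rename the file to remove the unwanted extension
    ren Test.java.txt Test.java

Both are standard Windows commands, and this check does not depend on Explorer's hidden-extension display.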
Characters That Are Not Part of the System Code Page
It is possible to name directories using characters that are not part of the system locale's code page. If such a directory is part of the installation path, then generic error 1722 occurs, and installation is not completed. Error 1722 is a Windows Installer error code. It indicates that the installation process has failed. The exact reason for this error is not known at this time. To prevent this problem, ensure that the user and system locales are identical, and that the installation path contains only characters that are part of the system locale's code page. User and system locales can be set in the Regional Options or Regional Settings control panel. The associated bug number is 4895647.

These are frequently asked questions about JDK 10 and JRE 10 online installation and Java updates on Windows computers.

1. I downloaded the installer and it is less than 1 megabyte. Why is it so small?
The Windows Online Installer for the JRE will download more installer files. Using this installer helps users to avoid downloading unnecessary files.

2. I had the Java Control Panel open for Java Update, and the About tab showed the version of the JRE installed on my computer. Then I ran Java Update, and the version of the JRE that the Java Control Panel is showing has not changed. Why is this?
You need to close and restart the Java Control Panel to get the updated Control Panel.

3. Netscape/Mozilla is not working correctly with Java Plug-in. Why?
First, close all the browser sessions. If this does not work, reboot the system and try again.

4. I try to install on the D:\ drive and Java Update is still installing files onto the C:\ drive. Why?
Regardless of whether an alternate target directory was selected, Java Update needs to install some files on the Windows system drive.

5. How can I uninstall the Java Update version that I just installed?
If you want to uninstall the JRE, then use the Add/Remove Programs utility in the Microsoft Windows Control Panel: select the Control Panel and then Add/Remove Programs.

6. After the JRE bootstrap installer is downloaded and executed, why does the message "This installer cannot proceed with the current Internet Connection settings of your system. In your Windows Control Panel, please check Internet Options -> Connections to make sure the settings and proxy information are correct." appear?
The JRE bootstrap installer uses the system Internet Connection settings to connect to the web for downloading extra files. If you are behind a firewall and require proxy settings, then ensure that the proxy settings in Internet Options/Internet Properties are set up properly (select Start, then Control Panel, then Internet Options/Internet Properties, then Connections, and then LAN Settings). If you can browse the external web (for example, outside the firewall) with Internet Explorer, then your proxy settings are properly set up. The installer does not understand the proxy settings specified in Netscape/Mozilla.

7. I found the jusched.exe process running in the background of my system after installing the JRE. Is there a way to shut it down?
jusched.exe is the scheduler process of Java Update. This process runs automatically. To shut it down, open the Java Control Panel and, on the Update tab, deselect the Check for Updates Automatically check box.

8. When I click the Update Now button from the Java Control Panel, it complains about the system being "offline." What does that mean?
Java Update can be run only if the system is connected to the network. A system that is not connected to the network is referred to as being offline. When Update Now is clicked, Java Update checks the online/offline status of your system. If your computer does not have internet access, then the error message is displayed. Check that your system is currently connected to the internet and try again.
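A minimal way to check the online status yourself before retrying (the host name is a placeholder; any site you can normally reach will do):

    rem Send a single echo request to test basic connectivity and name resolution
    ping -n 1 java.com

If the request times out or the name cannot be resolved, the machine is effectively offline for Java Update as well.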
9. I followed the instructions to install a specific version of the JRE. After the installation, a message is displayed from the system tray saying that an update is available for download. What should I do?
The message is part of the Java Auto Update mechanism, which detects at user login time whether a newer version of the JRE is available for download. In the system tray, click the Java Update icon to download and install the update.

10. I encountered the error "This installation package could not be opened. Contact the application vendor to verify that this is a valid Windows Installer package." when running the Java SE installer.
There are several possible reasons for this error to be displayed; a few are listed here:
- The network connection fails.
- Download manager software interrupts the download process.
- Another application, such as an antivirus application, interrupts the installation process.
To address these problems, ensure that third-party downloader applications are turned off and the network connection is configured properly. Also, if a proxy is in use, then ensure that the proxy authentication is turned off.

11. I encountered the error "Error 1722. There is a problem with this Windows installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor."
See Error 1722: Problem with Windows Installer Package. If you encounter any other errors or issues, then you can access the Java Help Center, which contains solutions for issues that you might encounter when downloading and installing Java on your system. In particular, you can search for solutions by error number. Searching for "Error 1722" returns a solution to this issue.
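When reporting an issue or searching the Java Help Center, it usually helps to capture exactly which Java installation is active. A quick sketch, assuming java is on the PATH:

    rem Print the version banner of the java launcher found first on the PATH
    java -version
    rem Show the full path of that launcher
    where java

Note that java -version writes its banner to stderr, so append 2>&1 when redirecting the output to a file.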
The most essential maneuver to be undertaken in the field of law is to develop various administrative and judicial techniques to interpret statutes. Courts are constantly engaged in the endeavor to unfold meanings and expressions and to remove inconsistencies. Interpretation opens the doors for the Court to explore possibilities beyond the words of the legislation or the statute itself. In a way, interpretation of statutes has revolutionised the legal system in our country by constantly ameliorating the laws as circumstances change.

The general rule applied before interpretation of a statute is that, prima facie, the statute must be given its ordinary meaning. But if the meaning of the provisions in the statute is unclear, ambiguous, or cannot be understood on a plain reading, then the tools or aids of interpretation are resorted to. There are various tools or aids that are used to interpret statutes. These aids of interpretation are broadly classified into external aids and internal aids.

Internal aids are the aids that are found within the Act or statute. For instance, the title of an Act, the headings or titles prefixed to the provisions in the Act, punctuation, marginal notes, illustrations, the definition section, or any other tool that is within the Act itself constitutes an internal aid. External aids are the ones that are found outside the Act, i.e., foreign judgments, international treaties, parliamentary history, historical facts, etc. The Supreme Court in B. Prabhakar Rao v. State of Andhra Pradesh observed: "Where internal aids are not forthcoming, we can always have recourse to external aids to discover the object of the legislation. External aids are not ruled out."

"It has to be insisted that the aim and purpose of the legislature in enacting a law or a statute is an important guidepost for the statutory interpretation." This is where the external aids of interpretation enter the picture. "It is important that the external aids be consulted not only for making the choice between various possible meanings of the text itself, but in checking up an apparently plain and explicit meaning, in finding other possible meanings not apparent in the text, and in applying the chosen meaning to the case at hand." While using extrinsic aids it is to be kept in mind that they are to be applied to the Indian facts and circumstances, especially when it comes to the aid of foreign judgments. A blind application of a foreign judicial decision will be detrimental to the purpose of interpretation.

Foreign Judgments As An Aid To Interpretation: Need And Relevance
With the growing interconnectedness of laws and treaties, nations are familiarizing themselves with the international platform by mutually agreeing to cooperate, agree and follow. The text and interpretations of various international instruments like the UDHR, the Geneva Conventions, the European Conventions, etc. are being referred to and borrowed by countries from the ones that are parties to the same. In this process, the judiciary of one country may borrow foreign judicial decisions of other nations or international adjudicatory bodies to understand how they interpret treaties, laws, doctrines, etc. However, there is not much debate when it comes to the use of foreign judgments as a tool for interpretation while referring to international law, but there are reservations when it comes to domestic law.
The scientific, social, philosophical and economic changes that happen in a nation are not always incorporated into legislation. To expect such changes to reflect in the legislation would take a lot of time and often involves a lengthy procedure. Hence, we can say that the connection between social conditions and legal standards isn't always present. But when a problem arrives at the doors of justice, the lack of a clear legal solution is not relevant for the judges. In such a case the judiciary may use the decisions of foreign Courts where such a conflict or dispute was resolved. In today's changing times, very few problems are limited to a single country; a problem arising in one country is likely to have already arisen and been solved in other countries. Foreign decisions may act as guideposts for the judiciary while deciding the case and the direction it is going in.

Trans-judicial communication can be understood as the communication between the judicial organs of different nations and organizations across the globe. Anne Slaughter, an international lawyer, political analyst and political scientist, wrote an article on trans-judicial communication in 1994 where she described three different approaches that a Court can consider in using foreign decisions:
- Vertical means: This approach is used when courts refer to the decisions given by international adjudicatory institutions like the ICC, ICJ, etc., whether or not their countries are in fact parties to the instruments under which those adjudicatory institutions function.
- Horizontal means: The domestic courts use the judicial decisions given by other nations to interpret their own laws. Such borrowing of constitutional cases between nations introduces a new line of thinking.
- Mixed horizontal and vertical means: The domestic courts may cite foreign decisions from other nations with respect to the interpretation of obligations applicable to both jurisdictions under international instruments or law. To understand this more easily, we can say that the judges directly refer to the applicable international obligations and are also free to refer to the decisions of the courts of foreign nations to understand how those nations interpret and implement the obligations created by such instruments.

These are the three means of trans-judicial communication; by examining these three means one can notice and understand how the reference to foreign law is contemplated both in international and national law. More seeds are being sown for trans-judicial communication because of the growing trend of internationalization of legal education. One more reason contributing to this communication is the increasingly easy accessibility of foreign legal material for the judges to refer to.

Significance Of Foreign Judgments As An Aid To Interpretation In India
Over the years, statutory interpretation has clawed its way into the legal consciousness of the Indian judiciary. It is often witnessed that the judiciary in India refers to foreign judgments given by the Courts of other nations to construe the statutes in our country. There is no denying that the major part of law in India has been borrowed from the common law system. Before independence, it was a common practice that the judiciary would borrow the judgments decided in England and apply them in India for interpreting statutes.
But after independence, with the introduction of our Constitution, the Supreme Court started to lean on and gave more access to the precedents set by the American and other Courts in the world. Indian Courts have openly sought guidance from foreign decisions in cases where similar disputes arising before our Courts were already dealt with by foreign Courts. The Indian Constitution draws inspiration from the Constitutions of many nations like the United States, Canada, Australia, etc. When a country's supreme law is inspired by many foreign nations, it is pertinent that the Indian judiciary would look to these nations for guidance with regard to constitutional matters. Ever since the promulgation of the Constitution in the year 1950, the Indian Courts have often depended on the decisions of other common law jurisdictions.

The Indian judiciary in some of its most important landmark judgments used a myriad of foreign decisions to interpret law, introduce doctrines and understand the possibilities of adopting new ideas of approach. The following are some of the most prominent judgments that used and discussed foreign judicial decisions as an important aid of interpretation:

The Puttaswamy judgment
Justice K S Puttaswamy and Others v. Union of India and Others is a historic judgment that reaffirmed the 'right to privacy' as a fundamental constitutional right. The Court in this case held that the 'right to privacy' is an integral part of the fundamental rights guaranteed by the Constitution. The Court even made a comparative analysis of the concept of privacy in other jurisdictions from a comparative law perspective and limited such analysis to the United Kingdom, the United States, South Africa and Canada. It also went on to examine the judicial decisions made by the European Court of Human Rights, the Inter-American Court of Human Rights, etc. This probe was indicative of the fact that the Apex Court wanted to be thorough with the way in which the concept of the right to privacy was pursued in various places across the globe, based on the histories of the societies they govern and the challenges before them.

Some of the important judicial decisions borrowed from the United Kingdom include Semayne's Case, Entick v. Carrington, Prince Albert v. Strange and a plethora of cases dealing with the right to privacy from the 17th century to the current day. From the United States, cases from as early as 1886 to the current day were explored, for instance, Boyd v. United States, Griswold v. Connecticut, United States v. Miller, etc. While studying the right to privacy in South Africa, the Supreme Court of India thought it fit to refer to cases such as National Media Ltd. v. Jooste, where the Court observed that the right to privacy is an individual condition of life; Bernstein v. Bester and Ors, where the Court held that the scope of privacy can be closely associated with the concept of identity; and NM and Ors. v. Smith and Ors., among other cases. Some of the landmark cases referred to from Canada include Her Majesty, The Queen v. Brandon Roy Dyment and R v. Spencer.

This judgment can essentially serve as a comprehensive document that records historical landmark cases from foreign countries, international bodies, doctrines and laws related to privacy. The essence of this decision lies in the fact that the Indian Court was open to referring to foreign decisions and using them to guide the Court in the right direction.
Navtej Singh Johar and Ors. v. Union of India
The Supreme Court through this case decriminalized homosexuality, holding that the LGBTQ community has the same rights as any ordinary citizen and that sexual orientation is a crucial aspect of privacy. In declaring this judgment the Apex Court considered the international perspective on this issue and studied the laws in the United States, Canada, South Africa, the United Kingdom and other Courts and jurisdictions. The Court specifically considered the decisions of the foreign Courts in Law v. Canada, James Egan and John Norris Nesbit v. Her Majesty The Queen in Right of Canada and Anr., Paris Adult Theatre I v. Slaton, A.R. Coeriel v. The Netherlands, etc., where the cases upheld the right to privacy of individuals and reiterated that the choice of one's sexual identity is a very personal matter.

Moreover, in Ashok Kumar Thakur v. Union of India and Others, the Court on record reiterated the importance of foreign decisions for interpretation, and also that the relevance and applicability of such foreign decisions to the facts and circumstances of the domestic case must be kept in mind before applying them. The Honorable Judge in this case stated that, "…the judges in every case must look into the heart of things and regard the facts of every case concretely much as a jury would do; and yet, not quite as a jury, for we are considering here a matter of law and not just one of fact; Do these "laws" (foreign judicial decisions) which have been called in question offend a still greater law before which even they must bow?"

In Forasol v. ONGC, General Electric Company v. Renusagar Power Company and many other landmark cases, the Court considered foreign decisions to have persuasive value and used such decisions as a guiding light while treading in new areas of law or existing ones.

The Use Of Foreign Decisions As An Aid To Interpretation By Different Nations: A Comparative Analysis
Countries like Canada, South Africa, Nicaragua, India, France, Germany, and England and Wales are known to openly and often use foreign decisions as an aid to interpretation. Over the years the constitutional systems in several nations across the globe, especially the ones that follow a common law legal system, have been borrowing foreign doctrines and decisions very often from each other.

A civil law country, Argentina sometimes uses the support of foreign decisions to interpret the domestic laws of the country. Most importantly, such foreign decisions are used or cited to demonstrate how various countries around the globe are dealing with particular problems or issues. The Argentinian Courts have used foreign judgments (mostly from the United States), especially in cases dealing with constitutional matters, as the Constitution of the country was inspired by the United States Constitution. Since the mid-1930s the Supreme Court of Argentina has applied precedents from America as a means of constitutional interpretation. But with regard to civil and commercial matters, it is witnessed that European continental law is referred to by the domestic Courts. A foreign Court's decision can be effectively used in Argentina provided that the following conditions are met:
- The foreign law on which the foreign decision rests must bear a close resemblance to the national law from a statutory point of view.
- The facts of the case in the foreign judicial decision coincide with those of the case or dispute before the domestic judge.
- The concept or idea of justice in the foreign jurisdiction is similar or equivalent to that of the domestic Court.

Once these conditions are fulfilled, foreign law (referring to case law here) can become a valid argument supporting the conclusion being drawn by the domestic Court.

The attitude adopted by Canada with respect to transnational judicial dialogue is that of a constant source of inspiration that reinforces judicial legitimacy. Canada's judicial recourse to foreign law has influenced and helped the country to cultivate a more open and multicultural approach towards the law. The country is known for readily accepting the transfer of legal ideas without inhibitions. Of the 10 provinces in Canada, all except Quebec, which is a civil law jurisdiction, follow the common law. So for a foreign case law to be adopted by more than one province, it must obtain recognition on a province-by-province basis. Until the 1970s the Canadian Courts routinely followed the judgments of the highest Courts in England and Wales. Even to this date, judicial decisions from England and Wales are followed twice as often as those of any other country in the decisions made by the Courts in Canada. The resemblance between the American Bill of Rights and the Canadian Charter has encouraged the Courts in Canada to refer to the decisions taken by the Courts in the United States with regard to matters relating to it. The judges of the Canadian courts have been consistent in showing their interest in American law. Statistics show that Canadian judges have cited American case law forty times more often than American judges have cited Canadian case law. The "next frontier," as it were, for expansion of the enforcement of foreign judgments in Canada probably lies in the penal, revenue and other public law defenses to enforcement.

In Germany the Courts occasionally use foreign decisions to interpret constitutional law, using a comparative method to interpret constitutional principles. The Federal Constitutional Court, i.e. the Court that has exclusive jurisdiction over constitutional matters, also uses foreign decisions to determine or understand the content of international law, especially the developing sphere of human rights. Over the years the Federal Constitutional Court has drawn on judgments or decisions of the Supreme Court of the United States more than those of any other country or jurisdiction.

China follows a civil law model, which means the legal system is primarily sourced from the law and not from case law. The Courts in China rarely cite foreign decisions directly in their judgments compared to other civil law countries like Spain, France, etc., but there does exist a nexus between foreign decisions and how the judges are influenced while making their decisions. The defamation laws of America have played an important role in a few domestic judgments of the Chinese Courts. So it can be said that though the Chinese Courts' use of foreign decisions is not direct, there is undoubtedly an indirect impact, as they have on very rare occasions leaned towards concepts and principles developed in foreign nations.

In France the Courts do not cite the decisions of foreign Courts or academic authorities as a rule. If such a citation is used in the decision arrived at by the Court, it may lead to the decision being legally challenged for annulment.
But references to foreign decisions as an aid to interpretation can be noticed in the material the Courts prepare for a particular case, or in other studies conducted by various prestigious institutions that specialize in comparative law. In a few cases the Courts did cite foreign decisions, breaking the tradition of not citing. The only exception to this is that the Courts are allowed to use the case law of the European Court of Human Rights. However, the judiciary in France keeps itself informed of the growing trends in the judicial systems and changing law in all parts of the world.

Observations And Suggestions
While applying foreign judgments to interpret statutes or legal aspects, the judiciary must make sure that the facts of the judgment being applied are similar or relevant. A blind application of a foreign judicial decision will be detrimental to the purpose of interpretation. The judiciary is responsible for the socio-legal developments of the nation. Hence, it must be very vigilant and aware of the socio-legal developments around the globe and must adopt these changes through the decisions it takes. This is where foreign judicial decisions come into the picture. For instance, the LGBTQ+ momentum around the globe influenced the judiciary to recognize the right of an individual to associate himself or herself with a particular gender under Article 21 of the Constitution, and the Court referred to various foreign decisions while deciding such cases.

The trend of using foreign judgments in the decisions taken by the judiciary in India is followed by the judiciary at the higher levels of the hierarchy, and it can be seen that the lower judiciary comparatively does not indulge much in using such decisions. Though the judiciary at the lower level refers to the judgments passed by the higher judiciary, its own application of such decisions would create an open mind even at the lower level, where the scope to look for different meanings in interpreting will be high.

The socio-historical context of every country is very different from one another. The growing weight of international opinion and the recognition of some rights and legal aspects by nations are enabling the judiciary to engage and exchange the methods applied to solve an issue before the court. This accumulation of wisdom through the system of borrowing judicial decisions to interpret law is one of the best ways to internationalize the legal system. The Indian Courts' openness towards accepting or using foreign judicial decisions while interpreting statutes/law reflects the interconnectedness between the legal systems of different regions. It is important to remember that foreign judgments have influential value and are not obligatory or binding decisions in India; they can act as important guideposts to interpretation.

Over the years the constitutional Courts in the countries that follow a common law legal system, such as India, the United Kingdom and Canada, have become some of the most important promoters of the increasing importance of comparative constitutional law. In these countries reliance on foreign precedents is becoming commonplace in public litigation. This trans-judicial communication among nations is encouraging nations to rely upon such precedents and laws.

- Erra R, 'The Use Of Comparative Law Before The French Administrative Courts' (2004) 156 Brit. Inst. & Comp.
- Judicial Recourse To Foreign Law: A New Source Of Inspiration? (UCL Press)
- Tripathi PK, 'Foreign Precedents And Constitutional Law' (1957) 57 Columbia Law Review
- Balakrishnan KG, 'The Role Of Foreign Precedents In A Country's Legal System' (2010) 22 National Law School of India Review, accessed 12 October 2020
- Koehnen M and Klein A, The Recognition And Enforcement Of Foreign Judgments In Canada (International Bar Association Annual Conference 2010), accessed 12 October 2020
- Leibman B, 'Innovation Through Intimidation: An Empirical Account Of Defamation Litigation In China' (2006) 33 Harvard International Law Journal
- Landis J, 'A Note on "Statutory Interpretation"', 46 Harvard L.R.
- Miller J, 'Judicial Review And Constitutional Stability: A Sociology Of The U.S. Model And Its Collapse In Argentina' (1997) 21 Hastings Int'l & Comp. L. Rev.
- Slaughter A, 'The Typology Of Transjudicial Communication' (1994) 29 U Richmond L. R.
- 'The Impact Of Foreign Law On Domestic Judgments' (Loc.gov, 2010), accessed 12 October 2020
- Tushnet M, 'The Possibilities Of Comparative Constitutional Law' (1999) 108 Yale Law Journal
- Forasol v. Oil and Natural Gas Commission (1984) Supreme Court, AIR
- Justice KS Puttaswamy and Ors v. Union of India (UOI) and Ors (2018) Supreme Court, 9 SCJ (Supreme Court)
- Navtej Singh Johar and Ors. v. Union of India (2018) Supreme Court, AIR 2018 SC 4321
- B. Prabhakar Rao v. State of Andhra Pradesh, 1985 S.C.R. Supl. (2) 573.
- Landis J, A Note on "Statutory Interpretation", 46 Harvard L.R., 881
- De Sloovère F, Extrinsic Aids in the Interpretation of Statutes, 88 University of Pennsylvania L. R., 527, 527-555 (1940).
- Anne Slaughter, The Typology of Transjudicial Communication, 29 U Richmond L. R., 99 (1994).
- Chief Justice K.G. Balakrishnan, The Role of Foreign Precedents in a Country's Legal System, 22 National Law School of India Review, 7, 9 (2010), (last visited Oct 12, 2020)
- Justice K.S. Puttaswamy and Ors. v. Union of India (UOI) and Ors., AIR 2017
- Peter Semayne v. Richard Gresham, 77 ER 194.
- Entick v. Carrington, (1765) 19 St. Tr. 1029. The Court in this case held that, "By the laws of England, every invasion of private property, be it ever so minute, is a trespass."
- Prince Albert v. Strange, (1849) 41 ER 1171
- Boyd v. United States, 116 US 616 (1886). The Supreme Court of the United States laid down principles in this case that state the very essence of constitutional liberty and security: "The principles laid down in this opinion affect the very essence of constitutional liberty and security... they apply to all invasions on the part of the government and its employees of the sanctity of a man's home and the privacies of life. It is not the breaking of his doors and the rummaging of his drawers that constitutes the essence of the offence, but it is the invasion of his indefeasible right of personal security, personal liberty, and private property,-it is the invasion of this sacred right..."
- Griswold v. Connecticut, 381 US 479 (1965). The Court in this case observed that the right to privacy emanated from 'penumbras' of the fundamental constitutional rights and guarantees in the Bill of Rights, which altogether create the zones of privacy.
- United States v. Miller, 425 US 435 (1976).
- National Media Ltd. v. Jooste, 1996 (3) SA 262 (A).
- Bernstein v. Bester and Ors., 1996 (2) SA 751 (CC)
- NM and Ors v. Smith and Ors., 2007 (5) SA 250 (CC). The Court stated that the more intimate the information, the more important it is in fostering privacy, dignity and autonomy that an individual makes the primary decision whether to release the information.
- Her Majesty, the Queen v. Brandon Roy Dyment, (1988) 2 SCR 417. Privacy is at the heart of liberty in a modern state.
- R v. Spencer, (2014) SCC 43.
- Navtej Singh Johar v. Union of India, AIR 2018 SC 4321
- Law v. Canada (Minister of Employment and Immigration), 1999 1 S.C.R.
- James Egan and John Norris Nesbit v. Her Majesty The Queen in Right of Canada and Anr., 2 SCR 513
- A.R. Coeriel and M.A.R. Aurik v. The Netherlands
- Ashok Kumar Thakur v. Union of India and Ors., (2008) INSC 614
- Forasol v. Oil and Natural Gas Commission, AIR 1984 SC 241
- General Electric Company v. Renusagar Power Company, (1987) 4 SCC 137
- The Impact of Foreign Law on Domestic Judgments, Loc.gov, (last visited Oct 12, 2020)
- Patricia Marcel Casal, Recepcion Del Derecho Extranjero Como Argumento: Derecho Comparado (Editorial Belgrano) (1997)
- Judicial Recourse To Foreign Law: A New Source Of Inspiration? (UCL Press)
- Markus Koehnen & Amanda Klein, The Recognition And Enforcement Of Foreign Judgments In Canada (International Bar Association Annual Conference, 2010).
- The Impact of Foreign Law on Domestic Judgments, Loc.gov
- Markus Koehnen & Amanda Klein, The Recognition And Enforcement Of Foreign Judgments In Canada, Saturday 2 October, 2010
- Benjamin L. Leibman, Innovation Through Intimidation: An Empirical Account of Defamation Litigation in China, 33 Harvard International Law Journal, 104 (2006). The Fan Zhiyi Case introduced the concept of 'public person' through its decision. It could be pointed out that the concept of a 'public person' is not included in the Chinese laws, regulations, or legal interpretations. It is interesting to note that this concept evolved under the First Amendment law of the United States. When China used this concept in the Fan Zhiyi Case, the Supreme People's Court (the highest Court in China) provided guidelines to all its lower Courts, thereby making it a judicial precedent binding on similar cases that might arise in the future.
- The Impact of Foreign Law on Domestic Judgments, Loc.gov, available at: (last visited Oct 12, 2020).
- Roger Erra, The Use of Comparative Law Before the French Administrative Courts, 156 Brit. Inst. & Comp. (2004)
- M. Tushnet, The Possibilities of Comparative Constitutional Law, 108 Yale Law Journal, 1225 (1999).
Breast pain is also known as mastalgia. Mastalgia is classified as cyclical or non-cyclical breast pain. Breast pain during menstruation is known as cyclical mastalgia. Non-cyclical breast pain is caused by diseases of the breast, muscles, ribs and sensory nerves. The breast lies over muscle and ribs. Pain caused by muscle and rib diseases underneath the breast is often misdiagnosed as breast pain. This topic covers all diseases that cause shooting pain in the breast and originate in the breast or underneath the breast.

Causes of Breast Pain
- Cyclical mastalgia (breast pain)
- Non-cyclical mastalgia (breast pain)
  - Large breast
  - Breast cyst
  - Breast infection
  - Breast cancer
- Musculoskeletal causes
  - Pectoral muscle spasm
  - Intercostal muscle spasm
  - Rib contusion
  - Rib hairline fracture
  - Rib displaced fracture
- Neuralgic pain
  - Intercostal neuralgia
  - Post-herpetic neuralgia
  - Radicular pain

Cyclical Mastalgia As A Cause of Breast Pain
Most cyclical breast pain is observed during the menstrual cycle.1 Cyclical breast pain is thus caused by hormonal changes frequently seen during menstruation. Cyclical pain is seen during ovulation and continues during the entire period of menstruation. Cyclical breast pain during menstruation is caused by either hormonal changes or side effects of medication. One of the hormonal changes observed in individuals suffering from cyclical breast pain during the menstrual period is less progesterone than estrogen. The other hormonal cause observed to trigger cyclical breast pain is a decreased level of the hormone prolactin. Medications that trigger menstrual pain are oral contraceptives, hormone therapy, psychotropic drugs, and some cardiovascular agents.

Symptoms of Cyclical Mastalgia
- Pain: Intensity of breast pain is mild to moderate. Wearing a tight-fitting bra increases pain intensity. Breast pain is mostly bilateral, but in a few cases pain is unilateral. Pain radiates to the side of the breast as well as the armpit.
- Breast swelling: The breast is big and heavy.

Diagnosis of Cyclical Mastalgia: Cause of Pain in Breast
- Mammography: Mammography is an X-ray examination of the breast. During the examination, the breast is compressed between parallel plates. An X-ray beam is passed from one plate and recorded over the opposite plate. The information collected creates images of the breast. The images show the normal breast and swollen breast glands.
- 3D mammography: 3D mammography is also known as breast tomosynthesis. Tomosynthesis is similar to a CT scan: the multi-angle exposures create 3D images. The study uses minimal X-ray exposure; the radiation dose delivered is a lot less than a CT scan, which limits radiation exposure. The test prevents false positive findings of mammography. Findings like the absence of cyst, abscess, infection and tumor suggest the possible cause of pain is cyclical mastalgia.
- Ultrasound examination: The ultrasound examination of the breast is also known as sonography. This investigational study of the breast is safe and painless. Ultrasound examination of the breast involves transmission of ultrasound waves through the skin into the breast tissue. The ultrasound is transmitted through a small probe that is placed over the breast. The high-frequency sound waves reflect to a transducer that collects the information and converts it into images. The study does not cause radiation effects. The ultrasound image shows the absence of large cystic swelling, abscess and solid tumor.
Treatment of Cyclical Mastalgia: Cause of Breast Pain
a. Diet
- Low-fat diet
- Avoid caffeine
- Consume food that contains vitamin E (nuts, sunflower seeds, spinach and broccoli)
b. Topical Ointment
- Local anesthetics: The local anesthetic lidocaine is mixed in an ointment and used for topical application. The ointment causes numbness of the skin and subcutaneous tissue. The effect helps to relieve pain in the skin, subcutaneous tissue and breast.
- NSAIDs: Research data suggests the ointment most effective to relieve pain contains the anti-inflammatory medication diclofenac.2
c. Pain Medications
- Non-steroidal anti-inflammatory drugs (NSAIDs): NSAIDs are prescribed for pain and inflammation. The most frequently prescribed anti-inflammatory medications are Motrin and naproxen.
- Tylenol: Tylenol helps to relieve mild to moderate pain.
d. Hormonal Treatment
- Birth control pills: Birth control pills adjust the hormonal imbalance between estrogen and progesterone.
- Danazol2: Danazol is a male hormone prescribed for breast pain and endometriosis. Danazol has antigonadotropic and anti-estrogenic activities. Danazol balances the hormonal changes and prevents shooting breast pain.
- Bromocriptine3: Bromocriptine blocks prolactin in the hypothalamus and decreases the secretion of milk. Bromocriptine treatment decreases breast blood supply, which results in less pain.
- Tamoxifen4: Tamoxifen is an estrogen blocker and is prescribed if cyclical mastalgia is caused by increased blood estrogen concentration.

Non-cyclical Mastalgia: Cause of Breast Pain
Non-cyclical breast pain is breast pain observed during both menstrual and non-menstrual periods.5 Non-cyclical breast pain is less common than cyclical breast pain. Non-cyclical breast pain may or may not be restricted only to the breast. Non-cyclical breast pain is caused by diseases of the breast and the tissue that surrounds the breast.1 Diseases of the skin, muscles, ribs and nerves that lie close to the breast cause pain around or within the breast tissue. The causes are:
- Large breast
- Breast cyst
- Breast infection
- Breast cancer

A large breast causes breast pain as the breast drags the muscles and subcutaneous tissue while standing and sitting. Breast pain is caused by the pull and stretch of ligaments as well as tissue underneath the skin.

Symptoms of Large Breast Causing Breast Pain
- Pain: Mild to moderate breast pain is felt mostly in the upper third of the breast, which is in most cases pulled down while standing and sitting. Examination also indicates pain located behind the breast over the ribs and pectoralis major muscle.
- Enlarged breast: Most large-sized breasts are diagnosed during examination. The entire breast is enlarged. The physician performs the examination to rule out cyst and cancer swelling. A cyst is a soft, fluctuating swelling; cancer tissue feels hard during examination.

Diagnosis of Large Breast Causing Breast Pain
- Mammography: A mammography study is performed to rule out breast cyst, abscess and cancer.

Treatment of Large Breast
- Pain medication: Mild to moderate breast pain is treated with non-steroidal anti-inflammatory medication and Tylenol.
- Surgery: Breast reduction surgery is performed to reduce the size of the breast. The pain intensity reduces because of the reduction in breast volume as well as weight.

Breast Cyst Causing Breast Pain
Breast glands are tiny sacs known as alveoli that produce milk. A bundle of several breast alveoli forms a lobule. Lobules are connected to a duct. Ducts open into the nipple and carry milk produced by the alveoli to the nipple. Several lobules and fatty subcutaneous tissue form a breast.
Hormonal changes stimulate the alveoli to secrete milk. Hormonal imbalance causes swelling or cyst formation within the alveoli. A cyst is formed by the increase and enlargement of the alveolar sac. Cysts are solitary (single) or multiple, and the size of a cyst varies from 1 mm to 30 mm (3 cm). A large breast cyst is filled with fluid. Breast cysts are not cancerous. Breast cysts are classified as microcysts and macrocysts.6 A microcyst is 1 mm to 2 mm in size; a macrocyst is larger than 2 mm and grows up to 20 to 30 mm (2 to 3 cm) in size. A microcyst is difficult to feel during examination but is seen in mammography studies. A macrocyst is a large cyst and forms an oval swelling within the breast tissue. Breast cyst swelling is felt by the individual as well as by the physician during breast examination. A cyst feels soft and tender. Most cysts appear between the ages of 35 and 50 years, before menopause.

Diagnosis of Breast Cyst Causing Breast Pain
- Mammography: Mammography, the X-ray examination of the breast described above, shows the cystic swelling as well as any solid tumor.
- 3D mammography: 3D mammography (breast tomosynthesis) creates CT-like 3D images with far less radiation exposure and prevents false positive findings of mammography.
- Ultrasound examination: The ultrasound (sonography) examination described above shows the cystic swelling, abscess and solid tumor without radiation effects.
- Needle biopsy: The needle biopsy procedure is performed in a procedure room under aseptic conditions. The needle placement in the cyst is performed using ultrasound. The fluid inside the cyst is collected and examined under a microscope.

Treatment of Breast Cyst Causing Breast Pain
- No smoking: Smoking causes breast engorgement that frequently results in cystic glands.
- Diet: A low-fat diet is recommended to prevent breast enlargement and cyst formation.
- Topical ointment (NSAIDs): Ointment containing diclofenac helps to relieve the inflammation and pain associated with an enlarged breast cyst.
- Analgesics (pain medications):
  - Non-steroidal anti-inflammatory drugs (NSAIDs): Mild to moderate pain as well as inflammation is treated with non-steroidal anti-inflammatory medications. The NSAIDs most frequently used are Motrin, naproxen and Celebrex.
  - Tylenol: Tylenol is prescribed for moderate to severe pain.
- Antibiotics: A cystic swelling, if ignored, occasionally gets infected. Infection is treated with antibiotics.
- Hormonal treatment: Hormonal treatment like the birth control pill helps to prevent hormonal fluctuation and imbalance. Birth control pills regulate menstruation and prevent cyst formation.
- Needle aspiration: The fluid within the breast cyst is aspirated, which helps to reduce the swelling and pressure over the breast tissue. The needle is placed within the cyst under the guidance of ultrasound. The fluid is aspirated with gentle negative pressure.
- Surgery: A painful large breast cyst is surgically removed. Surgery is performed as outpatient surgery.

Breast Infection As A Cause Of Breast Pain
Breast infection or inflammation is also known as mastitis. Breast infection is common among breastfeeding females. Breast infection is also observed following breast injury.

Types of Breast Infection
Breast infection is classified as follows:
- Central or sub-areolar breast infection: Most often observed in females who are chronic smokers. The first signs of infection are pain, a retracted nipple and foul-smelling discharge from the breast.
- Granulomatous lobular mastitis: The infected firm mass is often mistaken for cancer. The mass is formed when lobular mastitis is inadequately treated with antibiotics.
- Peripheral non-lactating mastitis: The infection is localized in the surrounding adipose tissue that does not contain any milk-producing gland. The infection often follows breast trauma and is more common in patients suffering from diabetes and rheumatoid arthritis.

Symptoms of Breast Infection Causing Breast Pain
- Pain: Breast pain intensity is mild during the initial phase and becomes severe when an abscess is formed. The breast becomes tender during examination.
- Fever: Breast infection causes fever. Temperature fluctuates between 98°F and 102°F.
- Nausea: Pain and fever are associated with nausea.
- Cracked nipples: Infection spreads into the alveoli, and pus discharges through the milk duct into the nipple. The pus spread over the nipple causes cracking of the nipple epithelial tissue.
- Red streaks on the breast: The inflammation increases the diameter of the arteries and veins, since blood flow to the inflamed tissue is increased. The inflamed blood vessels look like red streaks on the breast skin.
- Purulent discharge: Purulent secretion (pus discharge) is observed coming out of the nipples.

Diagnosis (Investigation) of Breast Infection
- Blood examination: The white blood cell count is increased.
- Mammography: The inflamed tissue and abscess are observed on mammography.
- 3D mammography (tomosynthesis): 3D mammography shows inflamed tissue and abscess in 3D images. The 3D images help to locate the exact depth of the abscess and inflammatory tissue.
- Ultrasound: Ultrasound helps to confirm the diagnosis.
- Needle biopsy: Needle biopsy helps to examine the inflammatory breast tissue and the aspirated abscess under a microscope. The bacterial colonies from bacterial culture are examined to find the causative bacteria. Culture colonies are treated with several antibiotics to find the most effective antibiotic to treat the infection. Most breast infection is caused by Staphylococcus aureus. The other bacteria that also cause breast infection are Streptococcus and E. coli.

Treatment of Breast Infection Causing Breast Pain
- No smoking
- Diet: Less fatty food and no alcohol.
- Warm moist compression
- Topical ointment: NSAIDs (diclofenac ointment)
- Antibiotics: Erythromycin, Keflex and dicloxacillin
- Pain medication for breast pain caused by breast infection: NSAIDs (Motrin, Naproxen and Celebrex)
- Hormonal treatment: Birth control pills
- Needle aspiration: A large abscess is drained through a large-diameter needle. The procedure is performed in a surgical center under sedation or local anesthesia.
- Surgery: An open surgical procedure is performed in a surgical center. The procedure is performed under sedation and local anesthesia. Surgery involves a skin incision over the most tender area. The abscess area is marked after reviewing the 3D and ultrasound images.
The abscess area is explored, and the abscess as well as the wall covering the abscess is removed.

Breast Cancer: Cause of Breast Pain
Breast cancer is more common in middle-aged and elderly females than in young females. Breast cancer is considered a familial disease, since breast cancer is found frequently among close relatives. The chances of developing breast cancer are higher if cancer is observed in siblings and the mother. Breast cancer is known to be a genetic disease, and a mutated cancer gene is found to cause the cancer. Breast cancer is triggered by exposure to radiation, and the disease is frequently seen in females who have had radiation treatment.

Types of Breast Cancer
- Angiosarcoma: Cancer growth originates in blood vessels.
- Ductal carcinoma in situ (DCIS): The cancer develops in the epithelial cells of the milk duct. The cancer does not spread outside the duct.
- Invasive lobular carcinoma: Cancer growth begins in the milk gland. The cancer of the epithelial cells of the milk gland spreads to distant organs through blood and lymph. The cancer is known as invasive cancer since cancer cells rapidly spread to distant organs through lymph and blood vessels.
- Inflammatory breast cancer: Cancer growth begins in the breast gland and spreads into the lymphatic vessels. In a few cases lymphatic spread is restricted, since the lymphatic vessels are blocked by breast gland cancer cells. The lymphatic channels and the skin overlying the lymph vessels get inflamed. The entire breast becomes red and inflamed. The disease is thus known as inflammatory breast cancer.
- Paget's disease of the breast: Paget's disease is a rare type of carcinoma. The cancer begins in the nipple and spreads over the circular dark areolar tissue.

Symptoms and Signs Of Breast Cancer
- Pain: Breast pain intensity is mild to moderate during the initial phase. Pain intensity increases in the advanced stage of cancer. Inflammatory breast cancer is very painful. Most breast cancers are tender, and breast pain increases following examination.
- Fever: Fever is observed in individuals suffering from inflammatory breast cancer. Temperature varies between 99°F and 101°F.
- Breast lump: In most cases breast examination indicates a small to medium-sized lump. The breast lump of cancer feels firm to hard. The margins of the tumor are irregular, and the lump is painful.
- Inverted nipple: The nipple looks inverted in individuals suffering from ductal carcinoma in situ and Paget's disease of the breast.
- Skin: The skin shows dimpling in individuals suffering from ductal carcinoma and invasive lobular carcinoma. Inflammatory changes of the skin are observed in individuals suffering from inflammatory breast carcinoma.
- Areolar tissue: The areolar tissue shows peeling, scaling and crusting in patients suffering from ductal carcinoma and Paget's disease of the breast.
- Enlarged lymph nodes: Axillary and sternal lymph nodes are enlarged.
- Purulent discharge: Purulent discharge is observed coming out of the nipple in patients suffering from ductal carcinoma and Paget's disease.

Diagnosis of Breast Cancer
- Blood examination: The white blood cell count is increased in patients suffering from inflammatory breast carcinoma.
- Mammography: Prophylactic and diagnostic mammography. Most females aged over 30 are recommended to get elective breast mammography yearly or every two years. Early detection and removal of breast cancer cures the disease. Mammography helps to diagnose breast cancer in patients suffering from breast pain.
- 3D Mammography (Tomosynthesis)- Tomosynthesis helps to diagnose ductal carcinoma as well as invasive lobular carcinoma. The 3D images help to target the cancer growth.
- Ultrasound- Ultrasound helps to find solid and fluctuant soft masses in the breast. Ultrasound is not used as an investigation to screen for breast cancer.
- Needle Biopsy- Needle biopsy is performed once cancer growth is felt during palpation and confirmed by mammogram. The procedure is performed in a surgical center or a doctor's office. The needle is placed within the tumor mass under 3D mammogram or ultrasound guidance. The tumor tissue is aspirated into a syringe and sent to the lab for further study to diagnose cancer.

Treatment of Breast Pain Caused by Breast Cancer
- Excision of the tumor mass, also known as lumpectomy.
- Excision of the breast, also known as mastectomy.
- Removal of breast and lymph nodes- This procedure involves mastectomy and removal of the lymph nodes that may be involved in breast cancer.
- Radiation Therapy- Treatment involves targeting cancer tissue with high-energy rays so as to kill most cancer cells. Radiation therapy may follow surgery. In a few cases patients are given chemotherapy after radiation treatment.
- Chemotherapy- Chemotherapy is preferred after surgery and radiation treatment to make sure any remaining breast cancer cells are killed. The chemotherapy agents most often preferred to treat breast cancer are Adriamycin, Taxol, Taxotere, Cyclophosphamide, and Paraplatin.
- Hormonal Therapy- Hormonal therapy, like chemotherapy, is used after surgery to prevent growth of breast cancer from residual cancer cells. The hormones are taken for 5 years, if not more. Breast cancer cells have receptors for estrogen, and estrogen stimulates rapid growth and multiplication of cancer cells. The hormones prescribed to prevent breast cancer growth are classified into the following groups-
- Anti-estrogen receptor- Drugs like tamoxifen or toremifene block the estrogen receptors and prevent growth as well as multiplication of cancer cells. Similarly, Faslodex destroys the estrogen receptor and prevents any estrogen effects on cancer cells.
- Prevent estrogen synthesis-
- Aromatase inhibitors- These drugs inhibit synthesis of estrogen.
- Luteinizing hormone-releasing hormone (LHRH) agonists- These inhibit the ovaries from secreting estrogen (ovarian suppression).
- Oophorectomy- Removal of the ovaries prevents secretion of estrogen.
Palestinian society abuses its children

Palestinian society abuses its children, teaching them to hate and to kill themselves in order to kill others.

Under self-rule in the West Bank and Gaza, child sacrifice has turned into a normative part of the socialization process as the phenomenon of suicide bombers has escalated to epidemic proportions.[6] From an early age, children are fed anti-Zionist, anti-Jewish and anti-Western hate propaganda. Mosques, schools, summer camps, and even children's television programs are exploited to encourage children to become martyrs in an act that will bring them respect and parental pride:
- In Hamas-run kindergartens, signs on the walls read: "The children of the kindergarten are the shahids (holy martyrs) of tomorrow."[7]
- A television show called "The Children's Club" shows a young Palestinian, age 9 or 10, proclaiming, "When I wander into Jerusalem, I will become a suicide bomber."[8]
- Palestinian Authority-controlled television[9] broadcasts MTV-style videos for teens that glorify suicide bombing and martyrdom.
- A 6th grade Palestinian textbook, Our Beautiful Language, includes the "Shahid Song" that encourages death in war as a shahid or martyr. Other textbooks carry similar messages.
- At a Palestinian Authority summer camp in 2002, 25,000 children were trained in how to make firebombs, use firearms, and ambush and kidnap targeted enemies.[10]
- An Islamic Jihad summer school massages the libidos of teenage boys by telling them they will "liberate Palestine from the Jews" by becoming martyrs, and promising the boys that they will be greeted by 72 virgins.[11]
- Kindergartens, schools, summer camps, and school sports tournaments (and other institutions) are named after terrorists and young suicide bombers, who are used as pedagogic role models.[12]

One of the most chilling examples of Palestinian role modeling occurred in the case of Aziz Salha, age 20, a participant in the lynching of two Israelis at the Ramallah police station in October 2000.[13] The London Telegraph reported how Salha "choked one of the soldiers while others beat him. When he saw that his hands were covered in blood, he went to the window and showed them to the crowd below." This unforgettable scene, captured by a foreign news crew, is used as the focus for adoration and reenactment[14] in Gaza kindergartens, much as children in an American public school might reenact the signing of the Declaration of Independence.

Throughout the Palestinian territories, walls are plastered with posters of young martyrs who are idolized by Palestinian youth the way other teens worship rock stars. Against such a backdrop, Wajdi Hatab, age 14, told his classmates days before being killed: "When I become a martyr, give out kannafa (traditional cake)."[15]

B'tselem, an Israeli human rights organization that monitors Israeli conduct toward Arabs in the West Bank and Gaza, sharply rebuked the Palestinian leadership for making little effort to keep children away from potentially violent confrontations.[16] Bassam Zakhour, a bereaved Palestinian father, was far more frank. He blamed Palestinian Authority television for enticing his 14-year-old son to run off with two other schoolmates 'to kill Jews.' The trio was chosen by Hamas handlers because their 'innocent' looks would not arouse suspicion. They entered a Jewish settlement with knives and explosives packed in their schoolbags.[17] Indeed, the age of children volunteering for suicide missions is dropping, from men in their 20s to children in their teens and preteens.
At the same time, the scope of violence between the first Intifada and the second has escalated. Where Palestinian children threw rocks in the 1980s, they began throwing firebombs in 2000. In more than three years of guerrilla warfare since 2000, Palestinian leaders have used children in warfare against Israel in other ways as well. Toddlers have served as cover for terrorist activity by hiding munitions in their clothing. Paramedics found an explosive belt with 21 kilograms of explosives hidden under the pad of an ambulance stretcher carrying an ill Palestinian child.[18] The Hezbollah weekly journal reported[19] that children had helped make weapons and ammunition in the Jenin refugee camp, and then clashed with Israeli forces after they were armed with grenades and explosives. In July 2003, two Palestinian assailants posed as a family, accompanied by a female accomplice with a 4-year-old child (her niece). The accomplice and child were used as bait in the knife-point kidnapping of a Jewish cab driver. Later, another child passed through Israeli checkpoints while carrying supplies to the kidnappers.[20]
- February 16, 2002. An 18-year-old boy blew himself up outside a pizzeria in the territories, killing three Israelis and wounding 30.
- March 30, 2002. A 16-year-old girl walked into a Jerusalem supermarket and detonated a bomb concealed under her clothing, killing two Israelis and wounding 22 others.
- April 23, 2002. Three teenagers from Gaza, armed with knives and explosives, were killed attempting to crawl under a perimeter fence to attack residents of a Jewish settlement.
- May 2002. A 16-year-old boy with a suicide belt strapped to his body was arrested in a taxi near Jenin.
- June 13, 2002. A 15-year-old girl was arrested for throwing a firebomb at IDF soldiers. She admitted she was a recruit.
- July 30, 2002. A 17-year-old boy from Beit Jala, an Arab suburb of Jerusalem, became disoriented after being dropped off by his adult handler, blew himself up outside a virtually empty falafel stand in the city, and injured five Israelis.

The milieu that encourages hatred and revenge and glorifies death draws more and more children into violence. On January 11, 2003, two children, ages 8 and 14, who had armed themselves with knives, were apprehended in an Israeli settlement after trying to stab a Jewish passerby.[23]

Are these isolated incidents? A survey of 1,000 Palestinian children between the ages of 9 and 16, conducted by the Islamic University in Gaza, found that 73 percent of the children surveyed wanted to be martyrs.[24]

Countless Palestinian parents support, encourage and praise the sacrifice of their children in suicide bombings and other terrorist attacks. Arab culture holds these child-soldiers in such high regard that parents accept the deaths of their children with pride. A June 2002 public opinion survey conducted by the independent Arab polling institute Jerusalem Media and Communications Center found that 68 percent of Palestinian adults support suicide bombing operations.[25] The father of a 13-year-old says, "I pray that God will choose him [to be a martyr]." The father of another youth who carried out a June 2002 attack outside a Tel Aviv disco declares: "I am very happy and proud of what my son did, and frankly, I'm a bit jealous."[26] Financial incentives to families of suicide bombers also provide parents with reason to acquiesce, especially given the poverty of a majority of Arabs in the West Bank and Gaza, where living standards have plummeted since September 2000.
- The Palestinian Authority pays parents $2,000 for each child killed and $300 for each wounded child.
- Saudi Arabia pledged $250 million as part of a billion-dollar fund established to aid families whose children are killed.
- The Arab Liberation Front, a group loyal to former Iraqi President Saddam Hussein, was paying $10,000 to the parents of each child killed and $25,000 for suicide bombers.[27]

Moral support also comes from other Arab nations. The Saudi ambassador to Great Britain wrote an ode to a 17-year-old female suicide bomber. One of the most frightful messages among those who justify young suicides came from Dr. Adel Sadeq, chairman of the Arab Psychiatrists Association and head of the department of psychiatry at Ein Shams University in Cairo. He wrote an open letter to President George W. Bush entitled "Class Isn't Over Yet, Stupid" that declared:[28] "Don't you understand, stupid, that when a girl of 18 springs blows herself up, this means that her cause is right, and that her people will be victorious sooner or later?"

In an interview on the Egyptian satellite TV channel Iqraa, Dr. Sadeq further clarified: "Our culture is one of sacrifice, loyalty and honor. … Bush was mistaken when he said that the girl was killing the future when she chose to kill herself. On the contrary: She died so that others would live. … When the martyr dies a martyr's death, he attains the height of bliss…. The message to Israel is that we will not cease. … It is very important to convey this message. … The child who threw a stone in 1993 today wraps himself in an explosive belt. … Either we will exist or we will not exist. Either the Israelis or the Palestinians – there is no third option."[29]

Some parents and social organizations do protest the barbaric use of children as warriors, although not necessarily criticizing suicide bombing as a tactic. Unfortunately, they are small voices in the wilderness. Some Arab parents have condemned the use of children as combatants, but their voices are isolated, and they run the risk of being ostracized and vilified. In December 2000, a local group of Palestinian women trade unionists called on the Palestinian Authority to stop using children as cannon fodder: "We don't want to send our sons to the front line, but they are being taken by the Palestinian Authority," said a mother of six from the West Bank city of Tulkarem.[30] A nurse from Gaza who spoke out on television was condemned in the Arabic media as a traitor. Others reveal that they have been threatened by armed Fatah officials for discouraging their children from participating in clashes.[31]

While Palestinian leaders exhort the public into volunteering their children for suicide missions, they make sure their own children are not among the volunteers. Many Palestinian leaders who tell parents that it is their patriotic duty to sacrifice their children[32] have sent their own offspring abroad (as have other Palestinians with the financial means), while others keep their own children under close supervision to ensure their safety.
The past PA Chairman Yasser Arafat, for instance, sent his wife and young daughter to Paris, where they reportedly lived on a generous monthly PA allowance of $100,000. The Palestinians' First Lady Suha endorsed suicide operations: "There would have been no greater honor" than watching her son take his own life for the Palestinian struggle for independence – if only she had a son, the Sorbonne graduate told a London-based Arabic paper.[33]

In October 2000, a London-based Lebanese columnist, Hodo Husseini, condemned the Palestinian leadership in the pan-Arab daily Al Sharq al-Awsat by asking: "What kind of enlightened independence will rise on the blood of the children, while the leaders [and] their [own] children and grandchildren are sheltered?" She and other critics were branded as "too Westernized to understand" in an editorial published in the PA's state-controlled daily Al Hayat al-Jadida.

One of the most poignant protests against turning children into warriors came from Abu Saber, a bereaved father. He wrote to the London Arabic daily Al-Hayat about his eldest son, who had been convinced to become a shahid, and how he learned that his dead son's friends "were starting to wrap themselves like snakes around my other son, not yet 17, to direct him down the same path … to avenge his brother's death." He asked in anguish:[34] "By what right do these leaders send the young people, even young boys in the flower of their youth, to their deaths? Who gave them religious or any other legitimacy to tempt our children and urge them to their deaths?… Why until this very moment haven't we seen one of the sons or daughters of any of these people don an explosive belt and go out to carry out in deed, not in words, what their fathers preach day and night?"

In his letter, Abu Saber cited by name sheikhs and leaders who had sent their sons abroad "the moment the Intifada broke out" – including the son of the past head of Hamas in Gaza, the late Dr. Abdul Al-Rantisi,[35] whose wife, he charged, "has refrained from sending her son Muhammad to blow himself up. Instead, she sent him to Iraq, to complete his studies there."

Protecting our children

Protecting our children is a universal trait that unites the Family of Man. But in Palestinian society, that standard has been turned on its head.

Around the world, children are precious gifts to their parents and keys to the future. The loving care we invest in our own children is a human trait that unites different cultures: rich and poor, traditional and hi-tech. The toughest job parents have is to raise their children while making everyday sacrifices and decisions for them. We hug them, love them and watch them grow up, praying that they will come to no harm, and doing everything we can to ensure that. From the poorest barrios in South America to the most wretched slums of Cairo, parents strive to make sure there is food for their children and money for their children's education. Parents everywhere walk a fine line between the need for parental guidance and youthful independence, setting rules for what their children can and cannot do, trying to ensure that their children will not make mistakes that endanger them. Parents raise their children with the hope that they will grow into happy, responsible, caring, and contributing members of society. That is what unites the Family of Man from Caracas to the Caucasus, from Timbuktu to Katmandu. It is clear that in Palestinian society something has gone dreadfully wrong.
Children in Palestinian communities in the West Bank and Gaza are turned into 'self-destructing human bombs' capable of carrying out mass-casualty terrorist attacks in the struggle between Palestinians and Israelis – a phenomenon whose seeds can be traced to the first Intifada. It happened because Arab communities within the civil jurisdiction of self-rule under the Palestinian Authority (which includes 97 percent of the Arab residents of the West Bank and 100 percent of those in Gaza) foster a culture that prepares children for armed conflict, consciously and purposely putting them in harm's way for political gain and tactical advantage in their war against Israel. The PA buses children to violent flashpoints far from their neighborhoods, and Arab snipers often hide among the young during battle, using children as human shields. Teenaged perpetrators of suicide attacks have become the norm.

In the first Intifada, a similar pattern surfaced, in which women and children led riots while young men in their late teens and early 20s, armed with rocks, slingshots, Molotov cocktails and grenades, operated from the rear. There were thousands of Molotov cocktail attacks, more than 100 hand grenade attacks and more than 500 attacks with guns or explosive devices over the course of the first Intifada. Children in elementary and junior high school were encouraged to stone Israelis using rocks and slingshots, knowing that Israeli soldiers could do little beyond taking the youngsters into custody and fining their parents in the hope they would ground their children. Instead, Palestinian parents sent their children back onto the streets. Some were killed. Others were maimed.

Palestinian society praised the transformation of its children into combatants during the first Intifada, dubbing them fondly "the children of the rocks." Mahmud Darwish, the Palestinian national poet laureate, wrote a poem after the outbreak of the first Intifada which sanctioned and sanctified their deaths, praising "Arab youth on the road to victory, each with a coffin on his shoulder." The poem eventually was set to music, encouraging countless Palestinian children to endanger themselves as a form of socially condoned conduct that would bring them fame and prestige should they be hurt. This nihilistic bent took an even more destructive path in the second Intifada, as the 'weapons of choice' moved from rocks to explosives and the role of the children moved from reckless, life-threatening behavior to conscious, premeditated suicidal acts.

Clearly horrified by the use of children in armed conflict, Israeli author and peace advocate Aharon Megged wrote during the first Intifada: "Not since the Children's Crusade in 1212 … has there been a horror such as this – no people, no land where adults send children age 8-9 or 14-15 to the front, day-after-day, while they themselves hide in their houses or go out to work far-far away. They continue, and send them time-after-time, and don't stop them even when they know they are liable to be killed, maimed, beaten or arrested."

But the use of children to fight grownup battles, which germinated in the first Intifada in 1987, has run the full course – not only teaching and training children to kill, a crime shared by those behind an estimated 300,000 child soldiers around the world, but indoctrinating their own offspring to take their own lives.

Palestinians Kill Their Children

Palestinians are killing their children because they make effective delivery systems for killing Israelis.
They also sacrifice them because wounded or dead children paint Israelis as heartless and cruel in the eyes of the world and of the Israelis themselves.

Five months into the first Intifada in 1988, a Palestinian leader told an Israeli reporter: "We will make you cruel." He said the use of women and children on the front lines, leading violent riots, would make Israelis look bad in the eyes of the world and make the Israelis hate themselves, because Israel is morally sensitive.

In the first Intifada, the strategy of sending children into battle worked on both fronts: it produced painful headlines and anguished Israelis, leading to negative coverage of Israel abroad, including articles by American Jews who worried that Israel was losing its soul. The feeling of having been 'tainted' was reflected in a letter sent by an Israeli medic in the reserves to MK Haim Oron, writing that while his unit's behavior was devoid of any case where "soldiers or officers stepped out of bounds," the unpleasant task of apprehending rock-throwing youth was unbearable: "But now the Palestinians hate me and I hate myself. So what the hell do I do?"

While the mobilization of children on the front lines did not have the effect Palestinians ultimately sought – a unilateral Israeli withdrawal without peace – Palestinians did note the success the strategy had in demonizing Israel in the eyes of the world and of the Israelis themselves. This so-called success encouraged Palestinians to enlarge the role of their children by using them as human shields, direct combatants and suicide bombers, and by glorifying, rather than mourning, their deaths. As long as the deaths of children serve the Palestinian cause, Palestinian leaders will continue to employ this strategy. If deploying Palestinian children as combatants and targeting Israeli children is to halt, the world community must take a clear moral stand.

The death of Arab children on the front lines – extolled as shahids or martyrs – has become a cynical weapon in the arsenal of Arab leaders. They have learned that when their children are killed, they gain world sympathy, especially in Europe and North America – where the death of any child is viewed as a tragedy and portrayed as such in the media, regardless of circumstance.

In January 1990, at the close of the second year of the first Intifada, an Israeli journalist wrote of the sacrifice of Palestinian children and what seems to fuel it: "The numbers are horrendous. However these child victims of the Intifada are not targets. They are weapons. Few … in the West stop to ask – Who sends children to the front with coffins on their shoulders and potentially lethal projectiles in their hand? … The Intifada is unconventional warfare, using women and children as weapons, because it is a psychological war … [for] the hearts and minds of world opinion … to erode traditional support of Israel by the diaspora … to victimize Israelis by manipulating moral sensibilities inherent in Jewish ethics and Western society to undermine motivation and paralyze the Israeli body politic by systematic de-legitimization of our self-image … The only way to break this brutal and vicious circle and put an end to Palestinian moral-mental blackmail is to get to the source and recognize that the youthful victims and their elder victimizers hail from the same camp."

Not much has changed since then, except that the Palestinians' exploitation of children has reached new heights. Their 1988 threat to Israel – "We will make you cruel" – hangs in the air.
With tips about sometimes 20 or more planned terrorist attacks in their final stage of execution arriving every day, Israelis are forced, against their will and against their humanitarian instincts, to take extreme measures to protect their own children from these onslaughts. Perversely, Israel is condemned for protecting herself from these lethal 'children.' To add insult to injury, the hapless victims are often not mentioned by name in the world press – not even in short obituaries – while the young perpetrators are the focus of compassionate coverage, with long, empathetic profiles like the one about the suicide bomber in The New York Times Sunday Magazine. It described the killer as a person who "raised doves and adored children."

A 2002 Washington Post editorial headlined "Death Wish," following a conference in which 57 Islamic nations rejected the idea that Palestinian 'resistance' to Israel had anything to do with terrorism, said: "In effect, the Islamic conference sanctioned not only terrorism but also suicide as a legitimate political instrument…. It is hard to imagine any other grouping in the world's nations that could reach such a self-destructive and morally repugnant conclusion." The Post castigated Muslim states and suggested their behavior was liable to be the seeds of their own destruction. It concluded: "The Palestinian national cause will never recover – nor should it – until its leadership is willing to break definitively with the bombers."

Article 38 of the United Nations Convention on the Rights of the Child (adopted in 1989) condemns the recruitment and involvement of children in hostilities and armed conflicts. In 2000, the UN General Assembly adopted a treaty that raises the age limit for compulsory recruitment and participation in combat to age 18. Article 36 of the same UN document calls on states to protect children against any kind of exploitation. United Nations Under-Secretary-General Olara Otunnu condemned terrorist groups' use of children as human shields, gunmen and suicide bombers. At a UN Security Council debate on January 14, 2003, devoted to measures to protect children in armed conflict, he said: "We have witnessed child victims at both ends of these acts: Children have been used as suicide bombers and children have been killed by suicide bombings. Nothing can justify this. I call on the Palestinian authorities to do everything within their powers to stop all participation by children in this conflict."

The UN could do much more. Although the United Nations Relief and Works Agency (UNRWA) funds nearly all PA-controlled schools in the West Bank and Gaza, UNRWA rejects criticism that it allows Palestinian pedagogues and educators to propagate hatred of Israel and identification with suicidal martyrdom, saying UNRWA has no mandate to set curricula or means to control terrorist activity within its camps.

When Arab children are killed or injured, it makes headlines in Western media reports. But rather than investigate who is behind the participation of children in armed confrontation, Western journalists tend to report what they see on the streets.

Palestinian Child Abuse
1: Palestinian society abuses its children: http://www.mythsandfacts.org/Conflict/9/childrendyingtokill1.htm
2: Protecting our children: http://www.mythsandfacts.org/Conflict/9/childrendyingtokill1.htm
3: Palestinians Kill Their Children: http://www.mythsandfacts.org/Conflict/9/childrendyingtokill1.htm
4: International law: http://www.mythsandfacts.org/Conflict/9/childrendyingtokill1.htm
What is Self-Injury?

Self-Injury (SI) (also called self-harm, self-inflicted violence, or non-suicidal self-injury) is the act of deliberately harming one's own body, such as by cutting or burning, in a way that is not meant as a suicidal act. Self-injury is an unhealthy way to cope with emotional pain, anger, and frustration.

Self-harm is the deliberate infliction of damage to your own body and includes cutting, burning, and other forms of injury. While cutting can look like attempted suicide, it's often not; most people who injure themselves do it as a way to regulate mood. People who hurt themselves may be motivated by a need to distract themselves from inner turmoil or to quickly release anxiety that builds due to an inability to express intense emotions.

Self-harm or self-injury means hurting yourself on purpose. One common method is cutting yourself with a knife, but any act in which someone deliberately hurts himself or herself is classified as self-harm. Some people feel an impulse to burn themselves, pull out hair, or pick at wounds to prevent healing. Extreme injuries can result in broken bones.

Hurting yourself—or thinking about hurting yourself—is a sign of emotional distress. These uncomfortable emotions may grow more intense if a person continues to use self-harm as a coping mechanism. Learning other ways to tolerate the mental pain will make you stronger in the long term.

Self-harm also causes feelings of shame. The scars caused by frequent cutting or burning can be permanent. Drinking alcohol or doing drugs while hurting yourself increases the risk of a more severe injury than intended. And it takes time and energy away from other things you value. Skipping classes to change bandages or avoiding social occasions to prevent people from seeing your scars is a sign that your habit is negatively affecting work and relationships.

The most common type of self-injury is skin-cutting, but self-harm refers to a wide range of behaviors, including burning, scratching, trichotillomania, poisoning, and other types of injurious behaviors. There is a complex relationship between self-injury, which is not a suicidal act, and suicide. Self-harming behavior may be potentially life-threatening, and there is also a higher risk of suicide in those who self-injure. The DSM-IV lists self-injury as a symptom of Borderline Personality Disorder; however, people who suffer from depression, stress, anxiety, self-loathing, eating disorders, substance abuse, other personality disorders, and perfectionism may also engage in self-injurious behavior.

Self-harm is most common in the adolescent and teen years, usually beginning between the ages of 12 and 24; however, self-injury is not limited to the teen years. Self-injury can start at any age. It's estimated that two million people from all races and backgrounds in the US injure themselves in some way. Young women are more likely than young men to engage in self-injurious behavior.

Why Do People Self-Injure?

Self-harm is not a mental illness, but a behavior that indicates a lack of coping skills. Several illnesses are associated with it, including borderline personality disorder, depression, eating disorders, anxiety, and post-traumatic stress disorder. Self-harm occurs most often during the teenage and young adult years, though it can also happen later in life. Those at the most risk are people who have experienced trauma, neglect or abuse. For instance, if a person grew up in an unstable family, self-harm might have become a coping mechanism.
If a person binge drinks or does drugs, he is also at greater risk of self-injury, because alcohol and drugs lower self-control.

The urge to hurt yourself may start with overwhelming anger, frustration or pain. When a person is not sure how to deal with emotions, or learned as a child to hide emotions, self-harm may feel like a release. Sometimes, injuring yourself stimulates the body's endorphins, or pain-killing hormones, raising your mood. Or if a person doesn't feel many emotions, he might cause himself pain in order to feel something "real" to replace emotional numbness.

Once a person injures herself, she may experience shame and guilt. If the shame leads to intense negative feelings, that person may hurt herself again. The behavior can thus become a dangerous cycle and a long-time habit. Some people even create rituals around it.

Self-harm isn't the same as attempting suicide. However, it is a symptom of emotional pain that should be taken seriously. If someone is hurting herself, she may be at an increased risk of feeling suicidal. It's important to find treatment for the underlying emotions.

There's no single cause that leads to self-injurious behavior. The mixture of emotions that triggers one to self-injure is complex. Generally, self-injury is the result of an inability to cope with deep psychological pain. Physical pain distracts the sufferer from painful emotions or helps the person who self-injures to feel a sense of control over an otherwise uncontrollable situation. Emotional emptiness – feeling empty inside – may lead to self-harm, as it allows the sufferer to feel something – anything. It's an external way to express inner turmoil. Self-injury can also be a way to punish the self for perceived faults.

Risk Factors for Self-Injury:

There are certain factors that may increase the risk for self-injury. These include:
- Most people who self-injure begin as teenagers, and self-injury tends to escalate over the years.
- Having friends who self-injure, which increases the likelihood that someone will begin to self-injure.
- Having gone through sexual, emotional, or physical child abuse or neglect.
- Drug or alcohol use – many of those who self-injure do so under the influence of drugs and/or alcohol.
- Being overly self-critical, lacking impulse control, or having poor problem-solving skills.
- Mental illnesses such as depression, borderline personality disorder, anxiety problems, PTSD, eating disorders, and drug or alcohol abuse.

Common Traits And Signs of Self-Injurers:

While cutting and self-harming occur most frequently in adolescents and young adults, they can happen at any age. Because clothing can hide physical injuries, and inner turmoil can be covered up by a seemingly calm disposition, self-injury in a friend or family member can be hard to detect. In any situation, you don't have to be sure that you know what's going on in order to reach out to someone you're worried about. Of course, not everyone who self-injures will display all of the following characteristics. Some may identify with one or two; some may identify with none at all. Here are some common characteristics of those who self-injure, and red flags you can look for:

Blood stains on clothing, towels, or bedding; blood-soaked tissues.

Childhood trauma or significant parenting deficits. Many adapt to the trauma by developing unhealthy fantasies about being rescued from their grief.

Difficulties in impulse control, such as eating disorders or drug abuse.

Covering up.
A person who self-injures may insist on wearing long sleeves or long pants, even in hot weather.

Engaging in magical thinking: believing that physical wounds make you immune to other, greater harm.

Fear of changes – everyday changes or any kind of new experience involving people, places, and things. This may include the fear of getting well or of stopping the self-injurious behavior.

Feeling undeserving of proper self-care. Many people who self-injure ignore their own needs, like a good diet, enough sleep, and exercise. They may be apathetic about their appearance or feel undeserving of such care.

Frequent "accidents." Someone who self-harms may claim to be clumsy or have many mishaps, in order to explain away injuries.

Growing up in an environment where intense emotions weren't allowed.

History of childhood illness, or severe illness and/or disability in a close family member.

Isolation and irritability. Your loved one is experiencing a great deal of inner pain—as well as guilt at how they're trying to cope with it. This can cause them to withdraw and isolate themselves.

Limited social support network, due to shame over self-harm or because they have poor social skills. These poor social skills may include being hypersensitive and unable to tune into the needs of others.

Low self-esteem coupled with a powerful need for love and acceptance by others. They may adopt an unhealthy caretaking role or take on too much responsibility for what happens in a relationship.

Needing to be alone for long periods of time, especially in the bedroom or bathroom.

Sharp objects or cutting instruments, such as razors, knives, needles, glass shards, or bottle caps, in the person's belongings.

Unexplained wounds or scars from cuts, bruises, or burns, usually on the wrists, arms, thighs, or chest.

What Are Some Forms of Self-Injury?

While self-injury may take on many different forms, most people who self-injure stab or cut their skin with a sharp object. However, the types of self-injury are limited only by the individual's inventiveness and determination to harm themselves.

Self-harm is a way of expressing and dealing with deep distress and emotional pain. It includes anything you do to intentionally injure yourself. Some of the more common ways include:
- Cutting or severely scratching your skin
- Burning or scalding yourself
- Hitting yourself or banging your head
- Punching things or throwing your body against walls and hard objects
- Sticking objects into your skin or piercing your skin with sharp objects
- Intentionally preventing wounds from healing
- Swallowing poisonous substances or inappropriate objects
- Carving words or symbols on your skin
- Pulling out hair

Self-harm can also include less obvious ways of hurting yourself or putting yourself in danger, such as driving recklessly, binge drinking, taking too many drugs, and having unsafe sex.

Regardless of how you self-harm, injuring yourself is often the only way you know how to:
- Cope with feelings like sadness, self-loathing, emptiness, guilt, and rage
- Express feelings you can't put into words or release the pain and tension you feel inside
- Feel in control, relieve guilt, or punish yourself
- Distract yourself from overwhelming emotions or difficult life circumstances
- Make yourself feel alive, or simply feel something, instead of feeling numb

How Do I Know if I Self-Injure?

Cutting is not the only way that someone can self-injure.
Picking scabs compulsively, pulling out hair, burning, punching, hitting your head against the wall, and many other methods are considered self-injury. Sometimes, people drink harmful substances like bleach or detergent when they are self-injuring. If you use one of these methods or a similar method, especially when in emotional conflict, you likely self-injure. You don't have to require stitches or a trip to the emergency room to self-injure. Even if you think it isn't "bad enough," it is. Help is out there, regardless of your situation.

Does Self-Harm Help?

It's important to note that those who self-injure do so for many reasons – and self-injury often seems to soothe these issues. Understanding the reasons that one self-injures can help to find ways to stop the self-harming.

Emotional Reasons for Self-Injuring:
- Self-soothing to calm intense emotions
- Punishing yourself or expressing self-loathing
- Exerting control over your own body
- Expressing things that cannot be put into words
- Distraction from emotional pain
- Regulating strong emotions

Okay, So If Self-Harm Helps, Why Bother Stopping?

The relief that comes from cutting or self-harming is only temporary and creates far more problems than it solves. Relief from cutting or self-harm is short-lived and is quickly followed by other feelings, like shame and guilt. Meanwhile, it keeps you from learning more effective strategies for feeling better.

Keeping the secret of self-harm is difficult and lonely. Maybe you feel ashamed, or maybe you just think that no one would understand. But hiding who you are and what you feel is a heavy burden. Ultimately, the secrecy and guilt affect your relationships with friends and family members and how you feel about yourself.

Self-harm may provide temporary relief from the turbulence inside, but it comes at a steep price. In the long run, self-injury causes more problems than it solves. It makes it almost impossible to learn healthy coping mechanisms. You can hurt yourself badly, even if you don't mean to. It's easy to end up with an infected wound or to misjudge the depth of a cut, especially if you're also using drugs or alcohol. You're at risk for bigger problems down the line. If you don't learn other ways to deal with emotional pain, you increase your risk of major depression, drug and alcohol addiction, and suicide.

Self-harm can become addictive. It may start off as an impulse or something you do to feel more in control, but soon it feels like the cutting or self-harming is controlling you. It often turns into a compulsive behavior that seems impossible to stop. The bottom line is that cutting and self-harm won't help you with the issues that made you want to hurt yourself in the first place. No matter how lonely, worthless, or trapped you may be feeling right now, there are many other, more effective ways to overcome the underlying issues that drive your self-harm.

What Self-Injury Is Not:

There exist many myths surrounding self-injury. We're here to try and dispel some of these commonly held but wrong beliefs about self-injury.
- Self-injury is not suicidal behavior. While people do occasionally die from self-injurious behavior, it is by accident. Generally, those who self-injure are not suicidal.
- Self-harm is not a cry for attention. While many people – family, friends, even doctors – may believe that self-injury is attention-seeking behavior, those who self-harm generally try to hide what they are doing because they are ashamed.
- People who self-injure are not crazy.
Those who self-injure are trying to deal with trauma, not mental illness. These people are simply trying to cope the only way they know how.

What Do I Do If I Am Self-Injuring?

Acknowledge the problem. You are probably hurting on the inside, which is why you self-injure.

Talk to someone you trust. It could be anyone: a doctor, a counselor, a friend, a parent. Just confide in them.

Identify your self-injury triggers. If you know what your triggers are, you can learn to avoid or address them.

Recognize that self-injury is an attempt to soothe yourself. Develop better, healthier ways to calm and self-soothe.

Figure out what function self-injury is serving. Replace self-injury by expressing your emotions in healthy ways.

Treatment for Self-Injury:

There is no gold standard of treatment for self-injury; rather, treatment is tailored to the specific reasons behind the self-injury and to treating any underlying psychological conditions. Successful treatment for self-injury is possible but may take time and work to learn more appropriate coping mechanisms. The help and support of a trained professional can help you work to overcome the cutting or self-harming habit, so consider talking to a therapist. A therapist can help you develop new coping techniques and strategies to stop self-harming, while also helping you get to the root of why you hurt yourself.

Remember, self-harm doesn't occur in a vacuum. It exists in real life. It's an outward expression of inner pain, pain that often has its roots in early life. There is often a connection between self-harm and childhood trauma. Self-harm may be your way of coping with feelings related to past abuse, flashbacks, negative feelings about your body, or other traumatic memories, even if you're not consciously aware of the connection.

Treatment options include:

Therapy (also known as "talk therapy") can help identify and manage underlying issues that trigger self-injury. Therapy can help build skills to tolerate stress, regulate emotions, boost self-image, improve relationships, and strengthen problem-solving skills.
- Finding the right therapist may take some time. It's very important that the therapist you choose has experience treating both trauma and self-injury. But the quality of the relationship with your therapist is equally important. Trust your instincts. Your therapist should be someone who accepts self-harm without condoning it, and who is willing to help you work toward stopping it at your own pace. You should feel at ease, even while talking through your most personal issues.

Medications. While there are no medications that specifically treat self-injury, doctors often prescribe antidepressants or other medications to treat any underlying mental illnesses. Treatment of those conditions may lessen the desire to self-injure.

Hospitalization. If injury is severe or repeated, in-patient hospitalization may be necessary to provide a safe environment and intensive treatment to get through a crisis.

What Do I Do if a Friend is Self-Injuring?
- Talk to this person privately about your suspicions about their self-injury.
- Be supportive of your friend, and don't tell them to just "get over it" or that they're "doing it for attention." This is a very real and serious problem.
- If you believe that your friend is in danger, or that he or she has a plan for suicide, notify your parents, a teacher, a pastor, or any other trusted adult immediately. This is not your fault, and it is not on your shoulders to fix it.
- If you offer to listen to your friend, be prepared that their feelings might be overwhelming. You may not understand, and you might want to talk them out of it. You might want to make them stop, or to threaten to withhold your friendship or caring if they don't. Please don't. This will only add to the shame they already feel.
- Respect the fact that a self-injurer can only stop when he or she is ready. Stopping for anyone but themselves will not work.
- Validate their feelings: "I understand how tough of a time this is for you."
- Do not judge his or her experiences with self-injury or reasons for it.
- Offer specific forms of help, like finding a counselor.
- Make sure that your friend knows that you do not think he or she is a bad person for self-injuring. It is a coping mechanism like any other, and while it's hard to understand, your friend is doing his or her best to stay alive.

National Suicide Prevention Hotline: 1-800-273-TALK (8255)
National Self-Injury Helpline: 1-800-DONT-CUT (366-8288)
24-hour Crisis Hotline: 1-800-273-TALK
Self-Injury Foundation: 1-800-334-HELP

Additional Resources for Self-Injury:

S.A.F.E. Alternatives: a program that offers resources, referrals for therapists, and tips on how to end self-injury.

Adolescent Self Injury Foundation: an organization that works to raise awareness about adolescent self-injury and provides education, prevention tips, and resources for self-injurious adolescents and their families.

Self-Injury Support: a charity group that provides referrals and support for patients in the UK.

Page last audited 8/2018
Transcript from the "Composition Introduction" Lesson

>> Kyle: All right, our next discussion point, our next unit to discuss is composition [COUGH]. I think composition really deserves its place as one of the core foundational principles of all of functional programming. And the reason for that, I'll take a step back and say, I alluded to earlier when we were talking about why functional programming, what is the purpose. [00:00:22] The purpose in my mind for functional programming, beyond just the fact that we wanna improve the readability of our code. The real purpose is to declaratively state in our code what the path of data transformation is. Any piece of data, as it goes through the different steps, we want to make that path, and the steps that are gonna be performed along that path, as obvious as possible. [00:00:49] And composition is really at the heart of that. Because composition fundamentally is gonna say, how do I take a function, give it some input, whatever its output is, make that be the input to another function. And whatever that output is, the input to another. So you can see there's a flow of data: into a function, out of a function, into a function, out of a function. [00:01:09] I wanna give you a running metaphor that we'll use to describe composition [COUGH]. And I'll set it up first with code, and then we'll get into that metaphor. So here I have a function called sum which takes an x and a y, and a mult that takes x and y. [00:01:25] And they do their obvious respective mathematical work. But I want you to focus on the bottom lines 9, 10 and 11. >> Kyle: [COUGH] Specifically, on line 10 we are calling mult with that 3 and 4 value, getting an intermediary result. And then taking that intermediary result with some other input and passing that into the sum function. [00:01:45] Obviously, this is a foo bar baz kind of an example, because who would ever use a sum and a mult function? But what I want you to think in your brain is not the foo bar baz thing, but think to your application, that this would be doing some reasonable, mildly complex set of computations. [00:02:03] Say, for example, calculating the international shipping rate for some product. And there might be multiple steps involved in the calculation for that shipping rate. And so we see those multiple steps here: we see that mult is the first step of calculating the international shipping rate. And then we take that and we give it some more input, and then we end up, eventually, calculating that 17 represents the international shipping rate for whatever product. [00:02:34] The takeaway here is that that code, lines 10 and 11, is highly imperative. That code is telling us how to calculate the international shipping rate. It doesn't tell us that the international shipping rate is calculated. So when we look at it we have to figure that part out. [00:02:52] And you could put a code comment there. But that's just a poor way of saying, this is not that declarative. It's much more imperative in its nature.
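The slide code isn't reproduced in the transcript. Based on the description, a minimal JavaScript sketch of it might look like this (the exact line layout and the names temp and shippingRate are assumptions):

```js
function sum(x, y) {
  return x + y;
}

function mult(x, y) {
  return x * y;
}

// the imperative steps being described: an intermediary result
// from mult(..) is fed, along with more input, into sum(..)
var temp = mult(3, 4);            // 12
var shippingRate = sum(temp, 5);  // 17
```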
>> Kyle: So metaphorically, I want you to imagine that we are the engineers in a candy factory. And we have created this assembly line of, >> Kyle: Steps that we go through, from pouring in the melted chocolate here on the front end of this, which then cools it into these chunks of chocolate. And then those chunks of chocolate go into the next machine in the middle, which slices them up into small pieces of chocolate. [00:03:35] And the small pieces of chocolate go into another machine that wraps them up in candy wrappers to drop out the back. So there's three steps that are being performed, and what you should see here visually is that the output of one machine becomes the input to another machine. [00:03:51] And they just rumble along the conveyor belts to do that. >> Kyle: So the manager of this candy factory comes to us and says, well, that's all well and good. I appreciate the fact that you've got the machines working, but it takes up an awful lot of space on the factory floor. [00:04:10] And we would like to make more candy. We'd like to have more machines running so we can spit more candy out quicker. So could you, engineer, go and figure out another way to orient these things so that we're not taking up so much space? It seems like those conveyor belts are awfully wasteful in terms of space. [00:04:29] So you're the engineer of the candy factory. You begin to think, what could I do that could make that take up less space? >> Kyle: Back to the code, we could observe the same thing. That intermediary step that we calculated, the multiplication of 3 and 4, instead of assigning that to a variable and taking up space in our code visually, [00:04:52] we could just simply take the output of that function call and make it directly one of the inputs to another function call. No need for that intermediary variable. In the same way, the engineer at the candy factory might say, well, I know what we could do. We could just stack those machines on top of each other. [00:05:13] No need for the conveyor belt. Literally just stack them on top of each other and let the chocolate, poured in at the top, cool as it drops. It drops right into the machine that slices it, which then drops right into the machine that wraps up the candies. No need for the conveyor belt. [00:05:28] So now we have a bunch more of these on the factory floor, and we can get more work done. So you tell your boss, that's our solution. And the boss is like, great, good solution, thanks very much. Now, a few months later, the boss at the candy factory is like, this is all well and good, but we are getting some complaints from the factory workers because these machines are all separate. [00:05:57] And they all have individual on and off buttons, and the wires are all hanging out all over the place. It's not a very nice, clean way for us to work in our candy factory. Could you just make one machine that does all three of these steps? I don't care what happens on the inside, but they just want one machine, one nice clean box where they press the on button, pour in the chocolate, and out spits the wrapped candies. [00:06:25] So you could say to yourself, what I'm gonna do, I'm gonna make a calculate international shipping rate function. And I will pass in those inputs, and under the covers, on line 10, we're gonna do all of that work. We're gonna do the multiplication and the summing, and we'll just spit out the output. [00:06:41] So now down on line 14, that code is a bit more declarative, cuz that code is saying, calculate international shipping rate. And on line 14, we don't really care how it does what it does, we just care that it calculates the rate. On line 10, we can see how it does that work, if that's something that's interesting to us.
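As a sketch (the actual slide isn't shown, so the function name calcShippingRate is an assumption), the abstraction being described might look like:

```js
// the "how", hidden behind a function boundary;
// assumes sum(..) and mult(..) from the earlier sketch
function calcShippingRate(x, y, z) {
  return sum(mult(x, y), z);
}

// the "what": reads declaratively at the call site
var shippingRate = calcShippingRate(3, 4, 5);  // 17
```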
>> Kyle: This is abstraction, but I wanna point out, back to our earlier discussion at the beginning of the course about abstraction, I wanna point out that the purpose of me putting that in a function is not what most people would think, which is, I wanna write DRY code. [00:07:21] We've heard of DRY, D-R-Y, don't repeat yourself. Many people think, well, that's why you stick it in that function, that way I can call that as many times as I need to. Even if there was only one call to this code, we would still wanna put it in this function. [00:07:36] So our motivation should not be, I wanna write DRY code. As a matter of fact, I have oftentimes run across cases where in functional programming you want to repeat yourself. So it's not necessarily the case that reducing repetition is our motivation. The real purpose of the abstraction here is, as I said earlier, to create that semantic boundary [00:08:01] between the what and the how. There's now a boundary between the what and the how, and on line 14, all I need to focus on is the what. I don't care how you calculate the shipping rate, I just care that I have the shipping rate. And then I'm gonna go do something useful, business logic wise, with that computation. [00:08:22] On line 10, to be honest with you, I don't care what you're gonna do with the value. The only thing I'm focused on on line 10 is how to do it. Those are both equally important things, but I should be able to reason about them separately. Creating a function boundary here, an abstraction, inserts that semantic boundary between the two. [00:08:47] That is the purpose of abstraction. Now, we built that multAndSum function, that calculate international shipping rate function, if you will. And then somewhere else in the code we had to go build another one of those machines, and then another one, and then another one. So basically we're building all of these bespoke one-off machines. [00:09:09] Back to the candy factory, the engineer at the candy factory says, well, I know what I can do, I can just wrap up the big old box. And if you squint closely you'll see the machines are on the inside. They are all actually inside there. But on the outside we just have this nice little clean box, with a single on-off switch control panel, an opening in the top for us to pour in the melted chocolate, and an opening in the bottom where out spits the wrapped candy. [00:09:35] We hide all the wires and all that other unnecessary stuff. So you make one of those, and you put it on the factory floor, and all the factory workers are much happier now. Great, that's awesome, we've got a nice, easy interface to work with this box, this function. And then for the next candy, you make a different one of those boxes, and then a different one, and a different one. [00:10:00] And every time you have to do that, that work is very manual, so it takes you sitting down and wiring all of those things together. And at some point, you've made enough different ones of those that the boss of the candy factory is like, isn't there like some way to automate this? [00:10:17] Isn't there some way where we could just like put a bunch of machines in and out spits the machine? Like, this is a common pattern. We always wire machines together. Why do you have to do this work so manually? >> Kyle: And so it is with our code. We look at that pattern that we saw in our previous code, where we are taking one function's output and making it an input to another function. [00:10:43] We can codify that into a pattern, and that pattern happens to have a name in functional programming called pipe. This is called function composition.
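A plausible reconstruction of the pipe2 utility being described, in JavaScript (the parameter names are assumptions based on how it's described next):

```js
// a machine-making machine: takes two functions, returns a new
// function that feeds fn1's output (plus one more argument) into fn2
function pipe2(fn1, fn2) {
  return function piped(arg1, arg2, arg3) {
    return fn2(fn1(arg1, arg2), arg3);
  };
}

// reusing mult(..) and sum(..) from the earlier sketch
var multAndSum = pipe2(mult, sum);
multAndSum(3, 4, 5);  // 17
```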
When we take function one and we execute it, and then that output becomes the input to function two, that's exactly what we see here. [00:11:02] We see function one being called with arg1 and arg2; its output, plus arg3, is the input to function two on line 11. So it's the same thing that we were doing manually in the previous code snippet, but now we've got a repeatable utility. We have, in essence, a machine-making machine. [00:11:25] The input to pipe2 is not candy. The input to pipe2 is candy machines. The functions themselves are the things being operated on. There's a fancy term for this in functional programming; it's called a Higher Order Function. A Higher Order Function is a function that takes one or more functions as inputs and/or makes a function as an output. [00:11:58] If it does either or both of those things, it's a higher order function. It is, to carry our metaphor forward, a machine-making machine. So now the boss says, hey, can you just make me a machine where I just throw some machines in the top and it does the magic, it wires them all together, and out pops my little single-box machine, ready to do what I need it to do? [00:12:23] That way, any time we come up with some new combination of candy we wanna make, we just throw the machines in and out pops a machine. Can you build that for me? >> Kyle: Being the creative, enterprising engineer you are, you say, sure, anything can be accomplished. I can make a machine-making machine. [00:12:42] Are you following where we're at now? Because now the input to this big machine is other machines. And of course, we could keep going as high as we wanted to. We could say, take that machine and have it built by some other machine-making machine, and on and on and on. [00:13:01] Of course, that stretches the ridiculousness of the metaphor. But hopefully, you're starting to see that higher order functions are really at the heart of how we do anything in functional programming. We start to think about stuff not just as operating on a single piece of data, like a number, but as operations on functions producing other functions. [00:13:21] We've already seen several examples of higher order functions. The unary function that we looked at, it took a function in and gave us another function out that restricted its input to a single argument; that's a machine-making machine. Binary took a machine and made another machine; that's a machine-making machine. [00:13:40] As a matter of fact, virtually everything you do in functional programming is the usage of a utility where that utility is a machine-making machine. Higher order functions are at the heart of everything we do. And this particular arrangement of those higher order functions is what we call composition: [00:14:00] taking the output of one and making it the input of another. >> Kyle: So I made that pipe utility that took the first function, called it, and passed its output in as the input to the second one. >> Kyle: If we look at that code, it looked like this, it looked like line one. [00:14:23] Conceptually speaking, if we were to talk about the composition of baz, bar, and foo, it would be that we call baz first, passing in the input, and then its output becomes the input to bar, whose output becomes the input to foo. To express that with a typical functional programming utility like compose, we would list them as foo, bar, baz, in that order. [00:14:48] Now, I want you to look at line three and compare that to line one. You'll notice that foo, bar, baz are listed in the same order visually, left to right.
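(A sketch of the two lines being compared; foo, bar, and baz are placeholder unary functions, and this compose is one common way the general utility is defined, not necessarily the course's exact code.)

    function foo(v) { return v + ' -> foo'; }
    function bar(v) { return v + ' -> bar'; }
    function baz(v) { return v + ' -> baz'; }

    foo(bar(baz('start')));           // line 1: manual nesting, baz runs first

    function compose(...fns) {
      return function composed(v) {
        return fns.reduceRight(function(result, fn) {
          return fn(result);
        }, v);
      };
    }

    var f = compose(foo, bar, baz);   // line 3: listed in the same visual order
    f('start');                       // 'start -> baz -> bar -> foo'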
But what is the execution order of foo, bar, baz? Which one runs first? >> Speaker 2: Baz. >> Kyle: Baz. [00:15:08] So it actually executes right to left instead of left to right. >> Kyle: One of the reasons the compose utility takes its items, its machines, in right-to-left order is because it matches the order that you would write them in code, left to right. So if you're replacing line one with line three, you list them in the exact same order, and that is convenient. [00:15:36] But on the other hand, now you have to think to yourself, the execution is in the other order. And sometimes it's easier to think about not the order I would list things in, but the order that they're gonna execute in. So if you wanted to list them left to right in order of execution, that would be the pipe utility, where we list baz, bar, and foo. [00:15:57] Baz runs first, then bar runs, and then foo runs. Compose and pipe do exactly the same thing; they just operate on the list of machines in reverse order from each other. Does that make sense? Both of those utilities are standard utilities you'll find in any functional programming library. [00:16:19] And I have found in my own coding that sometimes compose makes more sense for me, and sometimes pipe makes more sense. So it's not that one is right and the other is wrong. It's that we want to use both of them. By the way, those utilities oftentimes will go by different names in other libraries. [00:16:37] So sometimes compose will be called flowRight and pipe will be called flow. I think that's what they're called in, for example, lodash/fp. So they may go by different names, but they still do exactly the same concept. Was there a question? >> Speaker 3: Yeah, there's a question online about composition using a map function, using mapping to read the functions. [00:17:01] >> Kyle: We have not talked about map yet, so if we're asking about map, why don't we defer that until we get to list operations later in the workshop? >> Kyle: Okay. >> Kyle: So I wanna make a simple utility that just does two functions. [00:17:27] And I'm gonna call that one composeRight. It takes fn2 and then fn1; you'll notice how I listed the parameter order. That utility will do composition of two specific functions. In just a moment, you're gonna work on an exercise where you get to make a general compose that can work on any number of functions. [00:17:45] But I just wanna show you, if I was hard-coding it, this is a very simple implementation: just compose two functions together. That comp function returned on line 2, that's our machine that our machine made. That constructed machine takes any number of inputs and passes them to the first function. [00:18:07] But the first function will always produce only one output. So the second and third functions and so forth from then on, they'll always only have one input. Do you remember the special term we used for functions that only take one input? >> Speaker 4: Unary. >> Kyle: Unary. Okay. When I said earlier that you're gonna prefer unary functions, this is why: because unary functions are a whole lot easier to compose together. [00:18:33] That pipe2 idea, which is a lot more complex cuz the functions each have multiple arguments, that was piping binary functions together, and that's a lot harder to do. It's much easier if we wanna pipe or compose unary functions, because functions have a single output. So if their shape of a single output matches the shape of a single input, they just fit together like really nice Legos.
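(A sketch of the hard-coded two-function composeRight being described; listing fn2 before fn1 matches the remark about parameter order, but the exact signature is an assumption.)

    function composeRight(fn2, fn1) {
      return function comp(...args) {  // roughly "line 2": the machine our machine made
        return fn2(fn1(...args));      // fn1 takes all the inputs; fn2 gets fn1's single output
      };
    }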
[00:18:56] Does it make sense? So, wherever possible, you would want to design the functions to be unary. And if your function isn't unary, you might wanna jump through some hoops to make it, or adapt it, to be unary, using some of the tricks we talked about, like, for example, that unary utility. [00:19:16] There will be other things that we look at, like currying, in a little bit. Those are all things that we can use to take a function that's not unary and make it into a unary one, so that it'll be easier to compose. >> Kyle: So here's an example of using composeRight. [00:19:39] I actually make two different functions; I compose these in different orders. f composes double first and then increment. p composes increment first and then double. We get a different end result. On line 12, when we called f, the 3 is gonna get doubled first, and then incremented. [00:19:58] That's how we go from 3 to 6, plus 1 equals 7. But p is gonna increment first before doubling, so we're gonna increment 3 to the value 4 and then double it, and that's why we end up with 8. Two different functions that do those operations in different orders. >> Kyle: I didn't put this in the slide, but if I were to make a function that did this doubling and incrementing thing, similar to what we did with the calculate-international-shipping-rate function, it would take two or three inputs and then produce a single output. [00:20:44] And that would look like a bunch of points, wouldn't it? But if I express that function as simply a composition of two or three functions, now I have point-free style again. So we are reinforcing this idea that using these utilities and these patterns that are well known and well established lets us write more declarative code. [00:21:05] That's why we were able to write that calculate-international-shipping-rate function as a compose.
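(For reference, the double/increment example described above, reconstructed as a sketch; the function bodies are assumptions consistent with the arithmetic in the narration, and composeRight is the two-function utility sketched earlier.)

    function double(x) { return x * 2; }
    function increment(x) { return x + 1; }

    var f = composeRight(increment, double);  // double runs first, then increment
    var p = composeRight(double, increment);  // increment runs first, then double

    f(3);  // 7: (3 * 2) + 1
    p(3);  // 8: (3 + 1) * 2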
- Ten Lessons from Chernobyl and Fukushima by David Krieger
- NATO: Increasing the Role of Nuclear Weapons by Susi Snyder
- Looking Back: The 1996 Advisory Opinion of the International Court of Justice by John Burroughs
- Nuclear Disarmament
 - Open Ended Working Group to Conclude in Geneva
- U.S. Nuclear Weapons Policy
 - Attempted Coup in Turkey Shines Light on U.S. Nuclear Weapons
 - Whistleblowers at Risk
 - U.S. Navy Returns to New Zealand After 30-Year Nuclear Weapons Disagreement
- Nuclear Proliferation
 - Russia Claims to Be Developing Outer Space Nuclear Bomber
- Missile Defense
 - Definition of Success Is Fluid
- Nuclear Insanity
 - British Prime Minister Writes "Letter of Last Resort"
 - South Korean Lawmaker Urges Nuclear Armament
 - Japan Opposes a U.S. "No First Use" Policy
- Nuclear Modernization
 - Senators Speak Out on Nuclear Modernization
 - UK Parliament Votes to Replace Trident Nuclear Weapons System
- August's Featured Blog
- This Month in Nuclear Threat History
- Book Review: Almighty
- Foundation Activities
 - Sadako Peace Day on August 9
 - Noam Chomsky to Receive NAPF Distinguished Peace Leadership Award
 - Peace Leadership in Minneapolis
 - Take Action

Ten Lessons from Chernobyl and Fukushima

George Santayana famously said, "Those who cannot remember the past are condemned to repeat it." The same may be said of those who fail to understand the past or to learn from it. If we failed to learn the lessons from the nuclear power plant accident at Chernobyl more than three decades ago, or to understand its meaning for our future, perhaps the more recent accident at Fukushima will serve to underline those lessons. The nuclear power plant accident at Chernobyl was repeated, albeit under a different set of circumstances, at Fukushima. Have our societies yet learned any lessons from Chernobyl and Fukushima that will prevent the people of the future from experiencing such devastation? As poet Maya Angelou points out, "History, despite its wrenching pain, cannot be unlived, but if faced with courage doesn't need to be lived again." We need the courage to phase out nuclear power globally and replace it with energy conservation and renewable energy sources. In doing so, we will not only be acting responsibly with regard to nuclear power, but will also reduce the risks of nuclear weapons proliferation and strengthen the global foundations for the abolition of these weapons. To read more, click here.

NATO: Increasing the Role of Nuclear Weapons

The Heads of State and Government that participated in the NATO summit in Warsaw, Poland on 8-9 July 2016 issued a series of documents and statements, including a Summit Communiqué and the Warsaw Declaration on Transatlantic Security. Whereas the majority of countries worldwide are ready to end the danger posed by nuclear weapons and to start negotiations for a treaty banning nuclear weapons, both NATO documents reaffirmed the NATO commitment to nuclear weapons, and the Communiqué included a return to Cold War-style language on nuclear sharing. The summit documents weaken previously agreed language on seeking a world without nuclear weapons by tacking on additional conditions.
Instead of simply saying that NATO is seeking to create the conditions for a world without nuclear weapons, NATO is now seeking to create the conditions "in full accordance with the NPT, including Article VI, in a step-by-step and verifiable way that promotes international stability, and is based on the principle of undiminished security for all." Not only that, but instead of creating conditions for further reductions, the alliance now only remains "committed to contribute to creating the conditions for further reductions in the future on the basis of reciprocity." To read more, click here.

Looking Back: The 1996 Advisory Opinion of the International Court of Justice

The 1996 advisory opinion of the International Court of Justice (ICJ) was the culmination of a decades-long debate on the legality of nuclear weapons. In recent years, it has shaped how international law is invoked by the initiative focused on the humanitarian impacts of nuclear weapons use, and it served as a foundation for the nuclear disarmament cases brought by the Marshall Islands in the court. To read more, click here.

Open Ended Working Group to Conclude in Geneva

The Open Ended Working Group taking forward multilateral nuclear disarmament negotiations, which met in February and May 2016, will conclude with four days of meetings in August. At the August session, delegates are expected to approve a report to the United Nations General Assembly that calls for the start of multilateral negotiations to prohibit and eliminate nuclear weapons. A draft report of the Open Ended Working Group is available on the UN website. The report details the substantive issues discussed and presents proposals for moving forward.

U.S. Nuclear Weapons Policy

Attempted Coup in Turkey Shines Light on U.S. Nuclear Weapons in Europe

The recent attempted military coup in Turkey has brought a pressing issue into the spotlight: the safety of U.S. nuclear stockpiles abroad. The question of nuclear security has been raised before, but is substantially more pressing now. As a NATO member, Turkey claims the "right" to nuclear sharing provided by the United States, whose nuclear umbrella spreads throughout Europe. Turkey houses an estimated 50 B-61 nuclear bombs at its Incirlik Air Base in Adana, the most of any NATO state. Other nations housing U.S. nuclear weapons are Belgium, Germany, Italy, and the Netherlands. The attempted coup also raises questions of whether Turkey can maintain its NATO status. The unprecedented coup presents NATO with many problems it may not have previously considered. As Aaron Stein of the Atlantic Council think tank stated, "It says a lot about the ability of Turkey to operate in coalition operations if its army can't be trusted." The lack of stability in the region has existed for quite some time, but the attempted coup introduces a wealth of new problems and doubts.

Julian Borger, "Turkey Coup Attempt Raises Fears Over Safety of U.S. Nuclear Stockpile," The Guardian, July 17, 2016.

Whistleblowers at Risk

On July 14, 2016, the Government Accountability Office (GAO) released a report charging the Department of Energy (DOE) with unlawful retaliation against nuclear whistleblowers. The report came shortly after the firing of Sandra Black, the head of Savannah River Site's employee complaints program. Colleagues of Black had come to her expressing grievances about unsafe, illegal, and wasteful practices at the nuclear site. After following through with her colleagues' complaints, Black was fired.
The GAO report was the product of an investigation into whistleblower retaliation complaints made two years earlier at Washington's Hanford nuclear facility. Though the investigation initially sought only to investigate Hanford, its scope eventually increased to include 87 complaints by workers at 10 major DOE nuclear facilities. While a pilot program was built for whistleblower protection at nuclear sites, the investigation reports that neither the Savannah River Site nor the Hanford administration had attempted to implement the program, leaving workers and whistleblowers unprotected. To date, over 186,000 nuclear workers have been exposed to recordable levels of radiation while on the job. But many remain silent, fearing that voicing concerns will cost them their livelihoods. "They will make an example of anyone who challenges them," said one nuclear worker. Senator Ron Wyden (D-OR), who helped initiate the GAO report, said, "It's clear that DOE contractors are going to amazing lengths to send the message to their employees that when you blow the whistle it's going to be the end of your career."

Lindsay Wise and Sammy Fretwell, "Report: Department of Energy Fails to Protect Nuclear Whistleblowers," McClatchy, July 14, 2016.

U.S. Navy Returns to New Zealand After 30-Year Nuclear Weapons Disagreement

The U.S. Navy plans to make a port call in New Zealand for the first time since 1985. Thirty years ago, the New Zealand government refused a port call request by the USS Buchanan because the U.S. would neither confirm nor deny the presence of nuclear weapons on board the ship. Explaining the decision to set aside 30 years of New Zealand's anti-nuclear laws, Prime Minister John Key said that it is not necessary for a nation to declare a ship nuclear-free if that can be ascertained from the ship's specifications.

Seth Robson, "U.S. Navy to Return to New Zealand After 30-Year Rift Over Nukes," Stars and Stripes, July 21, 2016.

Russia Claims to Be Developing Outer Space Nuclear Bomber

The Russian Strategic Missile Forces Academy is developing a nuclear bomber capable of striking from outer space, Lt. Col. Aleksei Solodovnikov reported in July. The weapon will be able to travel at hypersonic speed and is expected to be capable of reaching any point on Earth from outer space in less than two hours. "The idea is that the bomber will take off from a normal home airfield to patrol Russian airspace," Colonel General Sergei Karakayev stated this month. He continued, "Upon command it will ascend into outer space, strike a target with nuclear warheads and then return to its home base." Regardless of the veracity of this specific claim, it shows that Russia continues to rely heavily on nuclear weapons for its perceived security, and is invested in the new nuclear arms race.

"New Russian Bomber to Be Able to Launch Nuclear Attacks from Outer Space," Sputnik International, July 13, 2016.

Definition of Success Is Fluid

On January 28, the Missile Defense Agency conducted a flight test of a new and supposedly improved thruster, a key component of the interceptors that make up the U.S. missile defense system. Shortly after the test, the agency released a statement calling it a "successful flight test." However, the test was anything but a success. The closest the interceptor came to the target was a distance 20 times greater than what was expected.
In a letter to the editor published on July 9, NAPF President David Krieger wrote, "Perhaps raking in more than $40 billion from taxpayers since 2004 to produce a useless product is what the Missile Defense Agency and its contractors define as success."

David Willman, "A Test of America's Homeland Missile Defense System Found a Problem. Why Did the Pentagon Call It a Success?" Los Angeles Times, July 6, 2016.

British Prime Minister Writes "Letter of Last Resort"

One of the first acts of a new British Prime Minister is to write a "letter of last resort" that is kept locked in a safe in each of the UK's four nuclear-armed submarines. Only the Prime Minister, or another individual designated by the Prime Minister, may give an order to launch British nuclear weapons. The letter of last resort is to be used by submarine commanders if these people are no longer alive or are completely out of contact. Prior to writing the letter, the Prime Minister is briefed by the chief of the defense staff, who explains the damage that could be caused by a nuclear strike.

Adam Taylor, "Every New British Prime Minister Pens a Handwritten 'Letter of Last Resort' Outlining Nuclear Retaliation," Washington Post, July 13, 2016.

South Korean Lawmaker Urges Nuclear Armament

Rep. Won Yoo-chul of South Korea's ruling Saenuri Party plans to initiate a forum on nuclear armament in hopes of achieving lawmaker consensus. Won hopes this forum, set to begin on August 4, will generate a new sense of urgency in the wake of North Korean threats. The lawmaker promotes a strategy that would lead to automatic nuclear armament once North Korea conducts its next nuclear test. Won also explained that the "need" for South Korea to develop a nuclear arsenal can be credited to Donald Trump's claims that South Korea and Japan should increase their payments for deployed U.S. troops.

Jun Ji-hye, "Pro-Park Lawmaker Planning Forum for Nuclear Armament," Korea Times, July 25, 2016.

Japan Opposes a U.S. "No First Use" Policy

The Japanese government has expressed concern over reports that the Obama administration may be planning to implement a policy of "No First Use," meaning that the U.S. would pledge never to use nuclear weapons first in a conflict. A senior Japanese government official said, "From the [standpoint of] Japan's security, it is unacceptable." The Japanese government believes strongly in the idea of nuclear deterrence, relying on the U.S. nuclear umbrella for its national security.

"Japan Seeks Talks With U.S. Over 'No First Use' Nuclear Policy Change," Kyodo, July 15, 2016.

Senators Speak Out on Nuclear Modernization

Groups of U.S. Senators have sent letters in favor of and in opposition to the country's plans to spend $1 trillion to modernize its nuclear arsenal. On July 8, 14 senators, including Democratic Vice-Presidential nominee Tim Kaine, wrote to Defense Secretary Ash Carter seeking the Pentagon's continued outspoken support for the vast program of nuclear modernization. The Senators who signed the letter are Hoeven (R-ND), Daines (R-MT), Tester (D-MT), Hatch (R-UT), Donnelly (D-IN), Heitkamp (D-ND), Rubio (R-FL), Warner (D-VA), Vitter (R-LA), Heinrich (D-NM), Barrasso (R-WY), Fischer (R-NE), Reed (D-RI), and Kaine (D-VA). In a very different tone, 10 senators wrote to President Obama encouraging him to take numerous steps to reduce nuclear weapons spending and reduce the risk of nuclear war.
The Senators who signed this letter are Markey (D-MA), Warren (D-MA), Feinstein (D-CA), Boxer (D-CA), Franken (D-MN), Merkley (D-OR), Brown (D-OH), Leahy (D-VT), Wyden (D-OR), and Sanders (I-VT). To read the pro-nuclear weapons letter, click here. To read the letter from the 10 senators encouraging a less aggressive approach to nuclear policy, click here.

UK Parliament Votes to Replace Trident Nuclear Weapons System

On July 18, Prime Minister Theresa May and the Conservative Party won the vote to update current British nuclear capabilities. The vote, which Members of the House of Commons passed 472-117, clears the way for the UK to replace its four Trident nuclear-armed submarines with a new system at a cost of up to $250 billion. George Kerevan, a Member of Parliament from the Scottish National Party, asked Prime Minister May during the debate whether she is "personally prepared to authorize a nuclear strike that can kill 100,000 innocent men, women, and children." Ms. May responded, "Yes…the whole point of a deterrent is that our enemies need to know that we would be prepared to use it." The UK's Trident system is based in Scotland; 58 of the 59 Scottish Members of Parliament voted against replacing Trident.

Dan de Luce, "British Parliament Votes to Spend Big on Nukes," Foreign Policy, July 18, 2016.

August's Featured Blog

This month's featured blog is "All Things Nuclear," by the Union of Concerned Scientists. Recent titles include: "Japan Can Accept No First Use"; "U.S. Missile Defense: In Worse Shape than You Thought"; and "Nuclear Merger." To read the blog, click here.

This Month in Nuclear Threat History

History chronicles many instances when humans have been threatened by nuclear weapons. In this article, Jeffrey Mason outlines some of the most serious threats that have taken place in the month of August, including the August 29, 2007 incident in which six nuclear-armed cruise missiles were mistakenly loaded on a B-52 bomber and flown from North Dakota to Louisiana, where they sat unguarded on the tarmac for hours. To read Mason's full article, click here. For more information on the history of the Nuclear Age, visit NAPF's Nuclear Files website.

Book Review: Almighty

Almighty, by Dan Zak, is a compelling new book that exposes the intimate truths behind the 2012 Y-12 break-in through the lens of the peace-activist perpetrators. Fluidly weaving between the past and the present, this intriguing account resembles a thriller novel. As the unique backgrounds of the three activists, Sister Megan Rice, Michael Walli, and Greg Boertje-Obed, unfold, the egregious history of nuclear weapons elucidates the United States' futile attempt at non-proliferation. To read the full review by NAPF summer intern Madeline Atchison, click here.

Sadako Peace Day on August 9

The Nuclear Age Peace Foundation will host its 22nd Annual Sadako Peace Day commemoration on Tuesday, August 9, at 6:00 p.m. at La Casa de Maria in Montecito, California. The event, featuring music, poetry and reflection, remembers the victims of the atomic bombings of Hiroshima and Nagasaki, and all innocent victims of war. Sadako Sasaki was a two-year-old girl living in Hiroshima on August 6, 1945, the morning the atomic bomb was dropped. Ten years later, she was diagnosed with leukemia. Japanese legend holds that one's wish will be granted upon folding 1,000 paper (origami) cranes.
Sadako set out to fold those 1,000 cranes, writing, "I will write peace on your wings, and you will fly all over the world." Students in Japan were so moved by her story that they began folding cranes, too. Today the paper crane is a symbol of peace. A statue of Sadako now stands in Hiroshima Peace Memorial Park. And to this day, we honor Sadako's fervent wish for a peaceful world. For more information, click here.

Noam Chomsky to Receive NAPF Distinguished Peace Leadership Award

Noam Chomsky, one of the greatest minds of our time, will be honored with NAPF's Distinguished Peace Leadership Award at this year's Evening for Peace on Sunday, October 23, in Santa Barbara, California. We're calling the evening NOTHING BUT THE TRUTH because that's what Chomsky is about: truth. He believes humanity faces two major challenges: the continued threat of nuclear war and the crisis of ecological catastrophe. To hear him on these issues will be highly memorable. Importantly, he offers a way forward to a more hopeful and just world. We are pleased to honor him with our award. The annual Evening for Peace includes a festive reception, live entertainment, dinner and an award presentation. It is attended by many Santa Barbara leaders and includes a large contingent of sponsored students. For more information and tickets, click here.

Peace Leadership in Minneapolis

As a West Point graduate, Iraq war veteran, and former U.S. Army captain who has struggled through extreme childhood trauma, racism, and rage, NAPF Peace Leadership Director Paul K. Chappell will bring his hopeful message of equity in education, our shared humanity, and the skills of peace literacy to the Minneapolis area November 1-5, 2016. He will address the plenary session of the annual Missing Voices conference at St. Mary's University on November 3. The audience will include 350 educators, administrators, and students. To read more about this upcoming trip, click here. For a full list of Paul's upcoming lectures and workshops, click here.

Take Action

The Nuclear Age Peace Foundation's latest action alert encourages you to send a message to President Obama regarding the many things he could do during his last months in office to make a difference for nuclear disarmament. Proposed actions include declaring a No First Use policy, removing U.S. nuclear weapons from foreign soil, cutting funding for nuclear weapons "modernization," and commencing good-faith negotiations for the elimination of nuclear weapons worldwide. To read more and take action, click here.

"What the Hiroshima survivors are telling us is that no one else should ever go through the experience they suffered. An atomic bombing creates a living hell on Earth where the living envy the dead." — Tadatoshi Akiba, former Mayor of Hiroshima. This quote appears in the book Speaking of Peace: Quotations to Inspire Action, which is available for purchase in the NAPF Peace Store.

"If keeping and renewing our nuclear weapons is so vital to our national security and our safety, then does the Prime Minister accept the logic of that position is that every other country must seek to acquire nuclear weapons? And does she really think that the world would be a safer place if they did? Our nuclear weapons are driving proliferation, not the opposite." — Caroline Lucas MP, speaking during the UK parliamentary debate over whether to replace the Trident nuclear weapons system.
Bukittinggi is a city of 117,000 people (2014) in West Sumatra. It is popular with tourists because of its pleasant climate and central location. Bukittinggi is also a popular shopping destination for cheap textiles and fashion products, especially for Malaysians. Bukittinggi (Indonesian for "high hill") is one of the larger cities in West Sumatra. It is in the Minangkabau highlands, 90 km by road from the West Sumatran capital city of Padang. It is located near the volcanoes Mount Singgalang (inactive) and Mount Marapi (still active). At 930 m above sea level, the city has a cool climate, with temperatures between 16°C and 25°C.

There are some interesting legends surrounding the foundation and naming of "High Hill" Bukittinggi. The city has its origins in five villages which served as the basis for a marketplace. The city was known as Fort de Kock during colonial times, in reference to the Dutch outpost established here in 1825 during the Padri War. The fort was founded by Captain Bauer at the top of Jirek hill and later named after the then Lieutenant Governor-General of the Dutch East Indies, Hendrik Merkus de Kock. The first road connecting the region with the west coast was built between 1833 and 1841 via the Anai Gorge, easing troop movements, cutting the costs of transportation and providing an economic stimulus for the agricultural economy. In 1856 a teacher-training college (Kweekschool) was founded in the city, the first in Sumatra, as part of a policy to provide educational opportunities to the indigenous population. A rail line connecting the city with Payakumbuh and Padang was constructed between 1891 and 1894. During the Japanese occupation of Indonesia in World War II, the city was the headquarters for the Japanese 25th Army, the force which occupied Sumatra. The headquarters was moved to the city in April 1943 from Singapore, and remained until the Japanese surrender in August 1945. During the Indonesian National Revolution, the city was the headquarters for the Emergency Government of the Republic of Indonesia (PDRI) from December 19, 1948 to July 13, 1949. During the second 'Police Action', Dutch forces invaded and occupied the city on December 22, 1948, having earlier bombed it in preparation. The city was surrendered to Republican officials in December 1949 after the Dutch government recognized Indonesian sovereignty. The city was officially renamed Bukittinggi in 1949, replacing its colonial name. From 1950 until 1957, Bukittinggi was the capital city of a province called Central Sumatra, which encompassed West Sumatra, Riau and Jambi. In February 1958, during a revolt in Sumatra against the Indonesian government, rebels proclaimed the Revolutionary Government of the Republic of Indonesia (PRRI) in Bukittinggi. The Indonesian government had recaptured the town by May of the same year.

Bukittinggi is located about 2 hours north-east by road from the international airport; the only way to get there is by car, but the roads are good and smooth. As Bukittinggi is a tourist destination, try to avoid traveling on weekends, as the traffic can be quite bad, especially when climbing uphill.

By chartered minivan

Known by the locals as "Travel", this is the cheapest way to get there. The approximate price is about Rp30,000/person one way. The vehicle is a Honda Odyssey 2.4 minivan with a capacity of 7 seats. Keep in mind that the driver usually waits until the van is about 75% full.
Once the car is "full", the minivan will depart and take the passengers to their destination. For Kersik Tuo near Kerinci Park, minivans depart throughout the day, leaving the bus station and also picking up at hotels if booked by phone. Travel time is 8 hours, the cost is Rp130,000 (2019), and the trip is relatively comfortable, with stops at roadhouses.

The main bus station is at Aur Kuniang, 2 km southeast of the town center. Many services are minibuses and shared cars, which stop at the bus station but will also collect and drop passengers at their accommodation if booked by phone. Hotels in Bukittinggi can organize a door-to-door transfer from your hotel in Bukittinggi to your hotel in Padang. Departs hourly. Approximate price: Rp40,000 to 50,000 as of 2017. Hello Hostel might be among the cheapest, especially if you stay with them. Several bus companies (such as ALS) run buses from Parapat near Lake Toba. The trip is very winding and rough, and takes approximately 15 hours. Be prepared for bus sickness, and to pay around Rp250,000. As the trip is uphill from Lake Toba, the one-way fare is more expensive in this direction than coming from the south. Numerous buses go from Medan to Jakarta and stop on the way at Bukittinggi.

At PDG (Minangkabau International Airport) in Padang, there are several desks and individual operators where you can order taxis, but they all ask around Rp300,000, including Grab and Gojek. There is also a train right at the airport that charges Rp5,000 to go to Kayutanam, about half of the 70 km to Bukittinggi, from which you can hail a minibus the rest of the way. As of June 2022, there are three trains per day, leaving at 9:25, 14:00, and 19:10, and they take about an hour and 10 minutes. (Going back, trains leave Kayutanam Station at 6:50, 11:30, and 16:20.) An alternative is to take a motorcycle taxi for Rp20,000 about 4 km to the main road (the Duku turn-off) and hail a passing minibus to Bukittinggi for another Rp20,000. (Warning: they can be crowded.)

Besides city transportation (angkutan kota), bus charter and car rental are options for getting around the city. If you want to rent a car, it's best to do so at Minangkabau International Airport. Renting a motorcycle is also possible in Bukittinggi for Rp70,000 a day (more expensive than in Kuta), and you should book a hotel in Bukittinggi first. Also make certain with the motorcycle rental that a day means 24 hours and not 12 hours. Bukittinggi is a small town, so the following places are within walking distance of each other (a 15-30 minute walk).

- 1 Sianok Canyon (Ngarai Sianok) and the Japanese Caves (Lubang Jepang). A network of underground bunkers and tunnels built by the Japanese during World War II. There is a two-story observation tower that overlooks the Sianok Canyon. Ticket price: Rp20,000 (Aug 2017). At dusk you can observe megabats flying from the gorges to the forest to feed on the fruits of the trees. This is also a good place to get in contact with guides for tours such as to Lake Maninjau (see below) or jungle/hiking trips through the canyon.
- 2 Fort de Kock. A fort built by the Dutch (nothing is left; only a water reservoir is on top of the hill) and Bundo Kanduang Park. The park includes a replica Rumah Gadang (traditional house), used as a museum of Minangkabau culture (many curiosities, such as stuffed animals with two heads and six legs, model houses and traditional dresses, and foreign currencies;
entrance fee an extra Rp15,000 [Aug 2017]), and a zoo with a few very sad orangutans, a few dead animals still rotting in their cages, and two obese bears: not exactly an example of modern animal keeping. The Dutch hilltop outpost Fort de Kock is connected to the zoo by the Limpapeh pedestrian overpass. Ticket price: Rp20,000 (Aug 2017).
- 3 House of Bung Hatta. The house of Mohammad Hatta, the first Vice President of Indonesia.
- 4 Clock Tower (Jam Gadang = Great Clock). A clock tower and major landmark and tourist attraction in Bukittinggi. It is located in the centre of the city, near the main market, Pasar Ateh, and the palace of Mohammad Hatta. The structure was built in 1926, during the Dutch colonial era, as a gift from Queen Wilhelmina to the city's controleur. It was designed by architects Yazin and Sutan Gigi Ameh, reportedly at a cost of 3,000 guilders. Originally a rooster figure was placed on the apex, but it was changed into a Jinja-like ornament during the Japanese occupation (1942-1945). Following Indonesian independence, the tower's top was reshaped to its present form, which resembles traditional Minang roofs. Tourists visiting the tower were once allowed to climb to the top, but as of 2016 require written permission to do so. There are horse carriages waiting around the Jam Gadang area. Be cautioned that the rides are very costly, so ask for the rates first.

There are two tours that hotels and tour agencies try to push: a tour to Minangkabau and another tour to Maninjau. The Minangkabau tour will visit these places in the area east of Bukittinggi:
- The King's palace in Pagaruyung
- Balimbing village, with an old traditional house that is more than 350 years old
- Handicrafts in Pandai Sikek, such as kain songket (traditional woven cloth) and ukiran kayu or bamboo (handmade wood or bamboo carving)
- Traditional coffee roasters
- Bika, a traditional sweet made from coconut, rice flour and palm sugar, in Koto Baru, between Padang Panjang and Bukittinggi.

The Maninjau tour will visit places in the area west of Bukittinggi:
- Lake Maninjau
- Puncak Lawang, a place where you can see a panorama of Lake Maninjau
- The "44 turns", forty-four numbered(!) hairpin bends up the mountain, from where you can see a panorama of Lake Maninjau.

Each tour requires at least 8 hours and is usually held from 09:00-17:00 (including a stop at some restaurants). The price ranges from Rp250,000/pp to Rp450,000/pp (2017 price). Hiring a car is highly recommended if you're in a group of more than 4 people. The car's price includes driver, fuel, entry tickets, and parking fees. Tips aren't compulsory; a lunch invitation is more than enough. Most of the places require a ticket and will charge a parking fee. One tour will require about Rp40,000 just for parking and entry tickets. Another option is hiring a car and arranging with the driver to visit the places in Minangkabau and Lake Maninjau. If you are alone, it is also possible to find guides in the Sianok Canyon park who will take you to Lake Maninjau by motorbike (ask for Parta, e.g.; there is no fixed price, he will take what you give him). In any case, depart early, as the tour will take all day. Famous agencies selling these tours are Lite'n'easy and Roni's tour and travel (in hotel Orchid). However, since they are both recommended by a famous American guidebook, they tend to quote inflated prices. Hello Hostel (very near both other agencies) seems cheaper.
The budget option to Maninjau is to take the bus (or minibus) from the bus station (get there from the bemo station near the market) to Maninjau (35 km, 2 hr by bus, 1 hr by minibus). Unfortunately, a tourist racket has been set up, so you won't get the ticket for the Rp6,000 (Oct 2007) the locals pay. Expect to pay at least Rp10,000 (Oct 2007). Have the right change ready; don't expect to get any from the conductor. To get back, either try to catch a minibus (Rp10,000) or a big bus (Rp15,000-20,000). The big buses you have to catch in the same direction you came, since the narrow road is one-way for lorries. The budget option to Minangkabau is to take the Batu Sangkar public bus for Rp7,000 (Oct 2007) and hire a motorbike (Rp15,000 return) from there (or walk the remaining 5 km) to Pagaruyung. Minibuses and buses back to Bukittinggi leave from the bus terminal or may be flagged down anywhere.

The Harau Valley is a pretty gorge about an hour east of Bukittinggi, comprising a valley floor of rice paddy hemmed in by sheer sandstone cliffs. There are several waterfalls with pools (both natural and constructed) for bathing, and you can go rock climbing on the cliffs. Harau is reached via Payakumbuh.

If you are interested in visiting the equator, you can take a bus to Bonjol, where there is a monument marking the equator built over the main road, good for photo opportunities if you are keen to stand in both hemispheres simultaneously. There's also a museum on the site which houses a few artefacts of little interest, mainly coins and banknotes. Catch the bus from the Aur Kuning bus station; minibuses depart fairly frequently. Expect to pay Rp10,000 as a tourist. To get back, there is a bus which comes from the opposite direction (or northern hemisphere) at 17:00. Alternatively, you can wait at the small roadside cafe right next to the monument, where locals will help you flag down a bemo destined for Bukittinggi (it's quite difficult for non-locals to distinguish between a service bemo and someone's private car, but the locals seem to know what is what).

- Traditional dances are performed for 90 minutes every day from 21:00-22:30. Each group has its own schedule. If you want to buy souvenirs or CDs of the performance, wait until the show finishes, because every dancer will then offer souvenirs. A CD is about Rp100,000 and the traditional flute about Rp50,000; that is expensive, as you can get the same for half the price in town.
- Bukittinggi and West Sumatra in general are also great places for adventure: rafting, kayaking, surfing, rock climbing, mountaineering and paragliding. See https://www.facebook.com/sumatraadventure for more information on adventure activities and other tours.
- There are several rivers for rafting and kayaking, such as the Kuantan River, Anai River, Sinama River, Ombilin River and many others. The rivers vary from grade 2 to grade 5.
- For rock climbing, there are cliffs in Baso, the Harau Valley and Sijunjuang. Routes vary in grade from 5.8 to 5.14 and in height from 20 m to 150 m.
- There are several places for paragliding, such as Puncak Lawang near Lake Maninjau, Pintu Angin Hill near Lake Singkarak and Aia Manih Beach near Padang.
- For mountaineering and trekking there are several volcanoes higher than 2,500 m above sea level, such as Merapi, Singgalang, Tandikek, Sago and Talang. Merapi is an active volcano.
- The Mentawai islands are among the best places in the world for surfing.
- For local assistance to arrange transportation, get tourist information or arrange guides, the guys of Lite'n'easy are an excellent choice. They are friendly, knowledgeable, speak English and are conveniently located at Bedudal Cafe (see the Eat section below), just ahead of the pedestrian bridge over Jl A. Yani. Ask for Fikar.

Ramayana Shopping Mall accepts credit cards. There are also two markets known as Pasar Atas (Upper Market) and Pasar Bawah (Lower Market) near Jam Gadang. Pasar Atas is the largest market in Bukittinggi. On Saturdays, Sundays, and Wednesdays vendors sell their goods beside the road. Pasar Bawah is for fruits and vegetables, whereas Pasar Atas is for souvenirs and clothes. Prices in the various kiosks are similar, and you should bargain. One wholesale shop located in the middle of Pasar Atas sells souvenirs at the lowest prices. A pair of women's slippers is about Rp7,000 and a key holder is about Rp2,000-5,000. Most of the souvenirs sold here are of low quality. Souvenirs of better quality can be found in Pandai Sikek. A men's shirt is about Rp35,000 and a pair of leather women's slippers is about Rp35,000.

The Pasar Aur Kuning area is a large group of wholesale ("grosir") markets and shops. Pasar Aur Kuning deserves a special mention here: it is famous among local people and seasoned travelers. If you are buying items in bulk, this is the place to visit. Some of the shops will allow you to buy in small quantities ("eceran"); ask the trader whether they allow "eceran". For price comparison, if a trader in Pasar Atas (Clock Tower area) sells a cowboy hat for Rp100,000 as the opening price, you can expect to buy exactly the same item at Pasar Aur Kuning for Rp30,000. The price per item may go down further after negotiation. To get to Pasar Aur Kuning, take the red angkot (minibus) from Pasar Bawah (in front of Pasar Banto). Angkot no. 19 or 13 (Tigo Baleh) charges Rp2,000 one way. Pasar Aur Kuning also houses a bus terminal serving various parts of Indonesia. Travel to Padang by van departs from a spot adjacent to the Simpang Rayo restaurant.

- Aishah Chalik Art Shop, Jl. Cinduamato 90. Various souvenirs. There is good quality traditional cloth called kain songket (colorful cloth with golden thread embroidery), as well as shoes, T-shirts, sarongs, prayer rugs, female prayer clothing, etc.
- Toko Tiga Saudara, Pasar Wisata Bukittinggi. This is a one-stop centre for souvenirs, i.e. woven handbags, keychains, replicas of "Rumah Gadang", and miniature bicycles. You can get better prices if you buy in bulk. Look for a guy by the name of Anton, and do ask for a discount. You will notice that the items displayed are of slightly better quality than those offered in smaller shops. Price comparison is essential to enjoy better bargains.

People in Bukittinggi like dry, spicy, and sweet snack foods. They make snacks with different tastes and shapes from ingredients that make the food here special. For example, from cassava they can make spicy long cassava chips, tasty cubed cassava chips, and sweet round cassava chips. The many others include shredded dry eel, spicy potato chips, sweet potato chips, etc. They can be found in Pasar Atas at low prices, but they are not fresh. On the way back to Padang there are many food shops that sell these snacks in better quality. There's a small fish named ikan bilih (bilis) or "ikan danau" in Lake Singkarak that is not found elsewhere. Locals deep-fry it or cook it in a sour soup with vegetables.
One portion of fried bilih is about Rp5,000, and you eat the whole fish, head, bones and all. Most of the restaurants in Bukittinggi serve Padang cuisine, which is creamy, spicy, and hot. An average price is about Rp15,000 per person for one meal. Unsold food is kept overnight and reheated the next day, so it is not recommended for those who like fresh food. After dark, there are many hawkers near Jam Gadang selling freshly made foods such as nasi goreng (fried rice), mie rebus (boiled noodles), roti bakar (bun with scrambled eggs), and martabak mesir (beef pancake). One portion is about Rp7,500-10,000 per person. Do try the local dessert delicacy known as "Martabak Bandung". The same dish is widely known in Malaysia as "Apam Balik", but the Malaysian version is limited to only one flavour, i.e. nuts with a mixture of corn. Here in Bukittinggi, and in other parts of Indonesia, there are no fewer than 50 flavours of Martabak Bandung to choose from, such as chocolate, cheese, strawberry, jackfruit, honey, banana, durian, etc. It should not be confused with "Martabak Mesir", which is a delicacy from the Middle East. Anti-diarrhea medicine is highly recommended in case you get diarrhea during your food adventure.

- B and J's (formerly Apache Cafe), near Fort de Kock. Great, reasonably priced food and friendly staff who speak very good English. They can also arrange tours and give information on transport.
- 1 Simpang Raya, Sudirman St No 8, ☏ . 05:30-00:00. Traditional Minangkabau food.
- Turret Cafe and Restaurant, Ayani St. 140, ☏ , email@example.com. 07:00-23:00. Traditional Minangkabau and European food. Main dishes: Rp10,000-60,000.
- Bedudal Cafe, Jln. A. Yani No 95/105 (just before the pedestrian bridge). Fruit juices, beer, soft drinks, Indonesian and European dishes; reasonably priced and excellent quality. English spoken.
- 2 TARUKO caferesto (villa), Jalan Taruko, Jorong Lambah, Nagari Sianok Anam Suku, Kecamatan Ampek Koto (a 15-minute walk west from the city center, at the bottom of the green Sianok Canyon), ☏ , firstname.lastname@example.org. 08:00-19:00. A unique riverside restaurant at the bottom of the green Sianok Canyon, with beautiful scenery and delicious Indonesian, Thai, Chinese, Indian and Western food and drinks. The restaurant features traditional architecture and is surrounded by terraced rice fields, with a crystal-clear river crossing beside a beautiful garden. €1.

Sikotang or Sarobat

Sikotang or sarobat is one of the most famous drinks in Minangkabau. The beverage is made from red ginger (Zingiber sp.) and spices such as cinnamon bark (Cinnamomum sp.), nutmeg/"pala" (Myristica fragrans), etc. Sikotang is usually mixed with egg, bread, green beans (kacang padi/kacang ijo), and cane or palm sugar. Such a hot drink is useful for keeping your body warm during a cold highland night, as in Padang Panjang, Batusangkar and Bukittinggi. Price Rp5,000-15,000.

Daun Kawa (coffee leaves)

Daun kawa is made from roasted dry leaves of the coffee tree. The dried leaves are boiled in hot water, put into sections of bamboo, and drunk from a "cawan tampuruang" (coconut shell). It can be found in Bukittinggi, Payakumbuh and Batusangkar. Ask anyone, especially people over age 40; they will show you a good place to taste daun kawa! Price Rp5,000-15,000 in 2009.

Jus Pinang (pinang juice)

Juice of pinang (betel nut, the Areca catechu seed) is a bitter-tasting drink available in Padang, Bukittinggi, and other areas that is believed to have an effect on sexual stamina.
Pinang has a biological effect as a stimulant, like tobacco, coffee, and tea. Its chemical contents include arecoline, arecaidine, tannins, and flavonoids. Just try it and feel the difference! For beginners, don't drink more than one pinang seed. Price Rp5,000-15,000 in 2009.

Teh Talua (egg tea)

This is a special Minangkabau drink made from egg mixed with hot tea and lemon. Please taste it; you will never forget the experience! Price Rp5,000-15,000 in 2009.

There is a variety of fruit juices, ranging from alpokat (avocado) and sirsak (soursop) to jeruk (orange), wortel (carrot), etc. The list is endless. Prices range from Rp4,000-10,000.

Kopi Luwak ("civet cat" coffee)

Enjoy one of the world's most prestigious coffees, kopi luwak, in Batang Palupuah Kampong, Bukittinggi. The coffee is made from coffee beans that have passed through a civet cat before roasting.

The Minangkabau also have traditional alcoholic beverages such as tuak. Tuak is made from fermented nira, a liquid collected by cutting the fruit branch of the aren or enau tree (Arenga pinnata). However, it is quite difficult to find in Bukittinggi now, because alcoholic beverages are haram (forbidden) for Muslims.

Small budget hotels are easy to find. Many locals offer accommodation in family-owned hotels that provide a feel-at-home atmosphere. Prices span from Rp80,000 to Rp200,000 per room without air-conditioning. Breakfast is included. There are no lifts in these small hotels, so be prepared if your room is on an upper floor. Room cleaning is not provided every day in some cases, so don't hesitate to ask the hotel manager if you want a daily cleaning service.

- Hotel Cindua Mato, Jl. Cindur Mato 96 (across the street from the zoo), ☏ , fax: . Around 10 rooms. No hot water. Rp85,000.
- Orchid Hotel, Jl Teuku Umar, doubles from Rp170,000 including a very basic breakfast (coffee and bread). Located near the mosque; the call to prayer can be extremely loud, especially from Friday to Sunday and during Ramadan. Staff are nice but constantly trying to sell you tours, transportation, etc. No wifi.
- Hotel Asean, Jl Teuku Umar, singles from Rp80,000.
- d'enam Hotel, Jl Yos Sudarso No. 4, double with toilet Rp100,000, toilet outside Rp90,000 (2017 prices); located on top of a hill close to the mosque and the clock tower, friendly and helpful staff, ☏ .
- Hotel Murni, Jl A. Yani (north end). Old building. Rooms with two beds: Rp80,000.
- Hotel Tigo Balai, Jl A. Yamin. Smallish rooms, but reasonably clean and nice staff. Rp80,000.
- Hello Guesthouse, Jl Teuku Umar, email@example.com. Nice staff and a clean hostel with a small breakfast included. Dorm Rp75,000, double room Rp150,000 (2017).

The top hotels in the city are The Hills Bukittinggi (formerly the Novotel Coralia) and the Pusako Hotel. Although The Hills Bukittinggi is a comfortable place to stay, especially for tourists from the West, the cost is at least Rp800,000 a night; in comparison, there are many small hotels around Fort de Kock at around Rp120,000 that are quite nice.

There is a row of internet cafes along Jl. Ahmad Yani and Jl Pemuda. Four internet cafes are in the vicinity of (underneath) the pedestrian bridge which links Fort de Kock and the zoo. Check the prices, as the internet cafes on the main street are much more expensive than the ones around the corner; ask a local for the cheapest one. The local price is Rp4,000 per hour. Most hotels and many restaurants offer complimentary Wi-Fi, although the speed may not always be as high as you expect.
The symbol $, usually written before the numerical amount, is used for the U.S. dollar (as well as for many other currencies). The sign's ultimate origins are not certain, though it is possible that it comes from the Pillars of Hercules which flank the Spanish coat of arms on the Spanish dollars that were minted in the New World mints in Mexico City; Potosí, Bolivia; and Lima, Peru. These Pillars of Hercules on the silver Spanish dollar coins take the form of two vertical bars and a swinging cloth band in the shape of an "S". An equally accepted, and better documented, explanation is that this symbol for peso was the result of a late eighteenth-century evolution of the scribal abbreviation "ps." The p and the s eventually came to be written over each other, giving rise to $. A fictional possibility suggested is that the dollar sign is the capital letters U and S typed one on top of the other. This theory, popularized by novelist Ayn Rand in Atlas Shrugged, does not consider the fact that the symbol was already in use before the formation of the United States.

United States one-dollar bill ($1)

[Diagram shows the obverse of the $1 bill.]

The United States one-dollar bill ($1) is the most common denomination of US currency. The first president, George Washington, painted by Gilbert Stuart, is currently featured on the obverse, while the Great Seal of the United States is featured on the reverse. The one-dollar bill has the second oldest design of all U.S. currency currently being produced, after the two-dollar bill. The obverse seen today debuted in 1963, when the $1 bill first became a Federal Reserve Note. The inclusion of "In God We Trust" on all currency was required by law in 1955. The national motto first appeared on paper money in 1957. An individual dollar bill is also less formally known as a one, a single or a bone. The Bureau of Engraving and Printing says the average life of a $1 bill in circulation is 21 months before it is replaced due to wear. Approximately 45% of all U.S. currency produced today is one-dollar bills. All $1 bills produced today are Federal Reserve Notes. One-dollar bills are delivered by Federal Reserve Banks in blue straps.

[Diagram shows the reverse of the $1 bill.]

Obverse of current $1 bill

[Detail of the Treasury Seal as it appears on a $1 bill.]

The portrait of George Washington is displayed in the center of the obverse of the one-dollar bill, as it has been since the 1869 design. The oval containing George Washington is propped up by bunches of bay laurel leaves. To the left of George Washington is the Federal Reserve District seal. The name of the Federal Reserve Bank that issued the note encircles a capital letter (A-L), identifying it among the twelve Federal Reserve Banks. The sequential number of the bank (1: A, 2: B, etc.) is also displayed in the four corners of the open space on the bill. Until the redesign of the higher denominations of currency beginning in 1996, this seal was found on all denominations of Federal Reserve Notes. Since then it is only present on the $1 and $2 notes, with the higher denominations only displaying a universal Federal Reserve System seal, and the bank letter and number beneath the serial number. To the right of George Washington is the Treasury Department seal. The balancing scales represent justice. The chevron with thirteen stars represents the original thirteen colonies. The key below the chevron represents authority and trust; 1789 is the year that the Department of the Treasury was established.
Below the Federal Reserve District Seal, to the left of George Washington, is the signature of the Treasurer of the U.S., which occasionally varies, and below the Treasury seal on the right side is the Secretary of the Treasury's signature. To the left of the Secretary's signature is the series date. A new series date results from a change in the Secretary of the Treasury, the Treasurer of the United States, and/or a change to the note's appearance, such as a new currency design. On the edges are olive branches entwined around the 1's.

The reverse of the one-dollar bill has an ornate design which incorporates both sides of the Great Seal of the United States to the left and right of the word "ONE". President Franklin Roosevelt approved the one-dollar bill's design in 1935 on the condition that the two sides of the Great Seal exchange places and that each be captioned. The word "ONE" appears prominently in the white space at the center of the bill in a capitalized, shadowed, and seriffed typeface, and a smaller image of the word is superimposed over the numeral "1" in each of the four corners. "THE UNITED STATES OF AMERICA" spans the top of the bill, "ONE DOLLAR" is emblazoned along the bottom, and above the central "ONE" are the words "IN GOD WE TRUST," which became the official motto of the United States in 1956. Below the reverse of the Great Seal on the left side of the bill are the words "THE GREAT SEAL," and below the obverse on the right side are the words "OF THE UNITED STATES." Both the reverse and the obverse of the Great Seal contain symbols of historical, political, religious, and numerological significance. The Great Seal, originally designed in 1782 and added to the dollar bill's design in 1935, is surrounded by an elaborate floral design. The renderings used are the typical official government versions used since the 1880s.

The reverse of the seal, on the left, features a barren landscape dominated by an unfinished pyramid of 13 steps, topped by the Eye of Providence within a triangle. At the base of the pyramid are engraved the Roman numerals MDCCLXXVI (1776), the date of American independence from Britain. At the top of the seal stands the Latin phrase "ANNUIT COEPTIS," meaning "He (God) favors our undertaking." At the bottom of the seal is a semicircular banner proclaiming "NOVUS ORDO SECLORUM," meaning "New Order of the Ages," a reference to the new American era. To the left of this seal, a string of 13 pearls extends toward the edge of the bill.

The obverse of the seal, on the right, features a bald eagle, the national bird and symbol of the United States. Above the eagle is a radiant cluster of 13 stars arranged in a six-pointed star. The eagle's breast is covered by a heraldic shield with 13 stripes that resemble those on the American flag. As on the first US flag, the stars and stripes stand for the 13 original states of the union. The eagle holds a ribbon in its beak reading "E PLURIBUS UNUM," a Latin phrase meaning "Out of many [states], one [nation]," a de facto motto of the United States (and the only one until 1956). In its left talons the eagle holds 13 arrows, and in its right talons it holds an olive branch with 13 leaves and 13 olives, representing, respectively, the powers of war and peace. To the right of this seal, a string of 13 pearls extends toward the edge of the bill.
The symbology of the Great Seal of the United States, and its subsequent use on the dollar bill (especially the pyramid and the Eye of Providence above it), are popular topics among conspiracy theorists, who hold that much of the symbolism involves occultism. For example, because the Eye of Providence above the unfinished pyramid is similar to the ancient Egyptian Eye of Horus, a charm of the pagan Egyptian sky-god Horus, some claim it symbolizes that worshipers will be protected and given royal powers by pagan deities. In fact, the Eye of Providence was a common Christian emblem symbolizing the Trinity throughout the Middle Ages and Renaissance. Conspiracy theorists also note that the unfinished pyramid has thirteen steps (or that some other element of the Seal numbers thirteen) and hold that the number 13 has conspiratorial significance. The actual explanation for the repetition of the number thirteen is that it represents the original thirteen colonies, which became the first thirteen states.

United States two-dollar bill ($2)

The United States two-dollar bill ($2) is a current denomination of U.S. currency. Former U.S. President Thomas Jefferson is featured on the obverse of the note. The reverse features an engraved, modified reproduction of the painting The Declaration of Independence by John Trumbull. The bill was discontinued in 1966 but was reintroduced 10 years later as part of the United States Bicentennial celebrations. Today, however, it is rarely seen in circulation and actual use. Production of the note is the lowest of U.S. paper money: less than 1% of all notes currently produced are $2 bills. This comparative scarcity in circulation, coupled with a lack of public awareness that the bill is still being issued, has inspired urban legends and, on a few occasions, created problems for people trying to use the bill to make purchases. Throughout the $2 bill's pre-1928 life as a large-sized note, it was issued as a United States Note, National Bank Note, Silver Certificate, and Treasury or Coin Note. When U.S. currency was changed to its current size, the $2 bill was issued only as a United States Note. After United States Notes were discontinued, the $2 bill began to be issued as a Federal Reserve Note.

United States five-dollar bill ($5)

The United States five-dollar bill ($5) is a denomination of United States currency. The $5 bill currently features U.S. President Abraham Lincoln's portrait on the front and the Lincoln Memorial on the back. All $5 bills issued today are Federal Reserve Notes, delivered by Federal Reserve Banks in red straps. The $5 bill is sometimes nicknamed a "fin". The term has German/Yiddish roots and is remotely related to the English "five", but it is far less common today than it was in the late 19th and early 20th centuries. The Bureau of Engraving and Printing says the average life of a $5 bill in circulation is 16 months before it is replaced due to wear. Approximately 9 percent of all paper currency produced by the U.S. Treasury's Bureau of Engraving and Printing today is $5 bills.
United States ten-dollar bill ($10)

The United States ten-dollar bill ($10) is a denomination of United States currency. The first U.S. Secretary of the Treasury, Alexander Hamilton, is currently featured on the obverse of the bill, while the U.S. Treasury building is featured on the reverse. (Hamilton is one of two non-presidents featured on currently issued U.S. bills; the other is Benjamin Franklin, on the $100 bill. In addition, Hamilton is the only person featured on U.S. currency who was not born in the continental United States, as he was from the West Indies.) All $10 bills issued today are Federal Reserve Notes. The Bureau of Engraving and Printing says the average life of a $10 bill in circulation is 18 months before it is replaced due to wear. Approximately 11% of all newly printed US banknotes are $10 bills. Ten-dollar bills are delivered by Federal Reserve Banks in yellow straps. The source of the face on the $10 bill is John Trumbull's 1805 portrait of Hamilton in the portrait collection of New York City Hall. The $10 bill is the only U.S. paper currency in circulation on which the portrait faces to the left (the $100,000 bill features a portrait of Woodrow Wilson facing left, but was used only for intra-government transactions).

United States twenty-dollar bill ($20)

The United States twenty-dollar bill ($20) is a denomination of United States currency. U.S. President Andrew Jackson is currently featured on the front side of the bill, which is why the twenty-dollar bill is often called a "Jackson," while the White House is featured on the reverse side. The twenty-dollar bill was once referred to as a "double-sawbuck" because it is twice the value of a ten-dollar bill, which was nicknamed a "sawbuck" due to the resemblance the Roman numeral for ten (X) bears to the legs of a sawbuck, although this usage had largely fallen out of favor by the 1980s. The twenty-dollar gold coin was known as a "double eagle"; rather than a nickname, this nomenclature was specified by an act of Congress. The Bureau of Engraving and Printing says the average circulation life of a $20 bill is 25 months (about two years) before it is replaced due to wear. Approximately 22% of all notes printed today are $20 bills. Twenty-dollar bills are delivered by Federal Reserve Banks in violet straps.

United States fifty-dollar bill ($50)

The United States fifty-dollar bill ($50) is a denomination of United States currency. Ulysses S. Grant is currently featured on the obverse, while the U.S. Capitol is featured on the reverse. All $50 bills issued today are Federal Reserve Notes. The Bureau of Engraving and Printing says the "average life" of a $50 bill in circulation is 55 months before it is replaced due to wear. Approximately 5% of all notes printed today are $50 bills. They are delivered by Federal Reserve Banks in brown straps. A fifty-dollar bill is sometimes called a Grant, based on the use of Ulysses S. Grant's portrait on the bill.

Andrew Jackson's actions toward the Native Americans, as a general as well as during his presidency, have led some historians to question the suitability of Jackson's depiction on the twenty-dollar bill. Howard Zinn, for instance, identifies Jackson as a leading "exterminator of Indians" and notes how the public commemoration of Jackson obscures this part of American history.
Those opposed to central banking point out the irony of Andrew Jackson appearing on a Federal Reserve Note: Jackson spent much of his presidency fighting against the Bank of the United States, which was at that time the government-sanctioned federal bank. An email that circulated after the events of 9/11 alleged that folding the twenty-dollar bill a certain way produced images appearing to be 9/11 related (specifically the World Trade Center and the Pentagon burning).

United States one hundred-dollar bill ($100)

The United States one hundred-dollar bill ($100) is a denomination of United States currency. The redesigned (Series 2009) $100 bill was unveiled on April 21, 2010, and the Federal Reserve Board will begin issuing the new bill on February 10, 2011. U.S. statesman, inventor, and diplomat Benjamin Franklin is currently featured on the obverse of the bill. On the reverse of the banknote is an image of Independence Hall. According to the U.S. Bureau of Engraving and Printing, the clock on the reverse shows approximately 4:10. The numeral four on the clock face is incorrectly written as "IV," whereas the real Independence Hall clock face has "IIII" (see Roman numerals in clocks). The bill is one of two current notes that do not feature a President of the United States; the other is the ten-dollar bill, featuring Alexander Hamilton. It is the largest denomination that has been in circulation since July 14, 1969, when the higher denominations of $500, $1,000, $5,000, $10,000 and $100,000 were retired. The Bureau of Engraving and Printing says the average life of a $100 bill in circulation is 60 months (5 years) before it is replaced due to wear. Approximately 7% of all notes produced today are $100 bills. The bills are commonly referred to as "Benjamins," in reference to Benjamin Franklin's portrait on the denomination, and as "C-Notes," based on the Roman numeral C, which means 100. One hundred-dollar bills are delivered by Federal Reserve Banks in mustard-colored straps (each worth $10,000).

Federal Reserve Note

A Federal Reserve Note is a type of banknote. Federal Reserve Notes are printed by the United States Bureau of Engraving and Printing on paper made by Crane & Co. of Dalton, Massachusetts. They are the only type of U.S. banknote still produced today, and they should not be confused with Federal Reserve Bank Notes. Federal Reserve Notes "are authorized" by Section 411 of Title 12 of the United States Code. They are issued to the Federal Reserve Banks "at the discretion of the Board of Governors of the Federal Reserve System" and are then issued into circulation by the Federal Reserve Banks. When the notes are issued into circulation they become liabilities of the Federal Reserve Banks and "obligations of the United States". Federal Reserve Notes are fiat currency, with the words "this note is legal tender for all debts, public and private" printed on each note (see generally 31 U.S.C. 5103). They have replaced United States Notes, which were once issued by the Treasury Department. Of the designs in use in the mid-1990s, only those of the $1 and $2 notes are still in print.
The New $100 Bill

The redesigned $100 bill was unveiled on April 21, 2010, and the Federal Reserve Board will begin issuing the new note on February 10, 2011. The redesigned $100 note incorporates a number of security features, including two new advanced features, the 3-D Security Ribbon and the Bell in the Inkwell. Together they offer a simple and subtle way to verify that a new $100 note is real. These security features were developed to make it easier to authenticate the note and more difficult for counterfeiters to replicate. The new security features are as follows:
- 3-D Security Ribbon: Look for a blue ribbon on the front of the note. Tilt the note back and forth while focusing on the blue ribbon; you will see the bells change to 100s as they move. When you tilt the note back and forth, the bells and 100s move side to side; if you tilt it side to side, they move up and down. The ribbon is woven into the paper, not printed on it.
- Bell in the Inkwell: Look for an image of a color-shifting bell inside a copper-colored inkwell on the front of the new $100 note. Tilt it to see the bell change from copper to green, an effect which makes the bell seem to appear and disappear within the inkwell.

In addition, three highly effective security features from the older design have been retained and updated in the new $100 note, and several further features have been added to protect its integrity:
- Portrait Watermark: Hold the note to light and look for a faint image of Benjamin Franklin in the blank space to the right of the portrait.
- Security Thread: Hold the note to light to see an embedded thread running vertically to the left of the portrait. The thread is imprinted with the letters USA and the numeral 100 in an alternating pattern and is visible from both sides of the note. The thread glows pink when illuminated by ultraviolet light.
- Color-Shifting 100: Tilt the note to see the numeral 100 in the lower right corner of the front of the note shift from copper to green.
- Raised Printing: Move your finger up and down Benjamin Franklin's shoulder on the left side of the note. It should feel rough to the touch, a result of the enhanced intaglio printing process used to create the image. Traditional raised printing can be felt throughout the $100 note and gives genuine U.S. currency its distinctive texture.
- Gold 100: Look for a large gold numeral 100 on the back of the note. It helps those with visual impairments distinguish the denomination.
- Microprinting: Look carefully to see the small printed words which appear on Benjamin Franklin's jacket collar, around the blank space containing the portrait watermark, along the golden quill, and in the note borders.
- FW Indicator: The redesigned $100 notes printed in Fort Worth, Texas, have a small FW in the top left corner on the front of the note, to the right of the numeral 100. If a note does not have an FW indicator, it was printed in Washington, D.C.
- Federal Reserve Indicator: A universal seal to the left of the portrait represents the entire Federal Reserve System. A letter and number beneath the left serial number identify the issuing Federal Reserve Bank. There are 12 regional Federal Reserve Banks and 24 branches located in major cities throughout the United States.
- Serial Numbers: A unique combination of eleven numbers and letters appears twice on the front of the bill. Because they are unique identifiers, serial numbers help law enforcement identify counterfeit notes, and they also help the Bureau of Engraving and Printing track quality standards for the notes they produce.
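Given the note above that a serial number is an eleven-character mix of letters and digits, a structural sanity check can be sketched. The layout assumed here (one series letter, one bank letter A-L, eight digits, then a suffix letter or a "*" marking a replacement "star" note) follows the common description of modern Federal Reserve Note serials; the exact letter rules are an assumption, not something this text specifies.

```python
import re

# Hypothetical eleven-character serial layout: series letter + bank letter (A-L)
# + eight digits + suffix letter or "*" (a replacement "star" note). The precise
# alphabets the BEP uses are assumed here -- check official references before use.
SERIAL_RE = re.compile(r"[A-Z][A-L][0-9]{8}[A-Z*]")

def looks_like_serial(s: str) -> bool:
    """Cheap structural check only; it cannot show that a note is genuine."""
    return SERIAL_RE.fullmatch(s.upper()) is not None

for sample in ("LB01234567C", "LB0123456", "LM01234567*"):
    print(sample, looks_like_serial(sample))
# LB01234567C True; LB0123456 False (too short); LM01234567* False (M is no bank letter)
```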
Large Denominations of United States Currency

Today, the base currency of the United States is the U.S. dollar, printed on bills in denominations of $1, $2, $5, $10, $20, $50, and $100. At one time, however, it also included five larger denominations. High-denomination currency was prevalent from the very beginning of U.S. government issue (1861): $500, $1,000, $5,000, and $10,000 interest-bearing notes were issued in 1861, and $5,000 and $10,000 United States Notes were released in 1878. There are many different designs and types of high-denomination notes. The high-denomination bills were issued in a small size in 1929, along with the $1 through $100 denominations. The designs were as follows, along with their 1929 equivalents in current purchasing power (except for the $100,000 bill, which uses the 1934 equivalent):
- $1,000: Grover Cleveland, equal to $12,700 in 2010 dollars
- $5,000: James Madison, equal to $63,500 in 2010 dollars
- $10,000: Salmon P. Chase, equal to $127,000 in 2010 dollars
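The 2010-dollar equivalents in the list above are plain price-index ratios. A minimal sketch of that arithmetic follows; the CPI-U values used (roughly 17.1 for 1929 and 218.1 for 2010) are approximate annual averages supplied for illustration, not figures taken from this text.

```python
# Minimal sketch of the purchasing-power conversion used above: scale a face
# value by the ratio of two consumer price index readings (assumed values).
CPI = {1929: 17.1, 2010: 218.1}  # approximate annual-average CPI-U, illustrative

def adjust(value: float, from_year: int, to_year: int) -> float:
    """Convert dollars of from_year into dollars of to_year via a CPI ratio."""
    return value * CPI[to_year] / CPI[from_year]

for face in (1_000, 5_000, 10_000):
    print(f"${face:,} in 1929 is roughly ${adjust(face, 1929, 2010):,.0f} in 2010")
# Prints about $12,754 / $63,772 / $127,544 -- close to the figures quoted above.
```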
The reverse designs featured abstract scrollwork with ornate denomination identifiers. All were printed in green, except for the $100,000. The $100,000 is an odd bill, in that it was not generally issued, being printed only as a gold certificate of the Series of 1934. These gold certificates (in denominations of $100, $1,000, $10,000, and $100,000) were issued after the gold standard was repealed and gold was compulsorily purchased by presidential order of Franklin Roosevelt on March 9, 1933 (see United States Executive Order 6102), and thus were used only for intra-government transactions. They are printed in orange on the reverse; this series was discontinued in 1940. The other bills are printed in black and green.

Although they are still technically legal tender in the United States, high-denomination bills were last printed in 1945 and officially discontinued on July 14, 1969, by the Federal Reserve System. The $5,000 and $10,000 notes effectively disappeared well before then: only about two hundred $5,000 bills and three hundred $10,000 bills are known, of all series since 1861. Of the $10,000 bills, 100 were preserved for many years by Benny Binion, the owner of Binion's Horseshoe casino in Las Vegas, Nevada, where they were displayed encased in acrylic. The display has since been dismantled and the bills sold to private collectors. The Federal Reserve began taking high-denomination bills out of circulation in 1969. As of May 30, 2009, there were only 336 of the $10,000 bills in circulation, 342 remaining $5,000 bills, and 165,372 $1,000 bills still being used. Due to their rarity, collectors will pay considerably more than the face value of the bills to acquire them.

For the most part, these bills were used by banks and the Federal Government for large financial transactions, which was especially true of gold certificates from 1865 to 1934. However, the introduction of the electronic money system has made large-scale cash transactions obsolete; combined with concerns about counterfeiting and about the use of cash in unlawful activities such as the illegal drug trade, this makes it unlikely that the U.S. government will re-issue large-denomination currency in the near future. According to the US Department of the Treasury website, "The present denominations of our currency in production are $1, $2, $5, $10, $20, $50, and $100. Neither the Department of the Treasury nor the Federal Reserve System has any plans to change the denominations in use today."
THE RELATIONSHIP BETWEEN AFRICAN AMERICAN IDENTITY AND RAP SONGS: A STUDY OF SELECTED RAP SONGS OF GRANDMASTER FLASH AND THE FURIOUS FIVE

Rap music, as one of the elements of hip hop culture, originated in New York's South Bronx neighbourhood in the late 1970s. Its lyrics provide a powerful lens through which to view the many dimensions of the African American predicament. As a form, rap music is for African Americans a means to pen down their history and social circumstances and to forge their identities out of the white-oriented and white-dominated American society and culture. The dominant discourses have relegated African Americans to the margin and excluded them from the power, profits and privileges that whites have long enjoyed in American society. By devaluing blacks in every possible manner, whites were able to hold in place the racial hierarchy of American society. Thus, this dissertation explores rap songs as the medium through which African Americans reflect their predicaments and not only challenge dominant discourses but project their ethnic identities as well. The study deploys postcolonial theory in analysing the selected rap songs, based on the relations between whites and blacks on American soil and how the songs are used in expressing identity-related issues such as racism, marginalization, politics, and legal and economic disparities. The study finds that African American ethnic identity emerged from an identification rooted in a perceived commonality of oppression, suppression and marginalization.

TABLE OF CONTENTS
Title Page
Declaration
Certification
Abstract
Table of Contents
CHAPTER ONE: GENERAL INTRODUCTION
1.0 Introduction
1.1 Generational History of African Americans
1.2 African American Music
1.3 Examples of African American Musical Style
1.4 History and Development of Rap Music
1.5 Rap as Poetry
1.6 Statement of the Research Problem
1.7 Aim and Objectives of the Study
1.8 Justification of the Study
1.9 Scope and Delimitation
1.10 Methodology
1.11 Grandmaster Flash and the Furious Five
1.12 Definition of Concepts
CHAPTER TWO: THEORETICAL FRAMEWORK AND LITERATURE REVIEW
2.0 Theoretical Framework
2.1 Postcolonial Theory as Theoretical Framework
2.2 Tenets of Postcolonial Theory
2.2.1 Centre or Margin
2.2.2 Dislocation
2.2.3 Cultural Differences
2.2.4 Hybridity
2.2.5 Language
2.3 The Colonial Status and Experience of African Americans
2.4 Literature Review
2.4.1 Rap Music
CHAPTER THREE: RAP AS AN EXPRESSION OF AFRICAN AMERICAN ETHNIC IDENTITY IN "THE MESSAGE", "BEAT STREET", "WHITE LINES", "WHAT IF" AND "I AM SOMEBODY"
3.0 Introduction
3.1 The African American Predicament in "The Message"
3.2 The Consequences of War in "Beat Street"
3.3 Legal and Economic Disparities in "White Lines"
3.4 Cultural Dialectic in "What If"
3.5 Identity Assertion in "I Am Somebody"
3.6 Conclusion
CHAPTER FOUR: AESTHETIC AND STYLISTIC FEATURES IN "NEW YORK NEW YORK", "SURVIVAL MESSAGE II" AND "IT'S NASTY"
4.0 Introduction
4.1 Call and Response Technique in "New York New York"
4.2 Improvisation and Innovation in "Survival Message II"
4.3 Braggadocio and Toast in "It's Nasty"
4.4 Conclusion
CHAPTER FIVE: CONCLUSION
5.1 Conclusion

African Americans are citizens of the United States of America whose forefathers were forcefully removed from Africa during the seventeenth and eighteenth centuries. Such Africans were forced into slavery and stripped of their cultural affinities. Consequently, this forceful removal affected the Africans, who found themselves in an alien world and had to learn new ways in order to survive. In the process, Africans were caught between two cultures: an African one on the one hand and an American one on the other. It is important to note that American culture is not universal; Africans taken into slavery found it different from the culture they were used to, and the slaves had to adapt to an alien culture. Thus, it is this dual identity that gave rise to the term African American, a term that deliberately recognizes the African and American cultures that have moulded the African American personality.

One important area on which researchers interested in African American culture focus is "identity". The concept of identity has always been linked to the history of African Americans and their presence in what has now become the United States of America. For a long time, the image or view blacks had of themselves was largely defined by the way whites in America described them in their writings, films and other forms of representation. In the past, in the course of their domicile in the United States, African Americans were called by such names as Negroes, Blacks, and Coloured, but in the last thirty years the terms African American and Black American have been used. The mechanisms the whites put in place to subjugate the blacks went a long way in making them think and feel inferior (Wikipedia). Slavery affected every aspect of their lives. The nearly 300 years of slavery distorted and caused great pain, such that African Americans have had to create a new culture and identity out of their experiences in the new world. This is because the white masters did not acknowledge the right of African Americans to an independent ethnic or cultural identity. As historian T. Vaughan (1995) has noted, Europeans rarely identified African arrivals in the colonies with terms denoting either nation or ethnicity (quoted in Hornsby, 2005). For instance, Africans were referred to as Negroes, a term referring to their skin colour.

Before the beginning of slavery, Africans lived in a society that was predicated upon a religious system and cultural practices (Hamlet, 2011). As such, the people's beliefs, values, norms and history were transmitted from one generation to another by griots and other members of the society who were the custodians of African culture. The transportation of blacks from Africa to the Americas for slavery stripped them of their culture, identity, family and possessions. Language was the first cultural trait the slave traders and holders tried to suppress: on the slave ships, members of the same community were deliberately separated from each other to prevent communication (ibid).
This notwithstanding, and given that the blacks came from different backgrounds, the similarities in the basic structure of their cultures allowed them to form a new mode of communication that was partly African and partly American (Gay and Barber, 1989). Oral tradition was a major cultural vestige that blacks took to the new world. In African American culture, oral tradition has served as a fundamental vehicle for cultural expression and survival; it also preserved the cultural heritage and reflected the collective spirit of the race. African American oral tradition can be traced to Africa. African American cultural expressions have been among the ways of resisting racial oppression and also a way of expressing African American identity. Although the institution of slavery was outlawed in the United States in 1865, African practices continued to evolve into newer modes of expression that provided a foundation for African American cultural or ethnic identity. These include folktales, ritualised games such as the dozens, songs, spirituals, vernacular expression, etc. From the foregoing, rap can also be classified as part of African American cultural expression, as it is said to have developed from earlier art forms such as the dozens and vernacular expression.

Overall, the goal of this study is to analyze the lyrics of the selected rap songs of Grandmaster Flash and the Furious Five. The analysis of rap as an art form demonstrates how the form serves as a vehicle for promoting the identity of African Americans as well as artistically articulating their diverse social and political experiences. Rap acts as a mechanism for retaining and disseminating African American cultural heritage; it is an avenue for speaking out about their predicament in America. The study also looks at the aesthetic features found in the lyrics of Grandmaster Flash and the Furious Five (such as repetition, call and response, and language) and deploys postcolonial theory as its theoretical framework.

1.1 GENERATIONAL HISTORY OF AFRICAN AMERICANS

The history of African Americans can be traced to the time of the Middle Passage, when blacks were forcibly uprooted from Africa and transported to a new and alien world. Blacks who were taken as slaves were stripped of their culture and language. This was important to the slave owners so that the blacks would forget who they were and accept a culture alien to them. For the enslaved blacks, the new world and way of life created a new identity, the result both of the various mechanisms put in place by their owners, such as religion, science and philosophy, and of the improvisation and adaptation that the enslaved came up with. Therefore, though stripped of their cultural vestiges, blacks brought with them a strong memory of rich cultural values; one such value is the importance of family. It is important to note that slave owners did their best to separate members of black families, as children born to a slave were usually sold by the slave master or mistress to another master. But despite all the measures put in place to separate slaves, they were still able to survive and form family ties amongst themselves. Family can be seen in the African sense as a large number of blood relatives who can trace their descent from a common ancestor and who are held together by a sense of obligation; as such, each member of the family is brought up to think of himself in relation to the group as a cohesive unit.
It is these family ties that Africans brought with them to America. Gutman (1976) observes that "African family resilience was transmitted to the Americas, and, thus, assisted in Africans' survival both during and after slavery" (quoted in Hornsby et al., 2005). From the above, it is obvious that African Americans did not forget their roots. They found strength in the memory of what their lives used to be before they were captured as slaves. The African slaves created a culture different from that of their masters, a culture they could call their own. Thus Gutman's argument can be compared to Howard Zinn's observation in A People's History of the United States (1999) that "in a society of complex controls, both crude and refined, secret thoughts can often be found in the arts, and so it was in black society". In this respect, one can surmise that the African American experience in America created a fertile ground for the development of cultural forms ranging from work songs, Negro spirituals, slave narratives and poetry to jazz and blues, to mention but a few.

The Southern states in America were rich in fertile soil, and this part of the country depended largely on slave labour to maintain its farms. Some Americans in the Northern states thought slavery should not be allowed in a free country. In this respect, the American Anti-Slavery Society was formed in 1833 in Philadelphia, with several branches established throughout the free states. The goal of this organisation was to abolish slavery. This did not go down well with the whites in the South, who attempted to justify slavery, using scientific and biblical arguments to the effect that blacks were inferior to whites and were destined to be slaves (Race Timeline, 2003). In 1860, Abraham Lincoln was elected president of the United States. The Southerners did not like Lincoln's ideas because they feared he might free the slaves, which he eventually did (O'Callaghan, 1990). The years following Lincoln's election resulted in a civil war between the Southern states and the North. The Northern states won the war, and American slaves were consequently set free. In reaction to the Emancipation Proclamation and the Thirteenth Amendment, Southern states passed laws known as Black Codes, which stipulated the inferiority of the blacks. Such codes ensured that blacks would remain without property, education and legal protection; blacks were denied the right to vote and could not give evidence against whites or act as jury members. This caused the United States Congress in 1866 to pass a Civil Rights Act providing full rights for all people born in the United States (Davis, 2008). The legislation was not effective, as the Southern states rejected it, and this led the North in 1867 to pass the Reconstruction Act. The South was placed under military rule. This action taken by the North only increased the hatred the Southern whites had for the blacks. Whites in the Southern states created an organization called the Ku Klux Klan, which devised ways of threatening, murdering and lynching blacks (ibid). Another means they used in suppressing blacks was the Jim Crow laws. These laws preached separatism: there were consequently separate hospitals, schools, public transport, restaurants and theatres for blacks and whites (ibid). The fate of blacks was sealed.
Although they were freed and enslavement was abolished by law, to be black still meant being a second-class citizen, limited in basic human rights. Nor was the Fifteenth Amendment, passed on 30 March 1870, which forbade restricting the right to vote on account of race, colour or previous condition of servitude, able to improve the situation of blacks (ibid.).

World War I and World War II created an avenue for blacks to migrate from the South to the North. They moved north to cities such as Chicago and New York City and to states such as Michigan, because such places offered African Americans better access to education, economic opportunities and cultural institutions, which they could not get in the rural areas of the South, where blacks remained more isolated and uneducated. The movement of blacks to the North allowed them to form strong communities with the ingredients for the development of black culture (Stevens, 1991).

The Civil Rights Movement came to prominence during the mid-1950s in the United States and had its roots in the centuries-long efforts not only to abolish slavery but to address the aberration of racism. It was a response to racial discrimination and was used to agitate for full civil liberties for blacks. In 1954 the Supreme Court decided that segregation in schools was against the constitution. In 1955 a black woman, Rosa Parks, was arrested in Montgomery, Alabama, because she refused to give up her seat to a white passenger. This led blacks to boycott the buses, and the boycott was led by Martin Luther King, who became the leader of the Civil Rights Movement. In 1964 the American Congress passed the Civil Rights Act, which banned discrimination in schools, public places, jobs and many other fields (Markova, 2008).

The Black Power Movement and the Black Arts Movement both manifested during the Civil Rights era, particularly in the 1960s. Both were related to African Americans' desire to attain recognition as full citizens of the U.S., and both concepts are nationalistic. The Black Power Movement was concerned with politics, and the period also witnessed a cultural and artistic revival. The Black Power Movement had been around since the 1950s, but it was Stokely Carmichael, head of the Student Non-violent Coordinating Committee (SNCC), who popularised the term in 1966 (Coombs, 2004). The Black Power Movement instilled a sense of racial pride and self-esteem in blacks; it encouraged African Americans to join or form political parties that could offer a foundation for real socio-economic progress, and it aspired for blacks to define the world on their own terms.

The Black Arts Movement was an association of African American visual artists, writers, poets, playwrights and musicians. The movement took definite shape around 1965 and lasted into the late 1970s. Blacks involved in this movement were united by a desire to cultivate a vital black aesthetic, different from the standards of whites, that reflected and addressed the particular experiences and sensibilities of African Americans. The movement set out to re-affirm the intrinsic beauty of blackness, an explicit challenge to centuries of racism (Neal, 1968). African Americans who contributed to this movement include Gwendolyn Brooks, Nikki Giovanni, and LeRoi Jones (Amiri Baraka), to mention but a few.
Today, issues of discrimination remain, though African Americans have made and are still making significant contributions to every part of American society, be it business, science, politics, art or entertainment.

1.2 AFRICAN AMERICAN MUSIC

African American art forms such as poetry, narrative, music and song are related to the society from which they emerged. The scientific and biblical arguments that were used as weapons to justify slavery served, ironically, as the foundation for African American arts: Africans who were taken as slaves had to readjust to a world alien to them. Throughout history, people of African origin in the United States, otherwise known as African Americans, have developed several music genres, beginning with Negro spirituals, blues and jazz and extending to the most recent genre of rap music. It is important to note that rap is one of the five elements of hip hop culture. Thus music is a vital component of African American culture.

Music has always been a defining aspect of African American culture, ever since the passage of the slaves from West and Central Africa to the New World. Through their music and songs, the first African Americans were able to keep a sense of their African identity. Music gave a sense of power, of control. If it did not improve the material well-being of its creators, it certainly had an impact upon their psychic state and emotional health. It allowed them to assert themselves, their feelings and their values, and to communicate continuously with themselves and their peers. They could partly drop their masks and the pretence and say what they felt, articulate what was brimming up within them and what they desperately needed to express (Daniel and Smitherman, 1976). Music, along with other forms of the oral tradition, allowed African Americans to express themselves, to derive pleasure and to pass on these forms for posterity. In view of this, Franklin and Moss (1994: 25-26) posit:

African slaves came from a complex social and economic life, and were not overwhelmed or overawed by their New World experiences. Despite the heterogeneity characteristic of many aspects of African life, African people still had sufficient common experiences to enable them to cooperate in the New World in fashioning new customs and traditions which reflected their background.

Franklin and Moss both surmised that as Africans of different experiences were forced to live together, the interaction of various African cultures resulted in a new culture of their own. African American culture must be seen as the product of African American experiences in America. Although the content of African American culture grew out of the American scene, its style did have African roots. It is these African roots that the slave brought with him: a highly developed sense of rhythm which was passed from generation to generation, and an understanding of art which conceived of it as an integral part of the whole of life rather than as a beautiful object set apart from mundane experience. Song and dance, for example, were involved in the African's daily experience of work, play, love and worship. In sculpture, painting and pottery, the African used his art to decorate the objects of his daily life rather than to make art objects for their own sake (Coombs, 2004). Out of the African American experiences and the memory of their past lives grew a new culture which was passed down to subsequent generations of African Americans.
This buttresses Coombs' point when he further affirms that although the Africans brought their feeling for art with them, the content of their art was changed as a result of their American slave experience. As such, the African American cultural spirit became emotional, exuberant and sentimental (ibid). This is to say that the African American characteristics which have generally been thought of as African and primitive (naivety, exuberance and spontaneity) are, in reality, a response to the American experience and not a part of the African heritage. They are to be understood as the African's emotional reaction to his American ordeal of slavery. Out of this environment, along with its suffering and deprivation, has evolved an African American culture (ibid).

The misrepresentation and marginalization of blacks in America created an avenue for an African American culture to develop that is distinct from the culture of their oppressors. As such, African Americans attempted to reassert their identity, instead of being represented by others, by taking materials from African American culture and their experiences in America. African American consciousness, or nationalism, became noticeable towards the end of the 19th century. A number of African Americans left the South to escape oppression, and this led to the Great Migration. They moved to Northern cities like Chicago, Philadelphia and New York to form strong black communities. The Great Migration expanded black communities, which created a fertile environment for black culture to grow. The migration fostered African American nationalism, which contributed to the emergence of a new type of African American who was becoming increasingly conscious of his value as a black person. For instance, Harlem, a neighbourhood in New York, turned into the largest metropolis of the black world. It is therefore no coincidence that Harlem, with its newly found self-confidence and African-oriented racial feeling, stimulated rich literary activity (Berghahn, 1977).

The Northern black middle class in the early part of the 20th century began to set up a number of political movements that advocated racial equality, inspired racial pride and confronted the prejudices or stereotypes that blacks were ignorant, servile and unintelligent. One such political movement is the National Association for the Advancement of Colored People (NAACP). Alain Locke, a leading black intellectual, edited a volume of critical essays and literature entitled The New Negro (1925). Like Marcus Garvey, Locke preached the political and cultural rebirth of the black race. This rebirth was manifested by a creative outburst of art, music and literature as well as by a new mood of self-confidence and self-consciousness within that community. The centre of this explosion was located in Harlem, and the period became known as the Harlem Renaissance (Coombs, 2004). According to Locke, the most important task for African Americans was to rehabilitate the black man throughout the world and to demolish the prejudices which had been carried over from slavery (quoted in Berghahn, 1977).

The 1960s and 1970s saw the emergence of an artistic and cultural movement among African Americans in America. At this time African Americans were fascinated with the African continent: they studied African art, language, culture and history. Thus Africa, not America, was regarded as the real home of the African Americans (Berghahn, 1977).
African Americans became more aware of their position within American society and tried to give it a constructive meaning. As such, black intellectuals felt that African Americans had a justified claim to demand equality with whites in America. It is important to note that the intellectuals of this period did not abandon the militant spirit reminiscent of the New Negro of the 1920s, although some of their beliefs and ideas appear more cynical and disillusioned. This is because intellectuals of the 1920s like Garvey advocated a return to Africa, whereas the intellectuals of the 1960s and 1970s, though fascinated with Africa, realised that they were of Africa but did not feel at home there, having been disconnected and forcefully uprooted. At the same time, African Americans remained outsiders, rootless in American society. Their bitterness, undoubtedly, springs partly from the dashed hopes of blacks in an anti-black America.
I’ve had quite a few people asking me about chores – how to set them up, whether to pay for them, how much to pay, whether allowance should be tied to chores, whether you need chore charts with rewards, and what to do if your kid won’t do their chores. In this podcast I want to address all those issues and more. Chores let your kids develop life skills that, if taught well, will launch them into a good place in life. I’ll start with the research behind why chores are important and then I’ll get into the nitty-gritty of how to implement chores with kids of various ages. First, the research…

Research shows that kids who do chores grow into happier, healthier, far more successful adults, and the sooner parents start them on chores, the better off they are. There have been two ground-breaking studies looking at success and its correlations with behavior and upbringing. One is the Harvard Grant Study, which gathered data on individuals over 75 years, and the other is a University of Minnesota study that followed individuals over 20 years. Both published a ton of results in 2015. Here are some brief observations I want to highlight for you:
- It starts young: The best predictor of success in young adulthood, on measures related to education completion, career path, and personal relationships, was whether children had begun doing chores at an early age — as young as 3 or 4.
- Professional success – doing chores was significantly correlated with academic and career success, and there are even indications that early chores were linked to higher IQs.
- Relationship skills - “A kid who learns early to do chores will be a more generous and cooperative partner. It’s easier to live and work with a person who has learned to take care of his or her own stuff and to be responsible for some of the boring work that adult and family life requires.” Chores teach kids vital relationship skills like cooperation, teamwork, and respect for others. I bet we all know someone from college who was the biggest slob and most thoughtless roommate ever - never picked up after themselves, didn’t do the dishes, left the counter dirty and disgusting after cooking. Yuck.
- Mental health - researchers found that participation in chores as children was a better predictor of mental health in adulthood than social class or family conflict.
- Organization, time management and delayed gratification - Kids who do chores learn to organize their time and to delay gratification. Both are vital skills for later success. If you have to do the dishes before playing video games and your friends are playing at 7pm, then you’d better get those dishes done before then. Having to fit in chores forces kids to learn to manage their time. Julie Lythcott-Haims, who wrote the book How to Raise an Adult, said, “While it can be tempting to give kids a pass on busy homework nights, real life is going to require them to do all of these things. When they're at a job, there might be times that they have to work late, but they'll still have to go grocery shopping and do the dishes."

In the Harvard Grant Study, researchers identified two things that people need in order to be happy and successful. The first? Love. The second? Work ethic. What's the best way to develop work ethic in young people? Among the high-achievers who were part of the study, there was a consensus about what gave them a good work ethic: a "pitch-in" mindset. This is a mindset that says, there's some unpleasant work, someone's got to do it, it might as well be me... and that's what gets you ahead in the workplace.
The drawback we have as parents, however, is that having our kids do chores doesn't necessarily wind up being less work for us, does it? It takes more time to teach our kids to do chores, and to do them well, than to just do the chores ourselves. And how many of us look forward to nagging our kids and reminding them day after day to do their chores? Now that we know the long-term benefits of doing chores, let’s take a close look at the practical side of what we can do to arrange for chores in our households.

PRACTICAL SIDE OF CHORES

To Pay or Not to Pay For Chores

I want to start by addressing one major issue - should we pay for chores? I firmly believe we shouldn’t. A family is a unit of people who need each other and love each other. It takes work to take care of a family, and there’s no reason why kids can’t learn at an early age that pitching in is just something they need to do. Remember that life skill we learned about earlier? The “pitching in” skill? We do need to set up chores with love and encouragement, though, instead of nagging and threats. When we pay our kids for chores, they start to think that if they don’t get paid then they don’t have to work. Or, if they don’t need the money, then they don’t need to do the chores either. They become workers for hire and not contributing family members. We threaten to withhold money when chores aren’t done, and this shouldn’t be about money; it should be about pitching in. I do want to say that I believe in giving kids an allowance as a means to learn about handling money, but it should be separate from chores. Teaching kids about money is so important, actually, that I’ll do a separate podcast on it soon, so stay tuned for that.

To help you on the practical side of things, I’m going to go over my recommendations for chores by age. I’m going to give you some basic examples, but after you’re done listening feel free to visit my PARENTING DECODED Pinterest board on Kids Chores.

For kids 2-3 years

You want to start young. Yep, really young. I’d start as early as two. Richard Bromfield, who wrote the book How to Unspoil Your Child Fast, put it nicely: “When kids are really young, they want to help you rake leaves or prepare dinner. Take those opportunities to let kids help. Those moments are infused with love and connection. By the time they're older and really able to do [those tasks] competently, they've lost interest." Carpe diem! Seize the day! A 2- or 3-year-old helping to sweep the back porch, dust the bookshelves, or make a snack in the kitchen with a parent is a happy kid. When they grow up and inevitably have to accomplish these things, they’re less likely to rail against them if you started early and naturally. What can a 2- or 3-year-old do?
Here are ideas, starting with tasks a toddler can handle and working up through jobs for older kids and teens:
- Pick up toys
- Wipe up spills
- Clear places at meal times
- Help put away groceries
- Sort recycling
- Put dirty clothes in laundry
- Make their bed
- Sort laundry and put away clothes
- Feed pet
- Set the table
- Make a small snack or help with dinner
- Pull weeds
- Water plants
- Sweep porch
- Get themselves out of bed in the morning
- Make lunch for school
- Do their laundry or at least fold it
- Cook a simple meal
- Load/unload dishwasher
- Clean up after the dog
- Clean the bathroom
- Take out the trash
- Do all of their own laundry
- Mow the lawn
- Cook a complete meal
- Wash the car
- Mop the floors
- Help with younger children
- Basic home repairs (light bulbs, dust a fan using a ladder, tighten loose screws)

I want to talk now to families whose older kids have been doing few or no chores so far. I’m mostly talking about families with teens or tweens, but if you have elementary kids who aren’t doing chores this can be helpful to you as well. If you have kids in this category, it will be a huge adjustment for them, that’s for sure. Our society has transitioned to valuing homework more than teamwork, so we’ve given our kids a “pass” when it comes to contributing, and they’re likely to resist your efforts to get them to contribute.

For starters, I am going to give you the number one chore you need to have your teen or tween start doing right now. It only involves them. If they don’t do this chore, it only hurts them – not you, not the rest of the family, not even the family dog or cat. What is it? LAUNDRY. Set up a Family Meeting and announce that starting in one week you’ll allow your children to do their own laundry whenever they’d like, as long as you’re not using the machines yourself. You let them choose when to have a lesson on how to use the washer and dryer. You also let them know that once they are trained, they are responsible for using the appliances appropriately or paying for the repairs. Lovingly let them know that you will always provide soap and answer specialty questions that arise, but their laundry will now be their laundry. Then, you implement this. Things might get stinky in their rooms. Just shut the door. They need to take care of themselves, and this is the perfect life skill and chore for them to own. Some parents think they’ll waste water, but that is much less likely than them not cleaning their clothes often enough. Here’s what else you need to do: no yelling, no reminding, no nagging. If you have an athlete, all the more reason to get them in the groove early. They might come to realize they need more underwear to stretch out washings to once a week or once every two weeks. Great! Let them buy more underwear! They can use their own money. If they dye a load of laundry pink because they didn’t separate their colors correctly, let them wear pink or replace things with their own money. Your child won’t fold their laundry, won’t put it away? Don’t lift a finger. Let them wear wrinkled clothes. Let them figure out what is clean and what is dirty. Just stay away. Assist them by answering questions by all means; just don’t do their laundry. Ok, feeling better? Do you think you can get that one implemented at your house? Good! This laundry chore will get you on a path toward where you really want to be: getting them more involved in chores around the house. So, what’s next? Here’s what I did with my boys when they hit middle school. The process I’m going to describe takes a bit of time to implement, but I really think it is worth the effort.
It absolutely was for me. Start by taking a piece of binder paper and taping it to the fridge in your kitchen. Every day, many times a day, write the chores that everyone in the family does on the list. Take about two weeks to write down all the chores so that you get a really good cross section of things that need doing. Add pages as they get full. I told my boys about the list and encouraged them to write down their chores if they didn't see them on the list, but it was a list of all our chores, not just theirs. What was on the list? Grocery shopping, driving kids to school, making breakfast, lunch and dinner, paying bills, earning the money to pay the bills, vacuuming, planting the garden, making beds, cleaning the dishes, setting the table, etc. Our list was about three pages long in the end. Next, organize the list into categories – daily (making beds, setting the table), weekly (taking garbage bins to the street, combing the cat), monthly (cleaning their bathroom) and random (changing light bulbs, refilling TP, washing the car). I happened to put all mine into a spreadsheet so I could manipulate them more easily and add columns for who would do each chore, but do whatever works for you. Last step: have a Family Meeting and brainstorm who does what. True confession: the first time I did this I hadn't categorized by daily/weekly/monthly, and it was a disaster. I had to re-think my process and hold another Family Meeting a few days later, which is what I'm describing now. Haha… you can learn from my mistakes! My kids had already had chores, but this magic list showed them that mom happened to be doing LOTS of the chores, with dad in second place. I was a stay-at-home mom at the time, so it wasn't all that surprising. For their daily chores I just asked for two simple things in their rooms before school – straighten up their beds and open their blinds. I love light in my house and I really wanted that help. They agreed it seemed reasonable. They had other daily chores, but those were my wins from doing this. For their weekly chores, they got to decide when they did them – which days worked best in their busy schedules. This is where using choices was key. I wanted them done; they could say when! They also chose to own some chores and rotate others. It seemed that neither wanted to clean the litter boxes for our cats, so they rotated that one with taking the garbage bins to the street. I was flexible! It didn't matter to me when, just that they helped. I also had a commitment from my husband and boys that if I cooked, they'd clean the dinner dishes. We would all take our plates over to the counter, but then one boy would help dad wash the pots and load the dishwasher and the other one was responsible for cleaning up the leftovers and counters. Again, choices! I could chill while they happily picked their after-dinner music and cleaned up. It never took more than 15 minutes. This again was a chore I used to pretty much do all by myself, and not always happily. Another win! However, my real coup, if you ask me, came when I showed them the "random" list of jobs, the ones that don't have a schedule. It had about 40 jobs on it. I was pretty much doing most of the 40 jobs, and they all could see that now. Before we created this list, they had no idea how long it was. I asked them to each pick 4 jobs from the list. I didn't care which ones; just pick and be responsible. Their eyes lit up. Only four! Wow! That's a steal! They were expecting 15 or something.
While that doesn't seem quite fair in some ways to me, I was thrilled to have one son now be the permanent light-bulb-changer and the other the toilet-paper-refiller and foaming-soap-refiller. I can't even remember the other ones, but it was awesome. Just the week before we made this list I had asked one of my sons to replace a lightbulb. He had no interest whatsoever, especially since we had high ceilings and a lot of the bulbs needed a ladder to get to. Well, the very next week after the new jobs were selected, I got 4 light bulbs changed by a happy teen. Yep! He smiled and just went off to change them. I encourage you all to make your list and get buy-in for some assistance. Chores are good for your kids even if they won't admit it.

Chore Charts, Chore Jars and Chore Events

Next, I'm going to talk about how you might track and set up the chores. In my research I've found quite a few clever ways parents let their kids know what chores to do: chore charts, chore jars and chore days or mornings.

Chore Charts – a simple chart that has chores listed and maybe the days of the week. You can use a marker or stickers so the child can show they are done with a chore. Simple. Some families collect stars and give a reward, but since rewards are kinda like paying for chores I'm not all that keen on them; I just use charts for tracking what's to be done. If your child can't read, by all means use pictures. If your child is older, have Family Meetings to discuss what chores will be done by whom and when. The more choices you can give your kids over chores, the more ownership they will have in completing them.

Chore jars – I love some of the Pinterest ideas where you take popsicle sticks and write all the chores on them and put them in a jar. Each person in the family can then pick a stick, do the job and then put it in the "completed" jar when they're done. Have different jars for different ages if you need to. Be creative!

Chore days or mornings – Some families pick one day on the weekend, maybe Saturday morning, where they all do chores together. A list is posted that morning of what needs to be done, and everyone pitches in until they are all completed.

Consequences for Not Doing Chores

Let's move on. We might agree on the concept of chores, but what if our kids won't do them without lots of nagging and threats? We need to stop nagging and threatening. I need you to go back and listen to Podcast #10 on how to set up good consequences. Using the Love and Logic® technique called Energy Drain that those of you who came to a class learned, as well as setting some good limits as to what will happen if chores aren't done, is the direction you need to head in. If you don't know the Energy Drain technique, I'll put a link to the audio you can download from Love and Logic®. When kids are younger, try a simple limit stated positively, like: "Anyone who has finished their chores is welcome to sit down at the dinner table." Or "I read books to kids who have put their clothes in their hamper." These work really well for little ones. For snarky teens and tweens you might need something more like, "Gee, it really drains my energy to see all those dishes sitting in the sink. What are you going to do to put my energy back?" If they refuse, just like I describe in Podcast #10, the next day might look like: "I drive kids to school or soccer practice who have put my energy back." Or, "I allow kids to use electronics who've put my energy back." You need to keep calm, and you need to not nag or yell.
I know it can be hard but, believe me, if you're consistent, your kids will trust that you mean what you say. I do want to cover one more advanced concept that worked great for my own boys. I never yelled or nagged about doing chores. I let it be known that I'd be happy to do any chore for them, and I posted a list of charges on my kitchen bulletin board. It was only $20 for me to take the garbage bins to the street, $5 to refill TP and $10 to comb the cat. Everything had a price. I collected my charges once a month, tracked on the pink note cards that went up on the bulletin board whenever I did a job for them. It allowed me to be a happy mom, and they got to be responsible, since they didn't like giving me their money. This whole setup I'll explain in a future podcast on how to teach kids about money, but for now: put prices on things. I also bargained to take down a pink card if they did one of my jobs. I was flexible! I'd even tell you to feel free to post what you'll pay kids to do your chores if they want to earn money as well. Did I give you enough practical ideas on how to get some chores done at your house? I hope I haven't overwhelmed you. Helping you realize that our kids need chores is what I hope I've accomplished here. Let your kids grow and experience real life; get them out of the academic and performance-oriented bubble our society has been forcing them into. Help create humans who care to pitch in and understand that life isn't all about them; it's about creating a loving environment where we can work to solve problems together. I loved how Julie Lythcott-Haims put it in her book How to Raise an Adult: "By making them do chores -- taking out the garbage, doing their own laundry -- they realize I have to do the work of life in order to be part of life. It's not just about me and what I need in this moment."

Here's the link to PINTEREST KIDS CHORE BOARD
Over the years microscopy has benefited from many technological advancements. There is now a wide range of different types of microscopy, including bright-field, differential-interference-contrast (DIC), confocal, scanning electron (SEM) and transmission electron (TEM). Although there has been impressive improvement in the development of better means of capturing bioimages, the analysis of bioimages still remains a challenge (Eliceiri et al., 2012). It is common for researchers to extract qualitative information from an image by simply looking at it. However, this can result in unconscious bias and is often error prone. In particular, our brains have been conditioned to infer patterns (Witkin & Tenenbaum, 1983) and our eyes are ill suited to distinguishing colour intensities (Gouras & Zrenner, 1981); both traits make us bad at quantitative analysis. For more advanced image analysis, researchers most commonly make use of ImageJ/Fiji (Schindelin et al., 2012; Schneider, Rasband & Eliceiri, 2012), simply referred to as Fiji from now on. Fiji has an advantage over generic image analysis software in that it can work directly with images from most microscopy instruments. This is achieved by making use of the Open Microscopy Environment's Bio-Formats API (Linkert et al., 2010). This, along with a relatively easy to use graphical interface and a vibrant user community (Schindelin et al., 2015), has made Fiji the standard tool in bioimage analysis. Beyond Fiji, a wide range of tools, libraries and frameworks exist to solve various aspects of the image analysis problem. These include: (1) tools designed to solve very specific analysis problems, such as quantifying plant phenotypes (Pound et al., 2013; Fahlgren et al., 2015); (2) tools specifically designed for (bio)image analysis, such as Imaris (www.bitplane.com/imaris/imaris) (Möller et al., 2016; Marée et al., 2016); (3) analysis pipelining tools, some dedicated specifically to bioimage analysis, such as Icy (http://icy.bioimageanalysis.org/) (Carpenter et al., 2006; Kvilekval et al., 2010), and some with more general application but specific adaptations to bioimaging problem domains (Berthold et al., 2007); (4) general purpose mathematical or statistical analysis tools with packages designed for image analysis, such as R (R Development Core Team, 2016), MATLAB (Version 7.10.0; The MathWorks Inc., Natick, MA), Mathematica (Wolfram Research, Inc., 2016) and Python; (5) software libraries or frameworks providing image analysis methods and transforms (Lowekamp et al., 2013; van der Walt et al., 2014; Bradski, 2000). Some of the tools and libraries can be integrated with interactive data exploration tools that aid reproducibility, such as Jupyter Notebook (NumFOCUS Foundation, http://jupyter.org), Beaker Notebook (Two Sigma Open Source, http://beakernotebook.com) and VisTrails (https://www.vistrails.org/). In our work, providing bioimage analysis support to various biology labs, we found that we needed a tool that added functionality to basic Python scripting to:

- Quickly explore data and test a range of analysis methods step-by-step.
- Provide easy access to existing powerful image analysis frameworks (such as ITK, scikit-image and OpenCV).
- Automatically add tracking of image manipulations to enable both auditability and reproducibility.
- Take small scale (single image or part of image) analyses developed and tested on a desktop/laptop, and transfer them to run on large datasets on a compute cluster or powerful dedicated workstation.
These goals were driven by a desire to be able to quickly explore bioimage data in a reproducible fashion and to be able to easily convert the exploratory work into software for experimental biologists. Reproducibility is a key pillar of scientific research and an aspect that is becoming more and more scrutinised as funding bodies strive towards a more open approach. The available tools all provided some part of this functionality, but none provided all of it, and many of the more complex frameworks required substantial supporting infrastructure to install and run. We needed a lightweight tool that would enable us to make use of the power of image analysis libraries with Python bindings and Bio-Formats' data conversion capabilities, but without adding substantial installation or maintenance overhead. We therefore developed a Python tool that enables those with bioimage data to: (1) quickly view and explore their data; (2) generate reproducible analyses, encoding a complete history of image transformations from raw data to final result; and (3) scale up analyses from initial exploration to high throughput processing pipelines, with a minimal amount of extra effort. Here we present our efforts to produce this tool, a Python package named jicbioimage. The jicbioimage framework was implemented in Python. Python allows rapid exploration of data and has a rich ecosystem of scientific packages, including several aimed at image analysis; for example scikit-image (van der Walt et al., 2014), Mahotas (Coelho, 2013), OpenCV (Bradski, 2000), and SimpleITK (Lowekamp et al., 2013).

Building on the works of others

Rather than reimplementing existing functionality from scratch, we decided that the framework should form a layer on top of the works of others, see Fig. 1. Firstly, we decided to leverage the Bio-Formats tools for parsing and interpreting bioimage files (Linkert et al., 2010). Secondly, we decided that the framework should not directly implement image analysis algorithms. Rather, it should be able to make use of the work done by others in this field. By allowing users of the framework to use the wide range of existing implementations of image analysis algorithms, such as edge detection or thresholding, the framework becomes powerful and flexible.

Loading bioimage data

Our first goal (allowing rapid exploration of bioimage data) was addressed by providing functionality for loading and manipulating bioimage files. A bioimage file can contain more than one 2D image, often described as 3D, 4D or even 5D data. The framework therefore provides the concept of a microscopy image collection, which contains all the 2D images from the bioimage file. The microscopy image collection can provide information about the number of series, channels, z-stacks and time points in the collection, as well as methods for accessing individual images and z-stacks. This functionality was implemented as a thin wrapper around Bio-Formats' bfconvert tool. The bfconvert tool can convert many kinds of bioimage file into a set of appropriately named TIFF files. The converted TIFF files are cached in a backend directory. This means that the conversion only needs to happen once, which helps shorten the iteration cycle during the development of new analysis workflows. However, this detail is hidden from the end user, who simply needs to write three lines of code to load a bioimage. The hypocotyl.czi file (Olsson & Calder, 2016) is a 3D image (containing a number of z-slices) with information in two color channels.
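A rough sketch of what those three lines can look like is shown below. The DataManager is part of the framework (it is referred to again later in this paper), but the FileBackend helper, the import path and the exact load() signature are assumptions for illustration, not verbatim package code.

    # Sketch: load a bioimage via the cached bfconvert backend described
    # above; FileBackend and the load() signature are assumptions.
    from jicbioimage.core.io import FileBackend, DataManager

    backend = FileBackend("backend")  # directory caching the converted TIFFs
    data_manager = DataManager(backend)
    microscopy_collection = data_manager.load("hypocotyl.czi")

Because the converted TIFFs are cached in the backend directory, re-running such a script skips the conversion step entirely.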
This particular microscopy image is of the hypocotyl of Arabidopsis thaliana. The first channel (0) contains the intensity originating from a nuclear marker and the second channel (1) contains intensity originating from a cell wall marker, see Fig. 2A. Methods providing access to specific 2D images as well as z-stacks are provided by the microscopy collection.

Reproducible data analysis

To help researchers easily understand how their data are transformed during each step of the data analysis workflow, the framework contains a transformation function decorator. This function decorator can be applied to any function that takes an image as input and returns an image as output. When decorating such a function, two pieces of functionality are added to it: (1) the ability to append information about the function itself to the history of the returned image object; and (2) the ability to write the result of the transformation to disk, as a PNG file, with a descriptive name. Below we illustrate the use of the transformation function decorator by implementing a basic threshold transformation and applying it to the image loaded earlier. When we apply the threshold_abs() transformation, information about this event is appended to the history of the image. Furthermore, the image resulting from the transformation is written to a file named 1_threshold_abs.png in the working directory, see Fig. 2B. The history of an image provides a record of how the image was originally created. In this case the image was created by reading in a file representing the 2D image of channel 1, z-slice 31, see the first line in the output below. The events stored in the history of an image provide an audit log of the transformations that the image has experienced, note the threshold_abs(image, 50) event in the script's output above. Without the additional functionality provided by our transform decorator, we would need to include explicit code to save the image and record the history. Not only would we need to recreate such code for each step of the image analysis, but we would need to carefully ensure that data paths existed, filenames were consistently constructed and meaningful, and that images were saved in an appropriate format. We would also need to ensure that we recorded that this processing step had taken place and at what stage of processing it had occurred. With the transform decorator, all of these things happen automatically and consistently. Because the transformation function decorator can be applied to any function that has a numpy array as input and output, it is easy to wrap image transforms from other Python libraries. In fact, most of the transforms in the jicbioimage.transform package are thin wrappers around functions from the scikit-image (van der Walt et al., 2014) library. The numpy array encodes multidimensional numerical data as a block of memory (van der Walt, Colbert & Varoquaux, 2011) together with information about the size and shape of that data. Within the Python programming community, it is a widely used standard designed specifically for scientific computation. The jicbioimage framework has an Image class that is an extension of this numpy array. One extension is the history property described earlier. Another extension is the addition of a png() method, which returns a PNG byte string of the image. This method is used by the transformation function decorator to write the transformed image to disk.
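A hedged sketch of the basic threshold transformation described above is given below, continuing from the loading sketch. The import path of the transformation decorator and the collection's image() accessor name are assumptions; the threshold_abs(image, 50) call mirrors the history event quoted in the text.

    # Sketch of a decorated transform; the decorator's import path and the
    # collection's image() accessor are assumptions for illustration.
    from jicbioimage.core.transform import transformation

    @transformation
    def threshold_abs(image, threshold):
        """Return a boolean image, True where intensity exceeds threshold."""
        return image > threshold

    image = microscopy_collection.image(c=1, z=31)  # cell-wall channel
    # Applying the transform appends a threshold_abs(image, 50) event to the
    # history and writes 1_threshold_abs.png to the working directory.
    thresholded = threshold_abs(image, 50)

The decorated function body stays a one-liner; the auditing and the PNG snapshot come for free from the decorator.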
The fact that the Image class is simply a numpy array makes it easy to work with other scientific Python libraries, in particular libraries such as numpy (van der Walt, Colbert & Varoquaux, 2011), scipy (Jones, Oliphant & Peterson, 2016; Oliphant, 2007; Millman & Aivazis, 2011), scikit-image (van der Walt et al., 2014), OpenCV (Bradski, 2000), SimpleITK (Lowekamp et al., 2013), and Mahotas (Coelho, 2013), which all provide useful algorithms for image analysis.

Tools for working with segmentations and making annotations

The development of jicbioimage has been driven by our need to provide image analysis support across the John Innes Centre (www.jic.ac.uk). As such, features have only been added after we have found a repeated use for them. Having added support for working with bioimage data and transformations, we found that we continually had to implement custom code for working with segmentations. We therefore implemented a SegmentedImage class and a Region class. The SegmentedImage class inherits from the base Image class, but provides its own png() function that represents the segmentation as a false colour image. In a SegmentedImage a segment is represented by pixels that have the same positive integer. The integer zero is reserved for representing background, and all other positive integers represent potential identifiers for segments. The SegmentedImage class then provides methods for accessing the set of unique segment identifiers and for accessing the segments as regions by their identifiers. The Region class extends a boolean numpy array representing the mask of the region of interest. It has functionality for accessing a number of useful properties of the region that can be calculated on the fly, for example the area and the perimeter. Some of the properties that can be accessed from a region are themselves instances of the Region class; examples include the border and the convex hull. We found that in communicating with experimental biologists it was useful to be able to create annotated images. We therefore implemented an AnnotatedImage class with convenience functions for loading a gray-scale background image, masking out regions of interest, drawing crosses and writing text. In order to facilitate rapid exploration of data and ideas, the framework has built-in integration with IPython (Perez & Granger, 2007). Our Image class (the base 2D unit of processing) provides the methods needed by IPython to enable it to be directly viewable in an IPython/Jupyter notebook (particularly the _repr_png_() method). This enables those analysing their image data to see the output of each stage of data analysis and exploration, allowing the framework to act as part of an interactive, reproducible workflow. The desire to be able to convert exploratory work into tools that can be distributed to experimental biologists was largely satisfied by implementing the framework in Python. Python has extensive support for creating command line tools, graphical user interfaces and web applications. To aid in reproducibility, the framework is under Git version control hosted on GitHub (github.com) and stable releases are available through PyPI (pypi.python.org). This allows scripts developed with the tool to be easily shared and published. The framework has been designed to be modular. At the top level, jicbioimage is a namespace package into which other sub-packages can be installed. This allows the dependencies of the sub-packages to be independent of each other.
The framework currently has four sub-packages, including the jicbioimage.transform and jicbioimage.segment packages used in the example below. The framework has been developed using a test-driven approach and has full test coverage. Although this does not mean that the framework is bug free, it gives some level of confidence that existing functionality will not be broken as the framework is developed. The tests are run on Linux and Windows each time the code is pushed to GitHub, using the Travis CI (travis-ci.org) and AppVeyor (www.appveyor.com) continuous integration services. The framework has both high level descriptive as well as API documentation. The documentation is built each time changes are pushed to GitHub, using Read the Docs' hosting services (https://readthedocs.org/). The framework is supported on Linux, Mac and Windows and works with Python 2.7, 3.4, and 3.5. The framework depends on bftools and freeimage (The FreeImage Project, http://freeimage.sourceforge.net/) as well as the numpy, scipy and scikit-image Python packages. Detailed installation notes can be found in the online documentation, jicbioimage.readthedocs.io. Below is an extended example illustrating some of the aspects of the framework described here. The code segments an image into cells to show the functionality available in the jicbioimage Python package. Segmenting images into cells is a very common problem in bioimage analysis, as it enables understanding of cell level properties such as volume and shape. Segmentation is also a requirement if one wishes to locate other features of interest within cells. First the microscopy data is loaded and the 2D image representing channel 1, z-slice 31 is retrieved. The segmentation protocol makes use of the absolute thresholding transform discussed earlier, a number of transformations built into the jicbioimage.transform package, and two functions from the jicbioimage.segment package. These imported functions, together with threshold_abs(), are used to segment the image. Because all of the functions imported from the jicbioimage.transform and jicbioimage.segment packages have been decorated with the transformation decorator, they automatically write out the resulting images to the working directory, see Fig. 3. The top and bottom segments correspond to regions that are outside of the hypocotyl tissue. As such we would like to remove them. Here we accomplish this using a simple area filter. Finally, an augmented image is produced to show the result of the segmentation and the number of pixels of each segmented cell. This step makes use of the pretty_color_from_identifier() function, which produces false colour images in a deterministic fashion. The png() method on the image class returns a PNG encoded byte string. One can therefore easily create a PNG file by writing this byte string to a file opened in binary mode. In this case we use it to write out the annotated image, shown in Fig. 4.
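To make the extended example concrete, the sketch below strings these steps together, reusing the threshold_abs transform sketched earlier. The specific function choices (remove_small_objects, invert, connected_components), the import paths, the method names (image(), identifiers, region_by_identifier(), from_grayscale(), mask_region()) and the area cutoff are all illustrative assumptions, not the paper's verbatim listing.

    # Hedged sketch of the extended example; names, signatures and the
    # area cutoff are assumptions for illustration.
    from jicbioimage.transform import invert, remove_small_objects
    from jicbioimage.segment import connected_components
    from jicbioimage.illustrate import AnnotatedImage
    from jicbioimage.core.util.color import pretty_color_from_identifier

    wall = microscopy_collection.image(c=1, z=31)       # cell-wall channel
    mask = remove_small_objects(threshold_abs(wall, 50))
    segmentation = connected_components(invert(mask))   # a SegmentedImage

    # Simple area filter: reassign implausibly large segments (the regions
    # outside the hypocotyl tissue) to background.
    for identifier in segmentation.identifiers:
        region = segmentation.region_by_identifier(identifier)
        if region.area > 20000:
            segmentation[region] = 0  # Region is a boolean numpy mask

    # Annotate: false colour each cell border on top of the raw data, then
    # write the PNG byte string to a file opened in binary mode.
    annotation = AnnotatedImage.from_grayscale(wall)
    for identifier in segmentation.identifiers:
        region = segmentation.region_by_identifier(identifier)
        annotation.mask_region(region.border,
                               color=pretty_color_from_identifier(identifier))

    with open("annotated.png", "wb") as fh:
        fh.write(annotation.png())

Even allowing for differences from the real listing, the sketch conveys the point: each decorated step leaves an audited PNG behind, and the segmentation classes keep the region bookkeeping out of the analysis code.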
In summary, this extended example shows that using relatively few lines of code it is possible to:

- load a bioimage and access a particular z-slice from it
- segment the image into cells, automatically creating a visual audit trail
- work with the segmentation to remove large regions that do not represent cells
- create an augmented image displaying the raw data, the segmentation and the size of each cell

Software development inevitably involves a trade-off between implementing functionality directly and including dependent code through libraries or packages. The Python-bioformats (http://pythonhosted.org/python-bioformats/) package exists to provide an interface to OME Bio-Formats, using the Python Java bridge. We chose not to depend on this package as part of our Bio-Formats integration since managing the Java bridge complicates installation and adds additional code maintenance overhead. However, as the jicbioimage code is relatively modular, it is possible to create another backend. An alternative backend that used the Python-bioformats package could therefore be created and added to the DataManager. One of the reasons for the code having this modular structure was to be able to add backends when encountering new formats not supported by Bio-Formats. To date we have had to do this once, when working with bioimages generated from micro-CT (computed tomography) experiments. We have used our tool to analyse images at a wide range of scales, from the microscopy data presented in this paper (at micrometer scale), through images of whole plant organs (at centimeter scale), to analysis of drone imaging data of whole fields (a scale of multiple meters). This range of scales also covers images captured from varied devices, including different microscopy imagers (confocal microscopes, DIC, SEM/TEM), multispectral cameras, micro-CT capture and drone-mounted cameras. In each case, the support provided by our framework in providing access to image analysis libraries, auditing and recording image transform history and allowing rapid scaling has been extremely beneficial. We have used our tool to analyse individual images as well as large collections of bioimages. One recent study required us to analyse over 400 3D images of roots stored in 14 bioimage files ranging in size from 0.5 to 2 GB. In this instance, we prototyped code for segmenting the root into cells and extracting information on a per cell basis on two individual roots. To scale up the analysis we then wrote a script that used the DataManager to unpack all the bioimage files and write out a "job list." Using the job list, it was then trivial to parallelize the image analysis. Although we are very happy with our tool, there are still areas where it could be improved. These are discussed below. Use of numpy arrays as a primary data format results in limitations on the scale at which individual datasets can be processed. For example, we have used our tool to analyse micro-CT data. Files produced by the imager can be up to 50 GB in size, at which point the three dimensional structure of the data is too large to fit in memory in a single numpy array. As jicbioimage does not load entire z-stacks into memory unless specifically asked to, we can work with these large bioimages. However, extending the functionality of our framework to include functionality similar to ImageJ's "virtual stacks" (whereby only part of the whole image resides in memory) would make this process smoother.
At the moment the histories of images are not automatically saved to disk when a script is run. However, since a “history” is essentially a list of strings it is trivial to add a step to save it. We are currently experimenting with ways in which this could be automated. One option is to make use of Python’s built in logging module and add logging of the history to the transformation function decorator. In an ideal world the history would also be stored within the image files saved to disk. This would in theory be possible using the TIFF file format. At the moment there is no automatic extraction of meta data from bioimages. Although the meta data is always available from the original file it is in many instances useful to have direct access to it from within the data analysis script. In the future, we plan to add functionality to allow this. The work described in this paper arose out of necessity. As computational scientists supporting a large number of biology groups we needed a tool that would: (1) allow us to quickly view and explore bioimage data; (2) generate reproducible analyses, encoding a complete history of image transformations from raw data to final result; and (3) scale up analyses from initial exploration to high throughput processing pipelines, with a minimal amount of extra effort. The jicbioimage Python package provides all these features as a set of loosely coupled submodules. These allow the user to read in bioimage data, transform and segment the data whilst having the history recorded both in the image objects and as a series of image files written to disk. Furthermore, the package includes lightweight functionality for creating annotated images to aid in the creation of more informative images. To date we have used the tool, and its nascent precursors, on over fifteen internal projects at various stages of the publication pipeline (Berry et al., 2015; Duncan et al., 2016; Rosa, Duncan & Dean, 2016). As the package has matured, we have found substantial gains in our productivity. Each feature has resulted in a substantial reduction in the development time for subsequent bioimage analysis projects. We feel that we now have functionality to address most of the bottlenecks in our work. As such we consider the package stable. Furthermore, we consider the package to be a time saving tool that provides provenance and is easy to use. It is our hope that the tool will be useful to both experimental and computational biologists. The framework developed here is ideally suited to any researcher who desires automation and reproducibility in their bioimage analysis.
Design, Engineering & Excitement!

- Introduction to engineering
- Koenigsegg's advanced engineering
- The straightforward solution
- Reworking the Thunderbird
- Tricks of the trade
- A styling revolution
- Tips for designers: Going retro
- Tips for designers: Chrysler 300
- The Horrors: Inexplicable styling missteps

What is engineering, and how does it work? Engineering is defined as building things via the use of scientific principles. In practice, engineering is trade-offs and fighting the limitations of our abilities and imagination. It's reducing expenses, not just of a part, but of the parts to build a part, the industrial machinery and equipment. It often means a lot of tedious and dirty work to get to a finished product. Here's a wonderful video comparing two corporations' products. It's pretty conclusive which cars are the better engineered for roadability. Just a note that there may be some bias since this was sponsored by Chrysler, but they really did have better performance and handling, at the cost of a cushy ride and usually somewhat higher noise levels. Trade-offs. Engineering can also mean recognizing better ideas and adapting them, leading us to this Swedish manufacturer. Koenigsegg's new luxury car, the $1.7 million Gemera, is a magnificent hybrid electric solution to the "cars problem," and its general concept should be implemented by all manufacturers, who need to understand a game-changer when they see it. You will ask, "How are mainstream car makers going to do that? They'll go out of business at that price!" They shouldn't imitate the price, of course, but the concept. The Gemera is powered by a three-cylinder engine of 600 HP and three electric motors with a combined 1,100 HP, for a total of 1,700 horsepower. Yes, 600 is crazy power for an engine, never mind one with only three cylinders. Note that fewer cylinders means fewer parts, which helps make it cheaper, an important consideration. But the engine still sounds bad-ass thanks to big cylinders. With more power than typical V-8s, this engine can be adapted for more or less power simply by choosing turbocharging or not. Why is this important? By having an adaptable engine, you potentially never have to design a new engine, which is a terrific savings of resources. Note also: no turbo lag at all should be noticeable when powering your ride with electric motors as well as an engine. Cooling is more efficient with three cylinders as well, and I've seen other places tout that the fewer the cylinders, the less internal friction, but more important is the compact size, which is ideal. Small size means you can optimize for passenger and cargo space. And no need for any nonsense "cylinder deactivation," though you could implement that with Koenigsegg's Freevalve system (no valve train, but electro-hydro-pneumatic controls to open and close the valves). Turning some cylinders on and off is hard on an engine, due to hot/cold spots leading to thermal stresses, for one, and therefore stupid to do. Koenigsegg's engine is revolutionary in another way. It runs on pretty much any fuel. On alcohol it produces hardly any emissions. The hybrid approach done the right way is the answer, with a natural gas-powered engine or the Koenigsegg design. Mazda is going to introduce a hybrid using a rotary engine as a range extender only, so it recognizes the importance of this type of setup.
This mindless pursuit of a switch to battery-only cars is dopey and doomed. Batteries are too marginal. The holy grail solid-state battery is turning out to be a farce. Toyota's attempt was to be released in 2021, now rescheduled to 2025 — or 2030. Or "whenever." And so what, even if it is successful? Even with better batteries there are still the problems of recharging, the sparse availability of chargers on the road, range anxiety, and the absurd weight of too many batteries. There are only two easy ways to deal with this: having the charger built into your car, or having the road itself provide a way to charge (and hope the power doesn't go out); and, ideally, cheaper and fewer batteries. With the Koenigsegg design, the solution is here today. Manufacturers need to be smart. It's also in their mandate to create the best profits for their shareholders. Smart is to take the best design available and license or emulate it. Right now, it's this advanced hybrid design. Of course, it would need to be adapted for lower cost. Easy enough; 1,700 HP is not an essential for a family car. Here's a prediction: those manufacturers that don't move towards this type of system will be the ones that go under, again. There's no other feasible way to meet all the "green" rules. Due to the overwhelming incompetency in almost all management these days, they'll mostly bungle it. The incompetency can be attributed to sins such as corruption, arrogance in ignorance, "diversity quotas," and so on. There are very few truly competent people in any organization. There seem to be many who are talented in a specific area, but fall apart when big changes are required. But a main problem is managers who are promoted solely through nepotism or for their bossiness, suckholery, and unquestioning obedience to superiors. In sum, the whole electric car push is a farce. It can't work on the path it's on now, with only batteries. Yet a simple switch to a hybrid using a small, clean-burning engine is the straightforward solution. Did Chevy need V-8s in 283, 305, 327, 350, 396, 409… cubic inch displacements? Well, at the time, for different applications, it may have made sense. Today, one engine can be used over a wide range of applications, cars and trucks, with only tweaking and optimization going forward. Power levels are, as mentioned, controlled by adding turbo(s) or even a supercharger, or by changing the electric motor configurations. The cost saving is tremendous for a car company to never have to design another engine, or at least to delay it much longer! Hybrid is the only practical solution. Why would a company want to subject its customers to the looming risk of running out of juice and long charge times if it doesn't have to? An absurd ~50 MPG fuel economy regulation for cars is coming up, a physically impossible requirement. Understand that it is, like all these types of demands, mostly phony, and government lackeys do it constantly to get payoffs/bribes/graft in order to later roll back these pie-in-the-sky edicts. Again, there's also the intention to get us out of private transportation altogether.

And More, Yet More Bailouts?

There are lots of recent videos predicting yet another GM bust. Yet, defying all reason, almost all the automakers are suddenly going to pure electric, taunting plain logic. These imbeciles don't realize, or care, that they're putting themselves out of work. We have to assume they've been promised something special when the time comes.
As usual, GM provides a humorous note: they tried, with the Volt, to do something clever with hybrids, but failed miserably. Too expensive, too ugly, not a very good car. Oddly, it seems to have been designed similarly to the excellent Prius, so they probably screwed it up with typical GM cheapening and corner-cutting. Turning now to design, let's take a look at Ford.

The Ford Motor Company

Notice the weird way everything is adulterated in foods, to the point now where it's hard to find something like a simple juice without artificial sweeteners and other additives? And then, if you do find reconstituted a.j., for example, it's extra-diluted with water, a new scam to hide the cost cutting. I doubt it's all inflation, especially when wages are stagnant and all business is consolidating into monopolies, with monopoly price fixing — and vertical integration. They just want enormous, ever larger profits, using inflation as an excuse, so they dog-pile on us poor suckers.

"Retro-modern" Ford Thunderbird

The 2002 Thunderbird is a good example of this adulteration in automotive terms. This blob design was in keeping with the times — even Mercedes utilized this type of ugly sculpting. When it was new, it seemed fresh, but the utter laziness of these smoothed and simplified designs doesn't hold up. This was produced when the so-called jellybean was their ultimate expression of "modern." The shaved, contoured styling ruined a lot of cars, like Mercedes, because it ultimately looked cheap. And it is. That ugly period in design was ideal for the penny-pinchers at Ford. They thought they really had something with that chintzy little knock-off, that Playskool simulation, but it wasn't. Not a real Thunderbird. The original was classy, with substance. The new one, which may have been exciting at first glance if you didn't have anything to compare it to, is a disappointment. We've seen that horrible catfish maw somewhere else before… but where? Where? Ah, yes: Studebaker Hawk! If anyone is still fooled by that T-Bird, an evaluation reveals too much flex on bumps, two-seat impracticality, reduced trunk space, and a lousy convertible top that gets creased up and mangled in its storage area, among other problems. If the Chinese had done this, what would our thoughts be? Probably less favorable: that it was a mocking imitation. Based on the Lincoln LS model, mediocre at best, the 2002 Blunderbird came with self-inflicted problems that were completely unnecessary, but are so typical of Ford. It was based on an expensive/cheap platform — poorly engineered, but probably expensive for Ford due to poor effort and execution. It was space inefficient, and only a two-seater (in this iteration), when they already knew a four-seater is required for much better sales. That lousy, awkward convertible top was inexplicable from a company that knew how to successfully tuck an entire solid roof in the trunk back in the 1950s! The car flexed like Gumby and was basically a piece of plop. Chrysler made similar mistakes with its Prowler, and inflicted another: it only ran a V-6, not an eight! But they at least had an excuse — it was a styling exercise meant to test the use of aluminum in their cars, and Ma Mopar never expected or tried to sell many. One thing about convertibles, though. They are criticized for being noisy and having less protection, but there's no reason now, with carbon fiber and better insulation, why many more cars can't be hardtop convertibles.
It's simple prudence, since convertibles hold their value better. With some flex engineered into the solid tops, they could be designed to easily fit in the trunk space. There should be more on the roads. People who don't like being wind-buffeted seemingly have forgotten to put the top down only for low-speed cruising. There's something noticeable with the car companies: they produce something "luxurious" when it isn't, but would make a very nice lower-tier car. Now give the 2002 T-Bird another look. The vehicle would have made an ideal different model, with a few tweaks to the front and rear clip and rear fenders. Make it a four-seater (easy enough; the platform it's based on was). And strengthen the chassis so cutting the roof for a convertible doesn't turn the car into a stick of warm butter. Mad Magazine didn't suffer Fords lightly. Let's also take another look at that old bucket, the "Furd Foulcar," which could be resurrected quite nicely. Now, don't look too hard at this mashup with the Falcon face grafted on the 2002 T-Bird. It's just a crude paste-over; we're not impressing anyone with graphic artistry. But it looks about 10,526 times better anyway. Authentic, whereas what they put on the road is artificial. The original Thunderbird was detailed and artful; the lazy 2002 washout is a caricature, not an homage. You might want to modernize the Falcon front clip somewhat, but it looks just fine as is. The chrome bumper looks good too, or it might also look nice in gloss black. Why not bring that type of bumper back for some cars? They were way more effective too. Well, you know why: they save sixteen cents a car or something without them. Yes, cars now have "safety standards" that limit their design choices, but those standards are a bunch of noise. Nothing in the "standards" seems to prohibit the worst offenders from being lumbering behemoths, like Cadillac Escalades, tearing up and down the roads. Here's something delightful: make those target sights on the front fenders the turn signals, and the amber turn signals in the bumper the fog lamps. It looks like the Stude already did this trick with the turn signals.

A Word about Ego & Tunnel Vision

They set out to design a "luxury-sport" car with the 2002 Bird, and didn't realize, in their ego, that they failed at both luxury and sport. However, they did a design for a different car that isn't bad, as we see, when it is repurposed. But again, in their ego, they never thought to do that. Well, it's not necessarily ego, but tunnel vision. They were fixated on a "sport luxury design," and it didn't work out that way. Or, they considered their instincts flawless, and were wrong. Strangely, it looks like there was already the influence of "old Falcon" in their work, though. It's a funny thing. How do you set out to "design a luxury car?" What's fun is to look at Chinese designers' ideas of "luxury," or old Russian Volgas, for your dose of "people not clear on the concept." The same thing happens on occasion with American, German and Japanese products, though usually not so woeful. We could make the Falcon a little longer, as seen here. But that would make it look less of an economy car in this case, so we reserve the slight front lengthening for the Mustang, as you can see. It's a design trick, and an economical way to share a platform, the important dimensions of a car, while making it appear they are different cars.
It's cheap to extend the front clip out as you like ahead of the front wheels, whereas stretching the distance between the wheels, or between the front wheels and windshield pillar, is costly. Speaking of design tricks, note the positioning of the headlights, high on the fenders of the 2002. Part of what makes a modern car look modern was introduced in the 1960s by GM, by lowering the position of the headlights. So yes, in a way the front face of the 1963 Falcon looks more modern than the 2002 Bird.

Explaining the Mustang

The 2002 T-Bird design suits the Mustang too. In fact it's a massive improvement over all the modern Mustangs, which are too chunky and heavyset, and overdone. The Mustang was originally successful because it was relatively inexpensive, because it was a Falcon. Nowadays, to drive proudly home in a new Mustang costs about 50% more money in real terms. That is, the thing is overpriced. Ford might tell you, "Oh, we couldn't do a Falcon based on the LS platform, it would cost too much." BS. They designed the platform poorly and inefficiently, that's all, and tried to recoup losses by selling cars on the platform for an astronomical sum. This use of the LS platform for the Falcon and Mustang would have made for an efficient economy of scale. The first strip below provides a comparison of the old vehicles: we have the '57 T-Bird, '63 Falcons and the '66 Mustang… looks like a Shelby with some mods? Well, it's a classic Mustang face with a few embellishments. The second strip compares the 2002 Bird, short and longer Falcon mock-ups, and the Mustang mock-up. There's another reason they wouldn't make this more elegant-looking, stylish Falcon, though. Modern cars are priced in the ionosphere, yet they often give you only crap for your money, especially if you aren't shelling out the big bucks. It's yet another slap in the face. It's incomprehensible to me how a car can be priced at an average annual salary after taxes, just for a "family sedan," never mind insurance, gas, oil and maintenance, cleaning, parking, tickets… You know, people might like to have a luxurious-looking car even when they spend less. It's still a lot of money, even for an economy car; why shouldn't they get something that looks decent? It doesn't cost any more to provide luxurious styling cues, but they want to rub people's noses in it if they only pay for an "economy car." Another industry trick, of course.

The '50s and '60s Styling Revolutions

Chrysler started a design revolution with its 1957 cars that were "longer, lower, wider," shown in the video above. The streamlined, sophisticated cars were a complete, refreshing modernization from "bowler hat" styling. But they didn't know where to go from there, and simply kept doing variations on the same theme. It took GM to step forth with the idea to simply lower the headlights into the grille and push the wheels outward, closer to the sides of the car. Pontiac took advantage. It advertised the "Wide Track" look of its cars hard, and they did look fantastic. All cars today carry the influence of those breakthroughs from Chrysler and GM.

Design Tips for the Manufacturers

Here's a design tip: if you must go retro, you need some frills and frippery, the sculpting, heavy chrome and fine details those early models possessed. A second thing, and a very important point they really should take heed of: they should, but don't tend to, produce multiple designs at once. Case in point: the Chrysler 300.
They should have had freshenings, updates and redesigns in the bag for the next decade at least before release of the new 2005 model, instead of freezing up and finally, much too late, releasing a tired, lukewarm effort that diluted the original appeal of the vehicle. By doing your new design, plus several updates, in advance, you have enough to do updates for the next 12-15 years, or 4-5 refreshes. Styles will have changed by then, and then you can do a larger revision. They obviously didn't do that with the 300, and sales suffered needlessly. In the case of the 300, they already have an ideal form for a large car, and they could get away with making fewer, minor changes at each update. They say it's like a Bentley; why not embrace that and use a few more cues from that make? The aftermarket had already been playing with the grille since almost the beginning. But adding the Bentley headlights makes for an interesting change, too. For those wondering, it isn't a case of hindsight being 20/20 or other BS. It was apparent at the time that most car flops would be flops. You think anyone thought the Edsel was attractive? "My, what a beautiful car," people were lining up in praise? No, it made people avert their eyes, or shake their heads in disgust! Jokes abounded regarding the Edsel. Except for the Ford fanboys and die-hards. If you served them up plop on toast and put a Ford blue oval on it, they'd buy it. It seems every make has its indiscriminate boobs who will buy its crap, no matter what.

- the Cimarron – perhaps the worst of the bunch, this rebadged Cavalier almost destroyed Cadillac as a business
- the Edsel – Ford was staggered by the billions in losses on this, too
- the '62 Dodge
- the '60 Plymouth
- the '61 Plymouth
- the '60 Ford
- the '58-'60 Lincolns
- the '59 Chevrolet

Oh, there were many more disasters, but not many that flaunt themselves so shamelessly. Some people don't care for some of the small cars that came out in the '70s, like the Pinto, Vega and Mustang II, but they were somewhat attractive cars that could've and should've been salvaged. They were disasters in their own right, but if the companies had committed to improving them, they would have been a decent response to the Japanese invasion that devastated the domestic makers. Cadillacs were mostly just gilded Chevys, sometimes too obviously. The simple problem with Cadillac, one that persists to this day, is that they never took it seriously past the 1960s. You can't have a Chevyllac in this day and age, when there is Mercedes and there is Lexus and a number of other quality marques. At the near-death moment, they decided to "save the brand," and started an initiative to make "sporty Cadillacs," but didn't commit the budget or the common sense to do it properly. It was the GM "lipstick on a pig" approach. And, crucially, Caddies were never intended to be sporting. What was needed was an effort from scratch, handled the way Toyota did it when starting Lexus: produce an initial, superior car, price it to be an unusually good bargain, and build the brand from there, adding new product when suitable. Now, it's pretty much just the Escalade that's keeping them above water. The 1982-88 Cimarron was their biggest fiasco, but there were others, and you could include all of their sedans and coupes for at least the last 10 years, which just never caught the public's fancy. Sometimes the Cadillac division really tried and just couldn't excel.
This GM blunder, produced for the 1987-93 model years, wasn't as catastrophic, but was stupid in execution and shouldn't have been made. They spent too much money building it overseas and shipping it back from Europe. Plus it had its share of woes, was only a two-seater, and was too expensive. It only broke even, if that, but perhaps would have been an interesting concept car, with an eye to a sporty and/or smaller Cadillac to be produced later as a Seville coupe. Here is an instance where they did something right: in 1975, they tweaked and re-skinned the Nova, hung a presumptuously huge price tag on it, and called it a Cadillac Seville. In this case, the Nova wasn't so bad, so the Seville wasn't a failure. In fact, the Nova Seville was a coup and, due to some diligence, made a decent smaller, cleverly styled Cadillac. Ford's laughable antics need mention, too. The tarted-up heap called Versailles was Ford's Cimarron, but the puny effort failed, rightfully. It was almost indistinguishable from the donor Granada (also presented for sale as the Mercury Monarch) and only lasted from 1977-80. There are no "problems" when there are solutions. "Renewables," "car pollution," and the like are now proven non-issues. That the solutions aren't implemented means there is no will to implement them, not that there is a chronic problem. We've already visited some solutions in previous blogs, like how hemp oil provides a limitless, endlessly renewable source of oil. Which would be good, even though oil isn't running out and cannot run out in 20 years, or 200 years. Recall that the oil already has run out, according to failed past "predictions." But remember: there is no end to the hemp oil that can be grown, and the stuff is a weed (heh-heh, weed) that grows almost anywhere. And no end to the alcohol that can be produced. And those fuels are cleaner burning. In fact, the Koenigsegg engine cleans the air as it runs (in already polluted places, mind you, but beggars can't be choosers). As to the matter of design, it looks like the manufacturers still have a lot to learn. They stumble around, basically waiting for some talented genius to come rescue them. Guys like Harley Earl, Bill Mitchell, John DeLorean, Virgil Exner or Elwood Engel. It's amazing that these companies can endure for 100+ years, yet they seem to retain no accumulated knowledge over those years that would make their work easier and more effective. They still bungle and stumble their way through, hit and miss, like drunken toddlers. Well, that's another one of the weird follies of life, apparently. (Updated July 1, 2022)
Building Writing Skills, Critical Thinking and Teamwork through Technology and Revision—Megan E. Williams (2008)

Using technology, revision as part of the writing process, and group work, American Studies students learn to question dominant beliefs concerning American identities. American Studies 110: American Identities is an interdisciplinary introduction to the interplay between individual and group identities in American society between World War II and the present. The aims of this course are to introduce and engage students with ways of exploring and understanding American identity, as well as introduce them to thinking like an American Studies scholar. AMS 110 fulfills the university-wide Society and Culture requirement and is typically taught as a large lecture course. This section was an eight-week summer course and enrolled seven students. This portfolio focuses on three assignments: individual Blackboard discussion board posts, opportunities for revision, and a group wiki essay. Six times throughout the semester, students posted reading summaries and discussion questions on Blackboard. My hope was that these assignments would promote student questioning and encourage students to cultivate important reading and writing skills. I decided to encourage students to revise their written assignments based on the rubrics I used to grade them. My hope was that students would come to view writing as a process, rather than a one-time demonstration of knowledge. I also decided to try a new final assignment, a group-based reflection essay on a course wiki asking students to make connections across texts to answer essential questions, one of the most important aspects of thinking like an American Studies scholar. In their Blackboard posts, students consistently raised thoughtful questions that we then used to direct in-class discussions. By requiring students to generate summaries and questions before class, I was able to gauge understanding and interest and ensure that each student's ideas, interests, and questions were addressed. The students who elected to revise their writing assignments were able to improve both their theses and their ability to sustain and support their arguments. Unlike those students who opted out of revising, those who revised seem to have left the course with the understanding that revisions are an essential part of the writing process. For the final wiki assignment, one of the three groups worked together to craft an excellent essay that made important connections across various texts from multiple disciplines. The other two groups were not as successful. I believe that most of my students left the course with tools for improving their critical speaking, viewing, reading, thinking, note-taking, and research skills. Still, there are several ways that I might alter the course to further clarify the connections between my goals and student performance. The next time I teach this course I plan to continue to use discussion board posts, but I might also require that students respond to their peers' posts prior to class to increase engagement before and during class. I would also require students to revise their projects to further emphasize the importance of critical writing as a process. At this point, I am still considering what to do about the group-based wiki project. Perhaps constructing larger groups and allowing each group to "vote off" a student who is not contributing would encourage students to take the assignment seriously.
American Identities (AMS 110) is an interdisciplinary introduction to the interplay between individual and group identities in American society between World War II and the present. Throughout the semester, we read, view, listen to, and discuss materials that explore different ways of understanding American identities (ethnicity, race, religion, gender, sexuality, region, class, and age) and contemplate the degree to which our identities are socially constructed. Using visual culture, memoir, fiction, ethnography, music, television, and film, we pay special attention to how issues of popular representation and histories of social movements for individual and group rights intersect with identity formation.

Although this was my fifth semester teaching a version of AMS 110: American Identities, it was my first experience teaching during a summer session. Typically offered during the fall and spring semesters as a large lecture course enrolling approximately 300 undergraduates of diverse backgrounds, ages, and majors, my Summer 2008 section of AMS 110 consisted of seven students: two American Studies majors, one prospective major, and four students seeking to fulfill the university-wide Society and Culture (SC) requirement.

My main goal is to teach students to think like a beginning American Studies scholar by:
- Creating assignments that help students develop critical speaking, viewing, reading, thinking, note-taking, library research, and teamwork skills that are applicable to life both within and outside of the university; and
- Asking students to question "common sense" or dominant understandings of gender, sexuality, ethnicity, religion, class, history, the United States, and popular culture.

Ultimately, I hope that my students will use the knowledge and skills gained in my American Studies classroom to become partners in creating what scholar Michael Cowan describes as "a just, creative, and humane 'America'" that, in the words of bell hooks, "celebrates diversity, welcomes dissent, and rejoices in collective dedication to truth."*

*Michael Cowan, "American Studies: An Overview," online via Encyclopedia of American Studies, The Johns Hopkins University Press (2005), http://eas-ref.press.jhu.edu/view?aid=524&format=print (accessed August 21, 2006); bell hooks, Teaching to Transgress: Education as the Practice of Freedom (New York: Routledge, 1994), 33.

- How might a teacher use technology (in particular the discussion board forums and wikis available through Blackboard) and out-of-class time to teach students skills critical for thinking like an American Studies scholar?
- How do opportunities for revision after written feedback and optional one-on-one writing consultations alter students' understanding of the writing process? Does having the opportunity to revise and resubmit their work after a review process help shift students' notions of writing from a one-time demonstration of knowledge to a process of revision and argument development, much like that used by American Studies scholars?
- How does a group-based, cumulative wiki project contribute to students' working together to make connections between texts in order to answer essential course questions (listed on the syllabus and discussed throughout the semester)?

Using Blackboard posts outside the classroom to build reading, writing, and thinking skills

Six times throughout the semester, I required students to post on Blackboard.
Each post consisted of 25-word summaries of the main argument, theme, or thesis of the assigned reading(s) and discussion questions which were used to provoke student discussion in class (Bb post instructions). The students were divided into groups, and each group posted on the same dates throughout the semester. Students were required to post their summaries and questions to the discussion board under the correct forum on Blackboard by 11:59 pm on the night before class; every member of the class was required to read her or his peers' summaries and questions prior to class. My hope was that these short summaries of readings, paired with student-developed discussion questions, would promote student questioning and encourage students to cultivate important reading and writing skills that are crucial to thinking like an American Studies scholar, such as reading for a purpose, identifying arguments, and writing succinctly. On a practical level, they would also require students to read and think about the course texts prior to coming to class and to generate questions for class discussion that originate from them and their peers. Students were provided with grading criteria and examples authored by prior students.

Using project revisions and individual consultations to build writing skills

Throughout the course of the semester, each student was responsible for completing two individual writing projects designed to develop and refine research, writing, and analytical skills crucial to "doing" American Studies and meant to prepare them for upper-level American Studies classes. There were five possible projects and corresponding rubrics:
- Document Analysis Assignment (pdf)
- Document Analysis Rubric
- Library Database Assignment
- Library Database Rubric
- Oral History Assignment
- Oral History Rubric
- Annotated Bibliography Project and Grading Standards
- Four Freedoms Assignment and Rubric

Each student was required to complete two of these projects. In prior semesters, I had not allowed students to revise their projects for a better grade. This semester, I decided to encourage students to revise their projects using the completed rubrics and my additional comments. If students were unsatisfied with their project grades, I suggested that they meet with me during my office hours for an optional one-on-one consultation to discuss the grading rubric, my notes, and possible revisions; students were able to revise without this consultation if they felt comfortable implementing my suggestions. Students were given one week to make revisions to their project, incorporating and responding to my feedback and rubric comments as well as our discussion, for a new grade.

The two out of seven students who opted to revise their projects were able to significantly improve their grades while learning important writing strategies. Often, undergraduates are given an assignment and it is returned to them with instructor feedback, but without the option of incorporating that feedback into a revised paper. Revising an essay or manuscript to incorporate peer feedback is a crucial step in the writing process of American Studies scholars. My hope was that students would come to view writing as a process with several drafts, where one hones and defends an argument based on peer or instructor critique, rather than a one-time demonstration of knowledge.
Using a group-based, cumulative wiki project to build teamwork, thinking, and writing skills

In past offerings of this course, I had never required students to demonstrate their ability to make connections across the semester's topics or to attempt to answer the course's essential questions in a cumulative way. Believing that making connections across texts in order to answer "meta" or essential questions is one of the most important aspects of thinking like an American Studies scholar, I decided to try a new final assignment: a group-based reflection essay on a course wiki. In groups of two to three, students were required to write a joint essay that made meticulous, well-cited use of course texts to answer one or more of the essential course questions outlined in the syllabus (Group Wiki Assignment and Rubric). By working together in groups, I expected my students would gain an important skill applicable to their lives regardless of their chosen major/minor fields—that of discussing culturally sensitive issues and negotiating differing opinions in order to craft a joint project. I also hoped that this project would help students to make connections in groups that they might not arrive at alone.

Overall student performance

The average grade in the course was a B.

Examples of Blackboard posts

Example of high-level work (Student A): 50/50 points (advanced)—Summaries accurately reflect the reading(s)' main thesis, theme, or argument and do not exceed 25 words each; questions invite discussion and conversation and are grounded in specific readings (and provide citation):
- Nelson indicates that we learn more about Steele's life and motivation than Obama's in Steele's book, labeling his ideas about race relations old-fashioned and irrelevant. Q: On page 2, Nelson argues that "'Black Americans' slow embrace of Obama's candidacy was strategic and pragmatic." What does she say is the reason for this?
- "Bind I: The Discipline" discusses the importance of maintaining the mask of Bargainer, Challenger or Iconic Black to uphold the contract of Black redemption and White absolution. Q: On page 111, what does Steele say is the "third rail" of American race relations? How does this third rail change the redemption/absolution equation?
- "Bind II: Is He Black Enough?" proposes that Obama as a Bargainer may alienate Black Voters, and Obama as a challenger may alienate White voters. Q: On page 126, what aspirations does Steele refer to when he says that "Obama is a bound man because he cannot serve the aspirations of one race without betraying those of the other?"

Student A's work met all of the requirements for the assignment. She succinctly highlighted the main argument of the assigned chapters and created thoughtful, textually based questions.

Example of lower-level work (Student B): 45/50 points (good)—Either the summaries accurately reflect the reading(s)' main thesis, theme, or argument; the summaries exceed 25 words each; or the questions invite discussion and conversation but are not all grounded in specific readings (does not include citations).
- "Bind I: The Discipline" talks about the discipline which is the muscle that enables the masks of both bargaining and challenging to work, bargainers, and iconic negroes. Q: Why did Shelby Steel [sic] say "And yet, black responsibility is the third rail of American race relations."? (page 111) What does that mean?
- “Bind II: Is He Black Enough?” talks about Obama, the first black to bargain his way to national political importance, works entirely within the current configuration of race relations. Q: What does it meant that “he is bound by the same racial configuration that he has exploited.”? (page 127) - “Identity Politics” argues that to comprehend the phenomenon, people have to understand precisely what an identity is. Moreover, Nelson pionts [sic] out people can know more about this writer than in Shelby Steel’s [sic] book. Q: Why and how can some of Kennedy’s interpretations be deeply disturbing? Student B’s work earned a 45/50 because, although two of his summaries and questions met the requirements, his final summary and question did not. Overall, students performed very well in this component of the course. Students consistently raised thoughtful questions that we then used to direct our in-class discussions. The student-generated questions were used in the classroom in a variety of ways. In some instances, I chose particular questions posed by students online and combined them with questions that I crafted. Many times, students would think about and discuss several questions in pairs before we would discuss them as a whole group. I found that giving students time in pairs to contemplate the questions and then giving them the opportunity to share their initial discussions with the larger group provided a useful springboard for discussion. By requiring students to generate summaries and questions before class, I was able to gauge student understanding and interest. Additionally, the student-generated posts assured that each student’s ideas, interests, and questions were addressed over the course of the semester. Examples of individual projects and revisions Example of high-level work (Student A, Mini Oral History Project): Student A’s original project (pdf) had an interesting and clearly stated thesis, but did not adequately sustain the argument and lacked clear connections between her analysis of oral interviews and use of supporting quotes from the required texts, resulting in a grade of B+. I provided Student A, an American Studies major, with a typed summary of my feedback as well as written comments on the paper and a completed grading rubric. I offered Student A the opportunity to meet with me to discuss possible revisions. She felt that my feedback was clear and that she was capable of incorporating my critique in a revised paper without a one-on-one consultation. She was given one week to revise her essay. Her revised project (pdf) earned an A. Example of lower-level work (Student B, Double Victory Project): Student B’s original project (pdf) lacked a strong thesis, adequate support, and organization. I returned the paper to him with extensive comments and questions, as well as a completed grading rubric. I offered Student B a writing consultation during my office hours and the option to revise his paper. Student B, a foreign student majoring in the sciences, had admittedly little experience crafting an essay and gladly accepted the opportunity. Using a visual writing tool—a template (pdf)—we discussed the structure of a well-argued essay and, by asking him questions and using existing ideas from the original paper, we filled out this template together. Student B was given one week to revise his paper. His revised paper (pdf) earned a B, a full two letter grades higher than his original paper. 
Both Student A’s and Student B’s writing improved with the opportunity to revise their projects. Responding to my written suggestions and critiques, Student A retained her thesis but better sustained her argument throughout the paper, integrating supporting quotes with her analysis of primary and secondary sources. Student B’s work improved significantly following our one-on-one consultation. In his revised essay, he offered a clear thesis and support for his argument—both lacking in his initial draft—and demonstrated increased competency. Unlike those students who opted out of revising their projects, Student A and Student B seem to have left the course with the understanding that revisions are an essential part of the writing process. Grouped together with another student in Group 2 for the cumulative wiki essay, Student A and Student B each asked me (of her/his own volition) to look over drafts of the final wiki essay prior to its due date. They each incorporated my suggestions in their final draft, and their group essay represented high-level work. Examples of wiki essays with the graded rubric associated with this work Example of high-level work (pdf) (Group 2: Students A, B, and X): Overall, I was very impressed with Group 2’s level of cooperation and their overall final product. Throughout the process, they collectively wrote a cogent project proposal, made productive use of scheduled in-class time to work together on their project, constructed several drafts, and brought a completed draft for peer feedback (using the grading rubric) to a class session set aside for this purpose. Their final paper was a pleasure to read—it presented a clear thesis with consistent and persuasive support, incorporating the required number of texts and making connections across different course units. It earned a 200/200. Example of lower-level work (pdf) (Group 3: Students C and D): In general, I was disappointed with Group 3’s level of work, particularly on the part of Student D, throughout the course of the project. Group 3’s project proposal did not meet the assigned requirements. In my typed feedback, I offered suggested sources and pushed them to clarify their thesis. Although Student C made productive use of scheduled in-class time, working on her portion of the essay, Student D neglected to attend this class session. Student C approached me and told me that Student D was not responding to her emails regarding the project and had not been contributing his share. I told Student C to continue to contact her partner and to work on her portion of the project. Student D also neglected to attend the in-class grading workshop or provide his portion of the essay, resulting in a largely unproductive workshop for Student C, although her portion of the essay was critiqued, using the grading rubric, by me and her peers. In the end, Student C submitted her portion of the project, which followed the assignment and her group’s outlined proposal, on time. Student D created his own separate wiki, which he turned in late, and which did not meet the requirements or follow his group’s project proposal. Ultimately, I graded Student C’s contributions to Group 3’s unfinished essay individually, judging her portion as A- work. 
Whereas Student C attempted to craft her portion of an essay—which included a clear thesis statement and two well-written supporting paragraphs using half of the sources required for the entire project—Student D did not meet the requirements laid out in the syllabus, project description, and grading rubric, earning a failing grade on the project.

Overall, I was satisfied with many of the course outcomes; I believe that the majority of my students left the classroom with tools for improving their critical speaking, viewing, reading, thinking, note-taking, and research skills, as well as a new appreciation for questioning the status quo and an increased sensitivity to issues of multiculturalism in American society. Still, there are several ways that I might alter the course to further clarify the connections between my goals and student performance:
- The next time I teach this course I plan to continue to use discussion board posts as a way to teach students important writing and questioning skills, but I might add a virtual response component as well, where each student is also required to respond on Blackboard to at least one of their peers' questions before class. I feel that this added element would oblige students to think about the discussion questions before coming to class and could possibly lead to further student-generated questions and discussion topics.
- I would also require students to revise their projects. I believe that this requirement would further emphasize the importance of critical writing as a process.
- At this point, I am still considering what to do about the group-based, cumulative wiki project. Whereas one of the three groups, Group 2, was able to work together to craft an excellent essay that made important connections across a variety of texts from multiple disciplines, the other two groups failed to complete the assignment. In both Group 1 and Group 3, one of the students was essentially abandoned by a partner: while the remaining student crafted her or his portion of the essay, the partner did not comply with the assignment. This made grading difficult. Ultimately, I decided to grade each partner individually. Perhaps constructing larger groups and allowing each group to "vote off" a student who is not contributing her or his share, resulting in a failing grade for that student, would encourage students to take the assignment seriously.

Click below for PDFs of all documents linked in this portfolio.
- Williams portfolio
- AMS 110 syllabus
- Course goals
- Bb post instructions
- Bb post grading criteria
- Document analysis assignment
- Document analysis rubric
- Library database assignment
- Library database rubric
- Oral history assignment
- Oral history rubric
- Annotated bibliography project and grading standards
- Four freedoms assignment and rubric
- Group Wiki assignment
- Group Wiki rubric
- Student A original project
- Student A revised project
- Student B original project
- Student B revised project
- Revision template
- Example of high-level work
- Example of low-level work
Judy Fakes – Ryde College TAFE, NSW

This paper summarises the critical stages of managing the heritage landscapes and trees of the future, from planning, site analysis, species selection, stock selection, planting, establishment, and maintenance to removal and replacement. This paper focuses on a landscape approach rather than the management of individual trees. It also includes a bibliography that may be a useful resource.

Across Australia, many of our 19th Century and early 20th Century landscapes and streetscapes are in decline. In Sydney these include Centennial Park, Hyde Park, the Royal Botanic Gardens and the Domain. Usually, the most visually dominant elements in these landscapes are the trees. Many Avenues of Honour have been lost or compromised through radical changes to their environment. In 2004, the removal of eleven trees, including four 140-year-old Moreton Bay Figs, from Sydney's Domain stimulated a wide-ranging debate within the community. The reason for the removal was to plant 33 new trees. This was the debate that we had to have, as in order to sustain the visual amenity of our landscapes, tree removal and replacement is inevitable. The critical question seems to be when, in the life of a tree or a landscape, this should occur. If future generations are to enjoy the style of landscapes that we have come to know and love, then management decisions must be made for the life of the tree, the life of the landscape and the life of the manager.

STAGES OF THE TREE

Trees go through a range of physiological stages during which growth rates change and the form of the tree develops. The essential requirements of trees remain the same throughout their life. Trees must have adequate supplies of light, water, nutrients, soil oxygen and carbon dioxide; they must be adequately supported and have a reasonable temperature range in order to maintain health and vigour for as long as possible. In unnatural and constructed landscapes we must provide most of these resources. As trees must continue to grow new leaves, transport tissues and roots in order to stay alive, the provision of these fundamental resources must be ongoing. The ability of a tree to cope with shortages of resources will to a large extent depend on its stage of life.

As trees mature, growth slows. A tree in its late stages of life (over-maturity/senescence) produces fewer leaves on shorter shoots, and hence less sugar is produced by photosynthesis. The cambium becomes less active and fewer transport cells are produced. This has an impact on the volume of sugars and water that can be transported throughout the tree. If injury occurs, the transport system is increasingly vulnerable to disruption. If fewer roots are produced, less water and fewer nutrients will be taken up. Older trees also become more susceptible to secondary pathogens if injury occurs. Whilst trees have an automatic response to injury, its success depends on stored sugars and the vigour and vitality of the tree. The consequences of wounding are worse if wounds are made into heartwood. As growth slows, less sugar is allocated to storage and to the defence process. Similarly, old trees are more susceptible to drought and compaction, two very common stress factors in urban landscapes, than young vigorous trees. However, for a tree to get old and make a major contribution to a landscape, it has to live long enough! There are many stages of a tree's life where it is vulnerable to damage.
The best and most cost-effective approach to tree and landscape management is to avoid problems in the first place.

Planning

Perhaps the most critical stage in the long-term success of a landscape is the planning phase. Critical questions to be asked include: what do we want, and for how long? The eventual removal of trees should be planned for. As most landscapes outlive the people who design and manage them, the development of tree/landscape management plans should be an integral part of this process. This requires an inter-disciplinary approach and must consider available resources as well as elements other than trees that may have an impact on trees.

The Tree Masterplan for the Centennial Parklands is a good example of this approach. It sets out:
- principles and strategies for the conservation of the existing tree population;
- a framework for the sensitive integration of new tree plantings into the historic fabric of the Parklands; and
- management and maintenance approaches to strengthen and sustain the tree population, and ultimately the Parklands themselves, into the next millennium.

A team of consultants prepared the Tree Masterplan; the team included landscape architects, heritage consultants, an arborist, a botanist, a fauna specialist and a soil specialist. The project was overseen by a Steering Committee consisting of landscape architects, an arborist, a botanist, and members of staff from Centennial Parklands. Preparation of the Tree Masterplan involved identification and mapping of landscape character types created by trees, such as avenues, forests and so on. Detailed studies of the issues that affect the existing and future trees were undertaken. These detailed studies covered heritage, design, environment, age and condition of trees, arboricultural practices and habitat. Management precincts and sub-precincts were determined.

A heritage study provided an historical and cultural assessment of the tree population, including details of significant plantings, a timeline of planting periods and a list of successful and/or failed tree species. The design study identified significant vistas, species and plantings. It also analysed the major planting types and patterns of tree species in order to generate definitions of the Parklands' landscape character. A review and analysis of environmental conditions, particularly soils, indicated the way in which natural forces and human impacts have dictated species selection and the health and performance of existing established plantings. A Safe Useful Life Expectancy (SULE) analysis of the tree population revealed the precarious physical condition of many of the Parkland trees and the increasing importance of implementing a tree replacement programme. A brief overview of arboricultural practices highlighted the opportunity and need to improve existing (or implement new) techniques in order to achieve the recommended landscape character. A review and assessment of the existing habitat values of the Parklands was recommended in order to integrate native fauna priorities with tree management practices.

Tree/landscape master plans provide current and future managers with the rationale behind the design and management decisions as well as a guide to implementing appropriate management practices.

Site analysis

This is a critical stage that is often done very poorly. It is the stage at which constraints and limitations are identified. Most of the constraints will be below ground, including the depth of drained soil.
Advances in technology and knowledge mean that many of these constraints can be overcome. Some examples include the use of interconnected planting pits and gap-graded or structural soils. The use of these soils can reduce the impact of compaction.

Species selection

There are many species of trees that can be used to create a particular landscape character. It is essential that designers work collaboratively with arborists and horticulturalists in the process of species selection. It has become very clear over the years that some of the species selected by some of the most influential figures in Australia's public landscapes, such as Charles Moore, Ferdinand von Mueller, Joseph Maiden and Walter Hill, have certain intrinsic problems. Several species of Figs have structural problems; others become infested with insects that cause problems for the trees and site users. The development of problems over time is almost inevitable given how little domestication our native species have undergone. Apart from the desirable physical attributes, species must be assessed for structure, susceptibility to pests and diseases, tolerance of urban environments, drought hardiness, growth rates, longevity and maintenance requirements. The latter includes pruning requirements and the cleaning up of shed parts such as leaves, fruit and bark.

Stock selection

The latest edition of the Natspec publication Purchasing Landscape Trees: A Guide to Assessing Tree Quality is to be developed as a new Australian Standard. This is an excellent guide to specifying good quality root systems and above-ground parts that are in balance. It also includes useful guides for ordering trees and working with growers, as well as a compliance checklist. It is not sufficient to specify "as per Natspec"; the guide must be driven and applied thoughtfully. Poor quality root systems are still a common cause of failure to establish and perform. Trees must be self-supporting in their containers if they are to stand up by themselves in the ground; hence, trunk taper is another important criterion to specify. Unfortunately, planting and installation specifications almost always detail staking! If self-supporting trees are installed, these details can be omitted. Tree protection may still be required, but trees do not need to be attached to stakes or guards.

One strategy in the renewal of landscapes is to use super-advanced trees as replacements. A four or five metre tree cannot be grown overnight, so forward ordering of such stock is essential. Where this is well specified and managed, trees of excellent quality can be grown and installed.

As most limits to tree survival are below ground, it is important to seek advice from a soil scientist when significant landscapes are being developed, especially if the site has had a history of disturbance. Adequate soil volume and good soil drainage are not negotiable, especially if large stock is to be installed in a typically disturbed urban soil. At this stage, hard landscaping and other infrastructure should be planned for and designed to limit long-term impacts on trees. This is especially important for underground services.

Planting

One internationally common cause of poor establishment is planting trees too deeply. Unfortunately, there are many technically inaccurate planting details doing the rounds of many firms of landscape architects. A common problem with these details is the over-excavation and then backfilling of the planting hole.
If trees are planted too deeply in fine-textured soils, water will not penetrate the root ball; this is an example of a "perched water table". Planting too deeply will also compromise oxygen diffusion to the roots and may cause mechanical damage to the stem. The outcome of planting must be that the top of the root ball, or better, the root crown, is level with the finished level of the soil forever! The best way to ensure this is to state that the depth of the planting hole must be the depth of the rootball. Some useful references for planting details are given in the bibliography attached to this paper. Another potential cause of failure or poor establishment is the use of excessive amounts of organic matter in the backfill. Soil organisms compete with roots for oxygen. Advice should be sought from a soil specialist who is familiar with amenity landscapes. The watering-in of trees, immediately after installation, is an integral part of the planting process. For a guide to determining how much water should be applied, refer to the paper presented by Dr Peter May at the 2004 TREENET Symposium on "Soils, Water and Tree Establishment". The water must be applied gently through the rootball.

Establishment and maintenance

Once trees have been installed, they must be maintained until they are self-sustaining, although in some highly constructed landscapes this may be for their entire lives. Watering is a critical component, as is the protection of young and vulnerable trunks from mechanical damage. The damage caused by mowers and whipper-snippers is epidemic and completely unnecessary and unacceptable. Where trees are planted into turfed areas, the installation and maintenance of a mulched area around the base of the tree is a useful strategy on many levels – even if tree guards are installed. Regular inspections should be part of the establishment process, as early failures or poor performance should be assessed and addressed sooner rather than later. Monitoring of performance should be ongoing. Regular inspections will also highlight the need for any formative pruning. Pruning is likely to be an ongoing process. All pruning must be performed according to the general conditions of AS4373 Pruning of Amenity Trees, with particular pruning requirements clearly specified.

Changes in the rootzone are the most common causes of stress to established trees. For this reason, all proposed changes such as paving, topdressing, level changes and the installation of underground services should be assessed for their potential impact on the roots of trees. This assessment should involve a consulting arborist. What is becoming clearer, with some recent major tree failures in very public landscapes, is that mechanical damage, to both roots and root buttresses, is to be avoided at all costs. Wounding damages bark and potentially allows the entry of pathogens. Some genera of wood decay fungi, such as Phellinus spp. and Ganoderma spp., have been identified as causal agents in the failure of mature trees in Hyde Park, Moore Park and the Royal Botanic Gardens Sydney. These fungi enter through wounds. The wounding of large woody roots as a result of the "upgrading" of hard landscaping can lead to catastrophic failures with major implications for public safety. The key to the sustainable management of mature trees is to MAINTAIN A STABLE ENVIRONMENT.

Managing over-mature/senescent trees

As trees age they are less able to cope with changes in their environment.
Inevitably the aging process leads to more dead wood and an increasing susceptibility to wounding and decay. In some species, structural defects become more obvious. As trees age, the critical issue becomes hazard management. In some instances the structural defect may be removed or abated. This is not always possible, and if it is important that the tree be retained, the issue becomes target management. This may involve fencing off the tree; clearly there is a limit to how many trees can be fenced off from public access. [A good example is the "Children's Fig" in the Royal Botanic Gardens Sydney.] When redevelopment of landscapes containing significant over-mature trees is planned, it is essential that an experienced consulting arborist assess the impact. If trees are a dominant and significant element in the landscape, and they are to be retained, they must be treated as a constraint in the process.

Tree removal and replacement

This is often a sad but inevitable outcome of landscape management. Unfortunately, the people responsible for some of our significant landscapes did not leave behind detailed landscape or tree management plans outlining the design intent and the stage at which the trees should be removed and replaced. When replacement tree planting is planned, so too should be the process of replacement. The rationale should be clearly stated; however, at best, this could only be a guide to future landscape managers. Another critical element in the tree removal process is public consultation. The facts should be clearly presented and based on sound arboricultural practices. The removal of a significant avenue of Phoenix canariensis from Centennial Park in Sydney and its eventual replacement with Agathis robusta is an example of successful public notification. However, regardless of what may be a thorough and time-consuming exercise of public consultation, the process may be hijacked by politicians and the media and thus sensationalised. Removal and replacement allows for the implementation of current best practices and may allow for the planting of species better suited to present conditions.

THE DOMAIN – A CASE STUDY

The Domain is a 28 ha parcel of land that bounds the Royal Botanic Gardens Sydney. In 1807 it was named the "Domain of the Governor's Residence" and it was gradually "improved" by successive governors. In 1828 it was identified as a place "reserved for Public Purposes". In 1848 it was officially placed under the management of the Superintendent of the Botanic Gardens, and finally, in 1980, under the Royal Botanic Gardens and Domain Trust. The Trust is a statutory body brought into existence by the Royal Botanic Gardens and Domain Act, 1980 and reports to the NSW Minister for the Environment. The Domain contains over 1000 trees, some of which date back to pre-European times. Most of the significant trees are the legacy of two early Directors of the Botanic Gardens, Charles Moore (1848-1896) and Joseph Maiden (1896-1924). One of Charles Moore's signature species was Ficus macrophylla (Moreton Bay Fig). Prior to the recent removals, there were 149 Moreton Bay Figs, many of which date back to Moore's time. Hence the Domain is a landscape of great horticultural, scientific, and historic significance.

The Domain is divided into a number of management precincts. The trees in question are on the western boundary of the Philip Precinct, between Hospital Road and the playing fields. Hospital Road is largely a service road for Sydney Hospital and Parliament House.
[The offices of NSW parliamentarians overlook the Domain.] It is a point of access for pedestrians from Macquarie Street into the Domain. It is the section of the Domain used for major concerts and events, such as "Opera in the Park", which attract up to 80,000 people.

Managing a cultural landscape is a complex business, especially for an organisation such as the Royal Botanic Gardens and Domain Trust. In this context, issues of scientific and botanical interest must be balanced with heritage, aesthetics and risk management in an environment of restricted finances and limited human resources. Managing a landscape is more than managing trees on an individual basis. An important criterion for landscape management is to maintain a range of plantings of uneven age and of diverse species. It would be financially crippling and aesthetically devastating to have a significant number of old trees failing in a relatively short time. Where a landscape has had intensive periods of extensive plantings, such as the Domain and Centennial Park, the prime positions for plantings are already occupied and other legitimate uses have been established in adjacent areas. For example, the large open spaces in the Domain are used extensively for sport and for major events.

In 2003, a decision was made by the Royal Botanic Gardens and Domain Trust, and supported by both the independent Scientific and Horticultural Committees, that the renewal of the Domain must be accelerated. Over several decades, the total number of trees had declined and the condition of some of the oldest plantings had deteriorated to the point where they were hazardous. A safe useful life expectancy or SULE analysis of the entire tree population of over 1000 trees was carried out. This process highlighted the least sustainable trees and the most degraded parts of the landscape. For many of us who knew the trees, it was no surprise that the trees along Hospital Road were identified as the ones that should go. It was proposed that 11 trees, including 5 Moreton Bay Figs from Moore's time (about 140 years old), should be removed to make way for 33 new trees. The removal of 11 trees represented about 1% of the trees in the Domain. The loss of five Moreton Bay Figs represented 2% of the number of this species in the Domain. Both committees and the Trust supported this proposal.

The community was consulted, and each Member of Parliament was written to informing them of the decision, as was Sydney City Council. The wider community was informed through numerous announcements in the press and through signage in the Domain. Unfortunately, between the announcements and the implementation, a new Council was elected. To cut a long story short, the decision was challenged by the new Council of the City of Sydney in the NSW Land & Environment Court. This resulted in a lot of media attention and distracted many senior staff of the Royal Botanic Gardens for months. Expert witnesses were engaged and alternative management options were considered, such as reduction pruning to reduce the risk of failure and the inter-planting of the new trees between the old trees. However, the RBG team deemed that removal was the most sensible option. In the end, the court found in favour of the Royal Botanic Gardens and Domain Trust. All but one of the trees were removed and the 33 new trees were planted. The attention given by the media to the planting was almost non-existent.
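As an aside on the SULE process itself: once each tree has been assessed, the triage reduces to simple bookkeeping over the inventory. The sketch below is a minimal illustration of that tabulation step only, not the method actually used for the Domain. The CSV columns (tree_id, precinct, life_expectancy_yrs), the category cut-offs and the file name are all hypothetical assumptions; in practice an arborist assigns SULE categories from a field assessment of condition, structure and site conflicts, not from a single number.

```python
# A minimal sketch (not from this paper) of tabulating a SULE-style triage
# of a tree inventory. Column names, cut-offs and file name are illustrative
# assumptions only.
import csv
from collections import Counter

# Hypothetical categories, loosely echoing published SULE groupings:
# 1 = long (>40 yrs), 2 = medium (15-40), 3 = short (5-15), 4 = remove (<5).
def sule_category(years):
    if years > 40:
        return 1
    if years >= 15:
        return 2
    if years >= 5:
        return 3
    return 4

def summarise(path):
    """Count trees per SULE category within each management precinct."""
    by_precinct = {}  # precinct name -> Counter of SULE categories
    with open(path, newline="") as f:
        # assumed columns: tree_id, precinct, life_expectancy_yrs
        for row in csv.DictReader(f):
            cat = sule_category(float(row["life_expectancy_yrs"]))
            by_precinct.setdefault(row["precinct"], Counter())[cat] += 1
    for precinct, counts in sorted(by_precinct.items()):
        total = sum(counts.values())
        short_or_remove = counts[3] + counts[4]
        print(f"{precinct}: {total} trees, "
              f"{100 * short_or_remove / total:.0f}% short-life or removal")

if __name__ == "__main__":
    summarise("tree_inventory.csv")  # hypothetical inventory file
```

Run over a full inventory, a summary of this kind makes it easy to see which precincts carry the highest proportion of short-life trees, and therefore where staged replacement planting is most urgent.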
Despite the drama, the removal and replanting process allowed for the testing and remediation of the soil; the replacement species were chosen on a number of criteria, including resistance to compaction, low susceptibility to Fig Psyllids and heritage values. This small but landmark project is a taste of things to come as the public, politicians and landscape managers come to the realisation that landscapes are dynamic and that the largest and most conspicuous elements, the trees, don't last forever.

BIBLIOGRAPHY

- AS4373-1996 Pruning of Amenity Trees. Standards Australia.
- Bradshaw, A.; Hunt, B. & Walmsley, T. (1995) Trees in the Urban Landscape: Principles and Practice. E & FN Spon, London.
- Clark, R. (2003) Specifying Trees: A Guide to Assessment of Tree Quality. 2nd Edition, Natspec Guide; Construction Information Systems, Australia, Milsons Point.
- Costello, L.R. & Jones, K.S. (2003) Reducing Infrastructure Damage by Tree Roots: A Compendium of Strategies. Western Chapter of the ISA, California.
- Craul, P.J. (1992) Urban Soil in Landscape Design. John Wiley & Sons, New York.
- Craul, P.J. (1999) Urban Soils: Applications & Practices. John Wiley & Sons, New York.
- Gilman, E. (1997) Trees for Urban and Suburban Landscapes. Delmar Publishers, Albany, NY.
- Harris, R.W.; Clark, J.R. & Matheny, N.P. (2004) Arboriculture: Integrated Management of Landscape Trees, Shrubs & Vines. 4th Edition, Prentice Hall, New Jersey.
- Hitchmough, J.D. (1994) Urban Landscape Management. Inkata, Melbourne.
- Lonsdale, D. (1999) Principles of Tree Hazard Assessment & Management. Forestry Commission, The Stationery Office, London.
- Matheny, N.P. & Clark, J.R. (1994) A Photographic Guide to the Evaluation of Hazard Trees in Urban Areas. 2nd Edition. International Society of Arboriculture, Savoy, Illinois.
- Miller, R.W. (1997) Urban Forestry: Planning & Managing Urban Greenspaces. 2nd Edition. Prentice Hall, New Jersey.
- Solness, P. (1999) Tree Stories. Chapter & Verse, Neutral Bay, NSW.
- Watson, G.W. & Himelick, E.B. (1997) Principles & Practice of Planting Trees & Shrubs. International Society of Arboriculture, Savoy, Illinois.
- Watson, G.W. & Neely, D. (Eds) (1994) The Landscape Below Ground: Proceedings of an International Workshop on Tree Root Development in Urban Soils. International Society of Arboriculture, Savoy, Illinois.
- Watson, G.W. & Neely, D. (Eds) (1998) The Landscape Below Ground II: Proceedings of an International Workshop on Tree Root Development in Urban Soils. International Society of Arboriculture, Savoy, Illinois.
- Journal of Arboriculture – research papers from the International Society of Arboriculture.
- Arborist News – contractor issues and educational topics from the ISA.
- Arboricultural Journal – research papers from the Arboricultural Association (UK).
- International Society of Arboriculture – Australian Chapter (http://isaac.org.au)
- TREENET (www.treenet.com.au)
- www.statewide.nsw.gov.au/risk.htm (for Best Practice Manual – Trees and Tree Roots version 2)

(For an extended bibliography of arboriculture references, email Judy Fakes on [email protected])
Kaigetsudô Ando (懐月堂安度 c. 1671–1743, act. c. 1704-36) was the founder of a noteworthy and influential school of ukiyo-e artists specializing in portrayals of courtesans rendered in a monumental style. Critics have lauded the Kaigetsudô beauties — James Michener called them the "finest symbol of ukiyo-e" and Richard Lane described them as the "Bodhisattva of the demimonde" and "bold memorials to Japanese womanhood." Lane went further, stating that "there is something more to the Kaigetsudô paintings than a picture of a courtesan: indeed, in order to enjoy these paintings to the fullest degree, one must develop something of the Japanese love and appreciation for the kimono as a work of art," and that what is depicted is "something more than a simple girl: there is, shall we say, a whole culture involved."

Ando (安度 also read as Yasunori), who lived in Asakusa Suwa-chô in Edo, apparently produced only paintings (roughly 30 survive), including handscrolls. Ando's family name was Okazawa (岡沢) or Okazaki (岡崎), his common name was Dewa-ya Genshichi (出羽屋原七), and he also used the art name Kun'unshi (翰運子). He sometimes included only "Kaigetsudô" (懐月堂) in his signature, but then accompanied it by his artist seal reading "Ando." He probably had training in Chinese-inspired academic Kano (狩野) and Nanga (南画) literati painting.

In the large kakemono-e (hanging scroll picture: 掛物絵) on the right, made with ink and colors on paper, Ando portrayed a courtesan walking against a strong gust of wind as her obi (sash: 帯) and outer robe are swept back by the disturbance. Such animation, however slight, in a composition by a Kaigetsudô artist is unusual. Even so, she maintains her regal bearing, a characteristic of the genre, and her elaborately decorated robes are a focal point, another constant in Kaigetsudô paintings and prints.

In 1714 Ando was exiled to the island of Izu Oshima due to his association with a merchant named Tsuga-ya Zenroke, who was implicated in a scandal now referred to as the Ejima Ikushima jiken (Ejima-Ikushima affair: 江島生島事件). Lady Ejima, the principal lady-in-waiting to the shogun's mother, left the Ôoku ("Great interior," the shogun's women's quarters: 大奥) to visit the grave of the late shogun Tokugawa Ienobu (1662-1712). While away, and against strict rules and etiquette, she accepted an invitation from the popular actor Ikushima Shingorô (生島新五郎 1671–1743) to attend a kabuki performance at the Yamamura-za, Edo. After the performance, she invited the actor and others to a party at a tea house. The festivities ran late and Ejima missed the closing of the gates to the Ôoku. When Ejima's behavior was discovered, a power struggle ensued between factions advising the shogun Tokugawa Ietsugu (1709-16). The Ôoku was investigated and numerous infractions were uncovered. Ultimately, 1,300 people were punished. Shingorô was banished from Edo until 1742 and the Yamamura-za was shut down. Ejima was sentenced to death, but received a pardon.

Ando was pardoned in 1722. Upon returning to Edo, he continued painting (as he had while in exile) and also took up designing illustrations and contributing hokku (opening verses in linked poems: 發句) for haikai (comic linked verse: 俳諧) anthologies under the literary names Kaigetsudô Jôsen and Kaigetsudô Shisui.

There is a consistency in drawing and presentation among Ando's paintings and those of his pupils, nearly always featuring a single courtesan placed before a neutral background.
There is typically a pronounced curve to the standing or walking figures, with the central torso jutting out, emphasized by the large obi (kimono sash: 帯) tied in front. The women are invariably adorned in fine kimono, often of a remarkable inventiveness in design, which are rendered with bold, sweeping lines and strong colors. The faces are stylized and almost mask-like in their aloofness, restrained expression, and simplicity. These are not actual portraits, but faces drawn to conform with a typology used by all artists of the Kaigetsudô ("Embrace the moon studio": 懐月堂).

The painting on the left by Ando is a kakemono-e (1,080 x 470 mm) from c. 1704-1714 with ink and colors on paper, depicting a courtesan dressed in a kimono whose furisode (long or "swinging" sleeves: 振袖) and lower hem are decorated with monkeys climbing about in a tree. Such fanciful adornments are often found in the works of the Kaigetsudô artists. The curve of her form and the long, sweeping, thick lines of the robes are typical of Ando's style. Note that she has lifted her robes in front to aid in walking, a very common gesture found in the works of the Kaigetsudô.

The vast majority of extant Kaigetsudô paintings were made with inexpensive pigments on paper (rather than silk). Thus, while a great many other ukiyo-e paintings were expensive commissions from wealthy connoisseurs, many of the Kaigetsudô works might have been sold as souvenirs of visits to the Yoshiwara pleasure quarter, which from 1656 was located just north of Asakusa (where Ando lived). This conjecture seems to gain support from the signatures used by Ando and his five direct followers, who typically prefixed their names with the characters for Nihon giga ("Japanese painting for fun": 日本戯畫 or 日本戯画, with "Nihon" possibly read as "Yamato"). Moreover, there is a contemporary (c. 1701-11) painted handscroll (281 x 4,647 mm) offering some corroboration of the inexpensive, even "lowly" nature of Kaigetsudô productions. In the Asakusa fûzoku zukan (Illustrated genre scenes of Asakusa: 浅草風俗圖巻), now in the National Museum of Japanese History, Sakura, there is a scene depicting a painter set up in a nondescript booth in front of the Asakusa temple gate. Passersby are shown observing him painting a monochrome landscape, but they can also see two strongly colored kakemono-e in the Kaigetsudô style displayed on the wall.

The work by Ando shown below is a rarity among Kaigetsudô paintings: a horizontal rather than vertical format composition, painted on silk instead of paper. Moreover, it portrays two seated figures instead of the nearly ubiquitous upright solo Kaigetsudô beauty. Ando's handling of the forms and colors is masterful, and the relaxed pose of the courtesan is complemented by delicate line work and sophisticated colors throughout the elegant robes.

Ando's pupils worked in a manner barely distinguishable from his style. Besides using the aforementioned signature prefix Nihon giga, they also inserted matsuyô ("end leaf": 末葉) into their signatures, which can be taken to mean "follower." Three of them (Anchi, Dohan, and Doshin) designed prints as well as paintings; two others (Doshu and Doshû) produced only paintings. Dating any of the Kaigetsudô works is problematic, as no definitive assessment has so far been achieved.

Kaigetsudô Anchi (懐月堂安知 also read "Yasutomo") was the only follower to use a character from Ando's art name (An, 安). Moreover, he was apparently the only one to establish a studio, which he named the "Chôyôdô" (長陽堂).
Anchi is considered by many to be the most accomplished of the pupils, known for about 20 or more paintings, although his prints number as few as six or seven. The painting shown below on the left depicts a courtesan seated on a bench reading a manuscript (possibly a love letter). The outer robe is decorated with a calligraphy pattern called hogo-zome ("scratched-pad design" or "calligraphy scrap paper": 反故染), a calligraphic textile-dyeing pattern going back at least as far as the Heian period (794-1184) that was especially popular during the Edo period. The name of the dyeing technique came from words meaning "against the old dyeing," that is, a design made to look like the kimono of a destitute man, fashioned from reused writing paper. There are many examples in ukiyo-e paintings and prints of women and actors wearing robes with inscription patterns. Anchi's painting is signed Nihon giga Kaigetsu matsuyô Anchi zu (日本戯画 懐月末葉安知圖).

The hand-colored tan-e below on the right, published by Maru-ya Jinpachi (丸屋甚八 Marujin 丸甚, Enjudô 円寿堂) circa mid-1710s, is also a design featuring a hogo-zome or dyed calligraphy pattern on the robes, with the lettering presumably related to one of Japan's great poets of the Heian period, who is pictured on the lower inside hem of the outer robe. Here, again, we see the long sweeping lines for the robes, although the bold coloration found in paintings is somewhat subdued in this tan-e (red-lead print: 丹絵), partly a result of the printed black sumi (carbon black pigment: 墨) covering so much of the robes (the purple and yellow colors are also faded in this specimen). The print is signed Nihon giga Kaigetsu matsuyô Anchi zu (日本戯画 懐月末葉安知圖).

Kaigetsudô Dohan (懐月堂度繁 also read "Norishige") was one of Ando's three pupils who designed prints as well as paintings. Nothing is known about his personal life, but his surviving works number at least 12 prints (all but one published by Iga-ya, 伊賀屋) and 11 paintings, seemingly from the mid-1710s. He has been criticized as perhaps the least inspired of the Kaigetsudô artists, although several of his works match up well with those by other pupils.

The kakemono-e (hanging scroll painting: 掛物絵) shown below on the left portrays one of the stately courtesans who served as the primary focus for the Kaigetsudô artists. Here she is affixing a bekkô-gushi (tortoise-shell comb: 鼈甲櫛) to her "Katsuyama" (勝山) coiffure. The hairstyle, which is so often encountered in Kaigetsudô paintings and prints, was associated with prostitutes and is said to have been created by a high-ranking prostitute or oiran (花魁) in Katsuyama (about 300 km west of Edo).

As previously mentioned, the kimono were of prime importance in these designs. Regardless of season, the highest-ranking courtesans wore two layers of nagajuban (long undergarments: 長襦袢) underneath three layers of kosode (small sleeve: 小袖) kimono plus three layers of long outer garments. The women's skills in layering kimono and combining decorative patterns were on full display in the works of the Kaigetsudô artists. In Dohan's painting there is, arguably, a certain directional rigidity to the lines and shapes, particularly in the reliance on repetitive diagonals. The motif on the green sleeve is the yûgao ("evening faces": 夕顔), a flower that blooms as evening comes but withers by dawn. In classical and premodern poetry and literature, its short-lived blossoms came to symbolize the impermanence of life and the futility of worldly cares.
Essentially, the main focus of Dohan's print was the display of luxurious kimono, worthy of only the highest-ranking courtesans. The print is signed Nihon giga Kaigetsu Matsuyô Dohan zu (日本戯画 懐月末葉度繁). Dohan also designed a large print depicting a courtesan in a similar fashion (see Museum of Fine Arts, Boston, acc #21.6645).

Seated female figures were among the rarities in Kaigetsudô paintings and prints. The large ôban (ô-ôban, 555 x 288 mm) hand-colored print shown below on the right, published by Motohama-chô Iga-ya hanmoto (元濱町伊賀屋板本) circa mid-1710s, is among the best examples. Dohan depicted a courtesan seated on a large box while dangling a towel with a shibori (shaped-resist or tie-dyed: 絞り) pattern as she plays with her pet kitten. The side of the box seen below the beauty's leg is decorated with a partly visible figure of a monkey, and a label on the left side reads: Onkashi dokoro, Asakusa Komagata-chô Masaru-ya (House of Masaru, seller of confections in Komagata Street, Asakusa). The print, in effect, serves as an advertisement for a local vendor who probably commissioned or subsidized the woodblock edition. The print is signed Nihon giga Kaigetsu matsuyô Dohan zu (日本戯画 懐月末葉度繁圖).

Kaigetsudô Doshin (懐月堂度辰 also read as "Noritake" or "Noritatsu") was another Ando pupil who produced both paintings and prints, although surviving works are very few. So far, only three prints are known with his signature. It has been proposed that his style, although entirely in the Kaigetsudô tradition, is somewhat more amiable in expression, rendered with refinement and fuller forms.

The image below on the left is a kakemono-e painted on paper in large format (1,114 x 497 mm). The outer robe has a pattern of yûgao (lit., "evening face," the so-called "moon flower": 夕顔), a flower with a millennium of literary and poetic symbolism in Japan, including the Yûgao chapter in the Genji monogatari (Tale of Genji: 源氏物語). It also appears in family crests, where it is frequently combined with images of the moon. Yûgao is a white flower that blooms as evening comes but withers by dawn. It is thus a symbol of life's impermanence and the brevity of the physical beauty possessed by the courtesans portrayed in this and other works by the Kaigetsudô artists.

In the image below right, Doshin's strolling beauty has tucked her hands inside the kimono, which is decorated with a pattern of falcon feathers and tasseled rope. The artist's large seal (Doshin, 度辰) is placed below the signature, which reads Nihon giga Kaigetsu matsuyô Doshin zu (日本戯画 懐月末葉度辰図). As is nearly always the case, the large ô-ôban was printed on two sheets of paper (here the join is visible horizontally, running through the signature). The work was published by Nakaya (his seal reads Nakaya Tôriabura-chô hanmoto: 板元通油町中屋). In viewing a print such as Doshin's, one is perhaps reminded of a comment by Richard Lane, who once wrote that the "powerful yet pensive figure of a lone courtesan ... [of the Kaigetsudô] has come to symbolize the living image of old Japan."

The two remaining direct pupils of Kaigetsudô Ando produced paintings but no woodblock prints. As few as six works survive by Kaigetsudô Doshu (懐月堂度種 also read as Noritane or Nobutane), including the kakemono-e shown below on the left. She turns back to look over her shoulder at something (or someone) that has caught her attention as she strolls in the Yoshiwara. She lifts the front of her robes to facilitate walking in her many layers of robes.
Her elegant outer robe is decorated with orange blossoms trailing over a fence.

Kaigetsudô Doshû (懐月堂度秀 also read as Norihide or Nobuhide) is known by a mere three paintings. One of these, a somewhat trimmed but large kakemono-e shown below on the right, portrays a courtesan adjusting her hairpin. Once again, strongly colored and boldly patterned robes are the focus of a Kaigetsudô composition, so much so that despite the toning and abrasion of the paper, the painting retains a vibrancy that communicates much of Doshû's original work.

Baiôken Eishun (梅翁軒永春 also read as Baiôken Nagaharu, act. c. 1710–1755) was one of the ukiyo-e artists specializing in paintings of courtesans drawn in a manner similar to the Kaigetsudô style. Eishun used other art names, including Hasegawa Eishun (長谷川永春) and Takeda Harunobu (竹田春信), along with the pseudonym Shôsuiken (松翠軒). He produced paintings for hanging scrolls and designs for illustrations in woodblock-printed books. Eishun, along with Matsuno Chikanobu (see below), has long been considered part of a minor revival of the Kaigetsudô school after it fell into decline following the exile of its founder Kaigetsudô Ando in 1714. However, recent research suggests that while influenced by the Kaigetsudô, Eishun and Chikanobu, along with such artists as Baiyûken Katsunobu (梅祐軒勝信) and Nishikawa Terunobu (西川照信) — see below — were independent painters who might, in fact, have been seen as competitors of the Kaigetsudô. The poem inscribed on Eishun's painting reads: "Though I didn’t say / I was retiring for the night / still she loosens her sash. / She reads my thoughts, / bringing tears to my eyes." [trans. by Miyeko Murase]

Matsuno Chikanobu (松野親信, act. 1720s-30s?), who also used the pseudonym "Hakushôken" (伯照軒), was possibly one of the most popular painters of his time and remains highly regarded today. It is said that he might have worked closely with Baiôken Eishun (see above). He adopted the Kaigetsudô style, portraying courtesans dressed in brightly colored and exquisitely designed kimono. Unlike the Kaigetsudô artists, however, Chikanobu used high-quality pigments and silk for his paintings. His figures tended to have sweetly smiling faces with small upturned mouths, and they wore robes with rhythmically modulated outlines. In the painting shown below on the right, the courtesan, probably on parade along the wide boulevard called Naka-no-chô (中の町) that bisected the Yoshiwara pleasure quarter, wears a spectacular kimono decorated with snow-capped bamboo.

Tôsendô Rifû (東川堂里風 act. c. 1720-30) is also counted among the later artists associated with the Kaigetsudô style of courtesan portraiture. Tôsendô's style, although compatible with the Kaigetsudô, also shows some influence of Hishikawa Moronobu, and particular works seem to adapt the manner of Matsuno Chikanobu (see above). In the kakemono-e shown below on the left, a courtesan is seated on a bench while cooling herself with an uchiwa (rigid fan: 團扇 or 団扇). She wears a light-weight summer kimono patterned with Genji-guruma (wagon or ox-cart wheels: 源氏車), a familiar motif in paintings, woodblock prints, and family-crest designs (crest designs of this kind were popular from the second half of the Heian period onward).

Takizawa Shigenobu (滝沢重信 active c. 1720–40) also used the pseudonym "Ryûkadô" (柳花堂). He is known by at least nine paintings. The example shown below on the right depicts a delicate young woman walking on a veranda in a garden.
There is a notable sweetness to the courtesan's demeanor and a fragility to her form quite unlike the imposing Kaigetsudô beauty, despite the influence evident from that school of artists. Her impossibly tiny hands and feet prefigure some of the waif-like anatomies found in Nishikawa Sukenobu's works. In fact, it may be that Sukenobu exerted some influence on Takizawa Shigenobu and a few other late artists of the Kaigetsudô school.

Baiyûken Katsunobu (梅祐軒勝信) is known by a dozen or so paintings of courtesans from the 1710s, painted in a style influenced by the Kaigetsudô. A seal reading Shin or Kami (deity: 神) appears on at least one of his works. Katsunobu might have been associated with the artists Matsuno Chikanobu and Baiôken Eishun (see above). Here, below left, a courtesan is about to place an ornamental tortoise-shell comb in her hair. As with the Kaigetsudô Anchi works discussed earlier, she wears a kimono with a hogo-zome ("calligraphy scrap paper": 反故染) design. Her figure is slimmer and more elongated than the typical Kaigetsudô beauty.

Nishikawa Terunobu (西川照信), another artist influenced by the Kaigetsudô, painted not only beauties but also actors. He also produced designs for ehon (woodblock-printed illustrated books: 絵本). Some of his works likewise present slender figures that veer away from the typical Kaigetsudô model, as in the example shown at the lower right. Here, an onnagata (performer of female kabuki roles: 女方 or 女形) wears the murasaki-bôshi (purple cap: 紫帽子), the purple silk headcloth used by onnagata to cover the shaved forelock, seen both during kabuki performances and off-stage on formal occasions. The face of the unidentified actor is drawn in a manner more closely resembling an actual portrait than any of the previous facial typologies shown on this web page. Although the deportment of the onnagata is feminine enough to be convincing on stage, the features of the face are certainly masculine, especially the strong jaw. The bold horizontal stripes decorating one of his inner kimono also suggest a male figure. The signature reads Yamato eishi Nishikawa Gyokuuken Terunobu kore zu and is sealed Terunobu (照信).

© 2020 by John Fiorillo
Strangely enough, the most modern source on the medieval life and times of Peter Stumpp, otherwise known as the Werewolf of Bedburg, can be found in the lyrics of the rock band Macabre, a group of American troubadours who specialize in the obscure genre of “murder metal.” Paring down the meat of the story to bare bones, their song works in harmony with history yet offers little in the way of understanding. A heartier version can only be found in time-worn sources from the past, all of which provide a feast of gruesome details on the world’s most famous werewolf.

Over 400 years ago
The people were terrorized
Around Bedburg and Cologne
In the German countryside
According to the pamphlet
Published at that time
A man named Peter Stumpp
Committed atrocious crimes

Although assigned one of the most monstrous titles in all of history, Peter Stumpp—also sometimes written as Stube, Stübbe, or Stumpf—was probably not a real werewolf at all. Depending on how you view his story, he was likely one of two things—a man violently scapegoated to appease the fear of a fractured society or a straight-up deranged serial killer with cannibalistic tendencies. There is historical evidence to support both scenarios. It is true medieval people often relied on phantasmagorical notions, like werewolves and witches, to explain dark and inconceivable things like mental illness, which could be why he ended up in chains. Perhaps he did prey on the unsuspecting lifeblood of his community—or perhaps he was victimized by the sheer power of lore. Either way, poor Peter Stumpp definitely had one hell of a dreadful journey from mere man to terrifying myth.

Much like today, people suffered from all sorts of emotional and psychological disorders in the 16th century, none of which were legitimized through a religious narrative or accepted in the established moral code. The boundaries for behavior were clearly defined by polite society, and anything at odds with these definitions—whether physical or mental—was typically considered deviant or evil. There was just no room for such disorders in the simplicity of agrarian life. And this is where the story of Peter Stumpp became mysterious. While many details of his story were supported by fact, they often seemed to collide with folklore and spin outward again towards the truth, snagging wild elements of fantasy as they went. When we are able to separate and analyze these different avenues, understanding the motivations of medieval societies becomes easier, and we begin to see that history has always been intertwined with lore—the question is just to what degree? When we take the time to break down fact from fiction and consider how they became so bound together in the first place, we often glimpse the universal desire to find an explanation in the outrageous, especially if it quells our deepest fears. And perhaps, in this case, it will allow us to better understand the Werewolf of Bedburg.

Peter Stumpp the Werewolf
Aided and abetted
By his mistress and daughter
Body parts were found
On the land and in the water
His daughter had a son
From repeated incest
Stumpp ate his son
And said the brain tasted the best

The lyrics may sound outrageous but make no mistake—the story of Peter Stumpp, while steeped in much folklore, is both valid and legitimate, first detailed in a brief 16th-century pamphlet by George Bores called The Damnable Life and Death of Stubbe Peeter. In it, “the life and death of one Stubbe Peeter, a most wicked sorcerer” are outlined in colorful detail.
Although no copies of the German version can be found today, two English translations still exist, one in the British Library and the other in the Lambeth Library outside London. In addition to this primary source, the diary of a local Cologne alderman, Herman von Weinsberg, also provided some background about the sensational case and illustrated many of the images seen in posters around southern Germany at the time. When the famous English occultist Montague Summers discovered the artifacts in 1933, he used them—along with the original woodcuts—to create his literary work titled The Werewolf in Lore and Legend. And as a result of this literary trio, the strange events surrounding Stumpp’s life as a werewolf have been repeatedly refurbished for modern consumption.

Sensational beyond belief, the Stumpp case gained publicity around much of Europe and attracted the attention of communities in the Netherlands, England, and Denmark. Descriptions of witches and other occult behaviors sprang up often in the region and were reinforced by various narratives like The News From Scotland, released in 1591, which highlighted stories of witches intent on destroying the King of Scots and his Danish queen. The condemning literature about Stumpp fit in nicely with this larger narrative and added to the ever-growing fear of hidden conspiracies and magical assaults. In fact, the Eifel region was wildly receptive to such assertions and was once considered ground zero for witch-hunting between 1580 and 1650, leading to the execution of some 1,500 people. The events of Stumpp’s life were also referred to in a writing by Edward Fairfax, who provided a first-hand account of his own daughters’ persecution as witches in 1621.

Some of the facts in the case were pretty straightforward—Peter Stumpp was a German farmer, born in the village of Epprath near Bedburg, and a relatively wealthy and well-respected man in his community. But other parts of the story—like how he earned himself the frightening title of werewolf and became the defendant in one of the most lurid and famous occult trials in history—were a little more complicated to explain. And the part about how he ended up spread-eagle on the infamous breaking wheel, where he endured a level of torture reserved only for those in league with Satan himself, set a new standard for medieval drama.

Tied to a wooden wheel
They took red-hot pinchers
And pulled his flesh off
In several areas
They broke his arms and legs
With a hatchet
Then burned the evil Stumpp
After cutting his head off

To really understand how Stumpp became a werewolf, it’s important to consider the zeitgeist of Bedburg in the 1500s. The town was in the throes of a great terror which no one could seem to unravel. A diabolical creature of some kind was slaughtering cattle each night and leaving the gory remains to be found in the surrounding fields the next morning. It had been going on for years, and the townspeople were becoming increasingly unsettled at finding their valuable lambs and calves ripped open and devoured, as if by some savage animal. And it was not just animals—women and children also began to disappear from their homes, only to be found days later in gory shreds along the road—if they were found at all. As expected, a panic broke out among the populace, who quickly determined the crimes to be the act of a beast, possibly a large wolf. For others, the notion of something more sinister, like a werewolf, began to haunt their thoughts.
Whoever or whatever was committing these crimes was either criminally insane or not human—of that, they could be sure. The victims had not just been killed—they had been strangled, disemboweled, bludgeoned, crudely torn apart, and apparently eaten raw. Although the need for unity during this difficult time was clear, the townspeople found themselves at odds as Catholicism and Protestantism battled for religious dominance. Germany’s Rhineland was then part of the waning Holy Roman Empire, and the sense of upheaval and religious separatism was pervasive. The former Archbishop Gebhard Truchsess von Waldburg had been trying for a number of years to introduce Protestantism as law; however, when the Cologne War was lost in 1587, Bedburg Castle became the headquarters of Spanish and Italian mercenaries who were determined to restore the Catholic faith. In fact, this occupation was so violent at times that some historians suggest the gruesome murders may have been the work of belligerent soldiers.

This conflict set the tone of religious intolerance and fueled punitive behavior against the Protestants. It also served as a precursor to the bigger, badder Thirty Years’ War, which would last from 1618 to 1648 and intensify the power struggle between the two faiths, eventually devolving into a fight between France and Austria. In short, the lack of a firm societal backbone left the people of Bedburg confused, afraid, and unsure how to handle the growing terror within their own ranks. But one thing was clear—they needed to find and destroy this thing, whatever it was, before it did them any more harm. Because they had already been victimized by the dreaded Black Death, roving brigands, and the lingering effects of the war, the opportunity to exact justice on an actual being felt like a great way to satiate some of the pain and fear they had been subsisting on for so long. Maybe, for once, they could actually put a face on their enemy and vanquish it, even if it was something beyond human. But whisperings suggested the wolf-like creature was “strong and mighty, with eyes great and large, which in the night sparkled like unto brands of fire, a mouth great and wide, with most sharp and cruel teeth, a huge body and mighty paws.” It would not be easy.

Even though Stumpp was generally regarded as a friendly and successful widower who had been left burdened with two children—a young boy and a teenage girl—some insidious rumors began to suggest he was sleeping with his sister and may have impregnated his own daughter. This foul gossip was suspected to have come from a local man looking to even the score after his wife had a brief dalliance with Stumpp, or perhaps from another source intent on discrediting him. The townspeople began to view Stumpp with distaste and even outright contempt, a fact that was made worse by his physical condition. Because Stumpp had lost his left hand in a farm accident years before, he fell under even deeper suspicion when a wolf’s paw was recovered from a trap in the nearby woods. In the minds of the townspeople, all of whom were desperate to identify the killer, the loss of this corresponding appendage felt like real evidence of his guilt. Yes, the timing of the injury was off, but still. It could be him. On a more sophisticated level, Stumpp was also a recent convert to Protestantism, which meant his persecution would be an ideal way to throw shade on the new faith and revive Catholic sentiment among the masses. After all, who wants to attend church with a werewolf?
Others say Stumpp was no victim at all—but rather an “insatiable bloodsucker,” psychologically damaged and suffering from the rare mental disorder of lycanthropy. Regarded as a highly delusional condition, lycanthropy—from the Greek words lykos (“wolf”) and anthropos (“man”)—was believed to give the sufferer the feeling of being a wolf or some other nonhuman animal, depending on the culture at hand. The person taking on the bestial form would likely have assumed the most dangerous predator of their own land—from an African leopard to a Nordic bear to an Indian tiger—and in Germany, where upwards of 30 wolf attacks could feasibly happen in one year, the wolf was an obvious choice.

The folklore, legend, and deeply rooted fairy tale surrounding lycanthropy lived in the shared psyche of many civilizations, going back as far as ancient Greece, where werewolf myths stemmed from prehistoric times and were promulgated by the Olympian religion. The mountainous region of Arcadia in Greece gave birth to the cult of the Wolf-Zeus, while Mount Lycaeus remained the site where priests would annually mix and eat the slaughtered flesh of both man and beast. According to that legend, whoever tasted it would take on the characteristics of a wolf and could never return to his human state unless he gave up feasting on human flesh for nine long years. The Romans were also smitten with the legend of the werewolf and the myth of Romulus, who was suckled by a she-wolf and later became the founder and first king of Rome. An even older werewolf story appeared in Ovid’s Metamorphoses, completed around 8 CE, which told the tale of King Lycaon, who offended the gods by serving them human flesh for dinner. Jupiter punished this transgression by turning the king into a werewolf, forced to dine on the meat of man forever. But, of course, the most ancient shapeshifter of all first appeared in the Sumerian text and great mother of all literary themes, the Epic of Gilgamesh.

Gilgamesh was roving about… wearing a skin… having the flesh of the gods in his body, but sadness deep within him, looking like one who has been traveling a long distance.

In medieval Europe, the werewolf’s metamorphosis represented social anxieties about the state of being human and how the body, mind, and soul often straddled the tenuous line between man and monster. Drawing on ancient pagan beliefs, shapeshifter myths helped people explain away the seemingly evil acts of the mentally ill—a use that made the beliefs particularly hard to kill. While most people believed in the existence of werewolves and the mythical transmutation of lycanthropes, there was a growing conviction—especially among the more educated—that lycanthropy was a sickness and not an actual transformation. And as a result, some viewed Stumpp as more of a mental monster than a physical one. Either way, his behavior only led to one thing—bloody murder. As for the truth, it was entirely possible he suffered from clinical lycanthropy or some other cultural manifestation of schizophrenia, usually associated with hallucinations, disorganized speech, psychotic outbreaks, and violent tendencies—no one knows for sure. Perhaps Stumpp committed these bestial acts as a symptom of his mental condition, or perhaps he was just a farmer who ended up in the wrong place at the wrong time. As hysteria took hold, men from the town began roving the countryside in heavily armed bands, looking for missing children and hoping to catch sight of the wicked creature.
When their hounds finally caught the scent of the beast and began chasing it, ripping through the underbrush and dragging the panting men behind them, the group spread out to surround the fleeing animal. Frenzied and determined, the dogs pushed on and seemed to have the shaggy black wolf in their crosshairs. But as they flanked the animal on all sides and turned into the clearing where it would be exposed, they did not find a wolf. Instead, they found only Peter Stumpp standing in the field at this most inconvenient moment, apparently out for a walk. Fueled by the enigma of the chase and the foul rumors they had heard, they captured Stumpp immediately and carted him back to town.

Like all good medieval tales, the use of torture soon came into play as authorities strapped Stumpp to the breaking rack, where they planned to grill him on his crimes and the details of his true nature. But according to primary sources detailing the event, there was no need, as Stumpp “voluntarilye confessed his whole life, and made knowen the villainies which he had committed for the space of XXV. yeere.” In fact, Stumpp admitted to a laundry list of horrific crimes spanning 25 years—he had made a pact with the Devil at just 12 years old, and the Devil in return had given him a magic belt imbued with the gift of lycanthropy. When he was found and captured in the woods, he claimed to have tossed the enchanted item in the bushes, but extensive searches by local men found nothing. He also confessed to murdering 14 children and several women, some of whom he devoured entirely as “dainty morsels” so they would never be found. And according to his willing testimony, he had ripped the fetuses from the wombs of his pregnant victims and “ate their hearts panting hot and raw.”

Aside from these horrific murders, Stumpp also confessed to massive amounts of depravity in his personal life. He had been living with his daughter Beele—described by all as charmingly beautiful—whom he raped regularly and forced to bear him a son. He had used his charm and charisma to lure a local woman named Katherine Trompin to his home, where she later became a servant to his sexual desires. And according to Stumpp, he also enjoyed a carnal relationship with an evil succubus who was sent down by the Devil to satisfy his baser desires. Even though he referred to his son as the light of his life, whom he refused to hurt for a long time, the craving for his child’s blood eventually became so unbearable that he had to give in. Luring the young boy into the woods, he proceeded to dash his head against a rock and eat his brains. Hiding in plain sight, he also admitted to stalking appealing girls in town and following them to more rural areas, where he would ravage and destroy them in the bloodiest of ways.

Stories of werewolves were widespread during this time, and outlaws and soldiers sometimes donned wolf skins over their armor as a way to demonstrate their prowess as killers. It was not unusual for people to develop a more clinical type of lycanthropy, where delusions of being a wolf caused them to act out in all sorts of wild and shocking ways. And in his own words, Stumpp described himself as a victim of such a condition, taking “pleasure in the shedding of blood” and in the feeling of fresh, warm blood pouring down his throat. While no one really knew if his confession was based on truth or the desire to evade torture, the brutal manner of his trial and subsequent death was not up for speculation.
Plenty of evidence mounted against him, as witnesses lined up to offer first-hand accounts of how he had chased and attacked them. One young girl told the tale of her escape from the werewolf after the starched collar of her Sunday dress protected her neck from his claws, while another story detailed how Stumpp had been spotted keenly watching some children playing and milking cows in a field right before they went missing. Another man described the occasion in the woods when Stumpp lured him away from his female companion by calling his name, only to circle quickly back and snatch the woman away. The testimony was specific, detailed, and left no doubt in the minds of the special court assembled to sentence him. As predicted, Peter Stumpp was found guilty on October 28, 1589 of murder, cannibalism, incest, witchcraft, and straight-up werewolfery. And as accessories to the crime—or rather innocent bystanders who had become forever tainted by his actions—his mistress and daughter were also sentenced to burn at the stake as a “continual monument to all ensuing ages” about what happens to those in cahoots with a werewolf.

In celebration of his guilt, Stumpp’s execution was planned for Halloween day, 1589, and was well attended by the aristocracy of the region, who could not resist the curiosity of it all. Regarded as one of the most brutal executions in history, Stumpp’s death was not a disappointment. Once he was strapped to the breaking wheel, his flesh was systematically ripped from his body with red-hot pincers and his limbs were bashed again and again with the blunt side of an ax to prevent his rising from the grave. And at the end of this abuse, his head was ceremoniously hacked from his body and placed on a pole carved like a wolf, where it would serve as a “warning to all Sorcerers and Witches, which unlawfully followe their owne diuelish imagination to the utter ruine and destruction of their soules eternally…” As promised, his mistress and daughter were flayed alive before being strangled, their bodies tossed onto the pyre with his and burned down to ash.

Despite its sensationalism—or perhaps because of it—the execution of Peter Stumpp remained a singular event in medieval Europe. As the first and last person to be tried and convicted of such crimes in the region, his fate has settled into legend. But regardless of whether he was guilty or not, the Werewolf of Bedburg’s story remains less about the horrors of werewolves and more about the horrors of man. And the rest is history.

*Special thanks to Jakub Rozalski, whose art perfectly captured the setting for this story.
Estimates of survival after HIV infection in Africa are essential for understanding the natural history of HIV-1 infection and for planning healthcare resources for those infected. Such an understanding is needed for patient management and counseling, and it enables an assessment of the impact of interventions to care for and treat people with HIV, including the provision of antiretroviral therapy (ART). Most of the data on survival and progression to AIDS in low and middle income countries have come from cohorts of prevalent HIV-1 infection or from cohorts of specific groups, such as commercial sex workers [2–4], military conscripts [5], women [6], mine workers [7] and blood donors [8]. We have previously published data from this prospective population-based clinical cohort of HIV-infected rural Ugandan individuals [9,10]. Time to death and time to World Health Organization (WHO) stage 4 during 10 years of follow-up were 9.8 and 9.4 years, respectively. The updated findings presented here cover the time from 1990 until the end of 2003 (after which HAART was introduced in this cohort), thus representing a substantially (3 years) longer observation period and allowing more precise estimates of times between seroconversion, symptomatic disease, the development of AIDS and death. In addition, we also compared progression from seroconversion to ART eligibility and from ART eligibility to death. With now 13 years of follow-up, this is one of the longest standing population-based cohorts to document the natural history of HIV infection in Africa, and one of the very few in which data are available from the time of infection. After the large-scale introduction of HAART, it is unlikely that data from similarly long observation times will become available again in the future. We also present data on the mortality of HIV seroconverters from the same community who were not enrolled in the clinical cohort, as mortality may differ for a number of reasons.

Selection of participants

Participants in the clinical cohort presented here were selected from a larger general population-based cohort in rural south-west Uganda. This larger cohort was established in 1989 to describe the dynamics of HIV-1 infection, and originally consisted of approximately 4500 adults in 15 villages. In 1999, 10 more villages were added to the survey area, bringing the total number of adults to approximately 7000. Since the start of the study, annual house-to-house census surveys have been conducted among this population, followed within weeks by a serosurvey. Monthly birth and death reports are compiled by village-based recorders. This cohort has been described in detail elsewhere [11,12]. In 1990, one third of prevalent cases from the first survey round, randomly selected, were enrolled into the clinical cohort that forms the framework for the data presented in this paper. In addition, all seroconverters (13 years and older) identified during subsequent annual surveys were invited to enroll in the clinical cohort as HIV incident cases (Fig. 1). The estimated date of seroconversion was taken as the midpoint between the last negative and the first positive HIV test. Only individuals with an interval of less than 4 years between the last negative and first positive HIV test were invited into the cohort. All individuals were aged 15 years or older at enrollment, although some were between 13 and 15 years of age at the time of seroconversion.
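The midpoint rule just described is simple enough to state in a few lines of code. The following is a minimal sketch, not project code; the function name and the explicit 4-year cutoff check are assumptions written purely for illustration.

```python
from datetime import date, timedelta
from typing import Optional

def estimated_seroconversion(last_negative: date, first_positive: date) -> Optional[date]:
    """Midpoint between the last HIV-negative and first HIV-positive tests.

    Returns None when the interval exceeds 4 years, mirroring the study's
    rule that such individuals were not invited into the clinical cohort.
    """
    interval = first_positive - last_negative
    if interval > timedelta(days=4 * 365.25):
        return None
    return last_negative + interval // 2  # floor to whole days

# Example: tests on 2000-01-01 (negative) and 2001-01-01 (positive)
# yield an estimated seroconversion date of 2000-07-02.
print(estimated_seroconversion(date(2000, 1, 1), date(2001, 1, 1)))
```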
HIV-negative individuals were randomly selected from the population to match the age and sex of the HIV seroconverters and prevalent cases, in order to facilitate comparisons with the background morbidity and mortality in the study population. Trained field workers visited the individuals identified for enrollment, explained the nature of the study and invited the participants to the study clinic for enrollment into the clinical cohort. At the clinic, a clinician explained the study and answered any questions. On enrollment, all participants gave informed consent (signed or by thumbprint). Cohort participants were invited to attend the study clinic every 3 months for regular clinic visits, and for interim clinic visits whenever they were ill in between regular appointments. At the regular appointment, participants were seen by one of the two study clinicians, who administered a detailed medical and sexual history questionnaire and undertook a full physical examination. A blood specimen was routinely collected to monitor a variety of laboratory parameters. Since 1995, CD4 cell counts have been obtained every 6 months for HIV-positive and every 12 months for HIV-negative participants, using FACSCount (Becton Dickinson, San Jose, California, USA). Between 1992 and 1995, blood samples for CD4 cell counts were occasionally sent to an external laboratory in Kampala and tested using flow cytometry. Any symptomatic disease was investigated and free treatment was provided. Referral for inpatient care was provided if indicated. Ugandan Ministry of Health guidelines were used for all treatments. Up to the end of 2003, neither cotrimoxazole or isoniazid prophylaxis nor ART was provided. At every routine visit, participants were staged according to the WHO staging system [13]. WHO stage 4 was used as the definition of AIDS in this analysis. The vital status of participants who defaulted from the cohort was checked through family or neighbour reports. For reasons of confidentiality, clinic staff and field workers were unaware of participants' serological HIV status, unless the participant chose to reveal his or her status to the clinician or nurse. All participants and their partners were encouraged to use the free counseling and testing services available at the clinic or in their residential villages.

Frequencies and percentages were calculated for categorical factors describing the participants included in this analysis. For time periods, the median and interquartile range (IQR) were used. Person-years from the seroconversion date were computed, and Kaplan–Meier survival functions and life tables were used to describe the cumulative survival probabilities and 95% confidence intervals (CI) for the various endpoints (WHO stages 2 and 3, AIDS, CD4 cell counts below 200 cells/μl, and death). To compare survival, a Cox regression model was used to obtain hazard ratios (HR) and 95% CI for univariate (unadjusted) analysis, and to compare groups adjusting for the effect of age at infection. Likelihood ratio tests were used to assess the significance of differences between groups. All analyses were performed using Stata 9.0 (Stata Corporation, College Station, Texas, USA). For the analysis of time to WHO stages 2 and 3, only HIV incident cases enrolled within 2 years of the estimated date of seroconversion were included. For the analysis of time to WHO stage 4, CD4 cell counts below 200 cells/μl and death, all HIV incident cases enrolled were included regardless of the time between estimated seroconversion and enrollment.
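For readers who want to reproduce this style of analysis, the same Kaplan–Meier and Cox estimates can be obtained outside Stata. The sketch below uses the Python lifelines package; the file and column names (seroconverters.csv, years_from_sc, died, age_group, sex) are assumptions for illustration, not the study's actual dataset or code.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical analysis file: one row per seroconverter, with follow-up
# time in years from estimated seroconversion and a death indicator.
df = pd.read_csv("seroconverters.csv")

# Kaplan-Meier estimate of survival from seroconversion to death.
kmf = KaplanMeierFitter()
kmf.fit(df["years_from_sc"], event_observed=df["died"])
print(kmf.median_survival_time_)  # cf. the 9.0-year median reported below

# Cox proportional hazards: effect of age at infection, adjusted for sex.
# age_group is coded numerically (0-3) here for brevity; the paper treats
# it as a categorical factor with 13-24 years as the reference group.
cph = CoxPHFitter()
cph.fit(df[["years_from_sc", "died", "age_group", "sex"]],
        duration_col="years_from_sc", event_col="died")
cph.print_summary()  # hazard ratios with 95% CIs
```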
All events after 31 December 2003 were excluded from the analysis. For all analyses, the endpoint was taken as the date first seen with the event at any clinic visit, or the date of censoring. For participants who did not experience the event, the date of the last clinic visit was taken as the censoring date, or 31 December 2003 if the clinic visit was after this date. The time from seroconversion to death was studied in two ways. For the first analysis, we only included HIV incident participants enrolled in the clinical cohort. All the living incident cases were censored at the end of 2003. For the second analysis, we also included HIV incident individuals from the general population who never enrolled in the clinical cohort, in order to assess whether mortality in this group was different from those who enrolled. The date of death for individuals who never enrolled was obtained through the routine annual census and the monthly death reports in the general population as described above. The follow-up time of these individuals was also censored at the end of 2003 if they were still known to be alive. Mortality rates for HIV incident participants of each sex were calculated. The data from the HIV-negative individuals in the general population from which the clinical cohort had been recruited were directly standardized to the age and sex of the HIV-positive individuals at the time of seroconversion. The net mortality attributable to HIV infection was calculated from the lifetable approach of competing risks, using the relationship S_observed(t) = S_net(t) × S_background(t); that is, the net survival of HIV-positive individuals is their observed survival divided by the survival of the age- and sex-standardized HIV-negative population.

All HIV incident cases with at least one CD4 cell count contributed to the analysis of time from seroconversion to a CD4 cell count below 200 cells/μl. Participants who never reached a CD4 cell count below 200 cells/μl were censored at the last follow-up with a CD4 cell count available, or at 31 December 2003 if the CD4 cell count was more than 200 cells/μl after that date. All HIV-infected participants were included in the analysis of time from developing AIDS to death, except participants who presented with an AIDS-defining condition already at enrollment. Time was estimated from first being seen with AIDS to death; participants still alive were censored at the end of 2003. The same analysis was performed for the time from ART eligibility to death, using two definitions of eligibility: 1) having a WHO stage 4 event or a CD4 cell count less than 200 cells/μl (former WHO definition); and 2) having a WHO stage 4 event, or a CD4 cell count less than 200 cells/μl, or a WHO stage 3 event and a CD4 cell count less than 350 cells/μl (current WHO definition).

Participants in the cohort were treated for opportunistic infections and referred to hospital for inpatient care if necessary. They were encouraged to know their HIV status, and free voluntary testing and counselling services were made available in the study villages by the project. ART was made available for all eligible patients from January 2004 onwards. This study was approved by the Science and Ethics Committee of the Uganda Virus Research Institute and the Uganda National Council for Science and Technology.

Characteristics of enrolled participants

By 31 December 2003, 775 participants from the general population cohort had been invited to enroll in the clinical cohort and 605 (78%) of these had enrolled: 108 HIV prevalent and 240 incident (35 of whom were enrolled as HIV negative but seroconverted while in the cohort) cases, as well as 292 HIV-negative controls (including the 35 seroconverters; Fig. 1).
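As a concrete illustration of that lifetable relation, the sketch below computes interval-specific net death probabilities and cumulative net survival. It is a minimal example with made-up interval probabilities, not the study's lifetable; only the relation itself comes from the text above.

```python
import numpy as np

def net_survival(q_obs, q_bg):
    """Lifetable competing-risks adjustment.

    Uses (1 - q_obs) = (1 - q_net) * (1 - q_bg) per interval, i.e. observed
    survival is the product of net (HIV-attributable) survival and
    background survival.
    """
    q_obs, q_bg = np.asarray(q_obs), np.asarray(q_bg)
    q_net = 1.0 - (1.0 - q_obs) / (1.0 - q_bg)  # death risk attributable to HIV
    s_net = np.cumprod(1.0 - q_net)             # cumulative net survival
    return q_net, s_net

# Illustrative yearly death probabilities: rising risk among seroconverters,
# constant background risk of ~12 per 1000 person-years (as in the text).
q_obs = [0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.12, 0.14, 0.16, 0.18]
q_bg = [0.012] * 10
q_net, s_net = net_survival(q_obs, q_bg)

# The median net survival is the first interval at which cumulative net
# survival falls below 0.5; removing background deaths lengthens it, which
# is why the net median exceeds the observed one in the results below.
print(s_net.round(3))
```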
The main reasons for not enrolling were refusal (N = 62, 36%), moving out of the study area (N = 55, 32%) and death before enrollment (N = 43, 25%). Table 1 gives the age distribution of all participants at the time of enrollment. The male incident cases were significantly older at enrollment than the female cases (P < 0.001, Mann–Whitney), reflecting the transmission pattern in this rural African community. The characteristics of the HIV incident cases are shown in Table 2. The 240 incident cases had a median time of 12.7 months (IQR 12, 24.5) between their last negative and first positive HIV test, reflecting the fact that most incident cases were identified through consecutive annual survey rounds in the general population cohort. The date of seroconversion was estimated with similar accuracy for men (median time between last negative and first positive test 12.7 months) and women (median time 12.8 months). The median period between the estimated date of seroconversion and enrollment into the cohort was 11.6 months (IQR 8.1, 21.9). The median follow-up from seroconversion to death or censoring at 31 December 2003 was 5.1 years (IQR 3.1, 8.4). Follow-up in the cohort was high, with 80.5% of scheduled visits attended.

World Health Organization stages 2 and 3

Of the 240 incident cases, 201 were seen in the clinical cohort within 2 years of their estimated date of seroconversion, and this subgroup was used to estimate progression times to WHO stages 2 and 3. These cases had a similar age distribution to the whole group (median age at seroconversion 28.7 years, compared with 28.2 for the whole group). Up to 31 December 2003, 123 participants progressed to WHO stage 2 or higher, and 111 participants progressed to WHO stages 3 or 4. The median time to first symptoms (WHO stage 2 or higher) was 25.8 months (95% CI 22.8–31.3) and the median time to stage 3 was 39.0 months (95% CI 29.2–50.4). There were no sex differences for progression to stage 2 or stage 3 (HR 1.10, P = 0.6 and HR 0.9, P = 0.6, respectively). The criteria used to diagnose progression to WHO stages 2 and 3 are given in Table 2.

AIDS (World Health Organization stage 4) or death

All 239 incident cases who had an initial WHO stage of 3 or less were used to calculate the time to AIDS (WHO stage 4) or death (Fig. 2a). By 31 December 2003, 65 had progressed to AIDS, 35 died without having a documented AIDS-defining event, and 139 were censored because they had not died or progressed to WHO stage 4. The median time from seroconversion to AIDS or death was 7.1 years (95% CI 5.9–8.5). Compared with those infected at 13–24 years of age, there was significantly (P < 0.001) faster progression to AIDS or death, with increased hazard in all older age groups: 25–34 years (HR 1.78, 95% CI 1.04–3.07), 35–44 years (HR 4.27, 95% CI 2.31–7.89) and 45 years or more (HR 5.55, 95% CI 2.82–10.95). After adjusting for age at seroconversion, women had a slightly faster progression to AIDS or death than men (HR 1.48, 95% CI 0.95–2.30, P = 0.09). The most frequently diagnosed conditions for the first stage 4 event were wasting syndrome (36%), candidiasis of the oesophagus (30%), chronic herpes simplex virus infection (14%), extrapulmonary cryptococcal disease (3%), Cryptosporidium diarrhoea (6%), Kaposi sarcoma (8%), non-typhoid Salmonella septicaemia (7%), extrapulmonary tuberculosis (8%) and HIV encephalopathy (2%).
Survival to antiretroviral therapy eligibility

Of the 240 incident cases, 229 were known to be not yet eligible for ART at the time of enrollment (excluded were one participant with WHO stage 4 at enrollment, two with no CD4 cell counts, and eight whose first CD4 cell count was < 200 cells/μl). Of these, 79 (34%) progressed to a CD4 cell count below 200 cells/μl and 60 (26%) had a WHO stage 4 defining event during follow-up, giving a total of 99 (43%) eligible for ART by 31 December 2003. A total of 130 were censored before becoming eligible for ART: 100 at 31 December 2003, 10 at the date of their last CD4 cell count or WHO stage assessment, and 20 who died without being seen to fulfill the ART eligibility criteria. The median time to a CD4 cell count less than 200 cells/μl or WHO stage 4 was 6.2 years (95% CI 5.0, 8.8). Age at infection was significantly associated with faster progression to ART eligibility: the HR for the 25–34 year age group was 1.37 (95% CI 0.83–2.26), for the 35–44 year age group it was 2.71 (95% CI 1.47–5.02) and for the over-45 year age group it was 3.55 (95% CI 1.83–6.89) compared with the youngest age group (P < 0.001). Adding the criterion of WHO stage 3 with a CD4 cell count below 350 cells/μl, 18 more participants (8%) were eligible for ART before 31 December 2003 and 37 (16%) had an earlier date of eligibility. The median time to ART eligibility according to these criteria was 5.1 years (95% CI 4.2–6.2; Fig. 3).

After seroconversion, during a total of 1387 person-years at risk, of which 1199 person-years were observed from the first positive HIV test, 84 incident cases died, giving a crude mortality rate of 70.0 per 1000 person-years. Among HIV-negative participants, 24 died during 1980 person-years, giving a crude mortality rate of 12.1 per 1000 person-years. For incident HIV cases the median survival from seroconversion to death was 9.0 years (95% CI 7.5, 10.6). Removing background mortality (as measured in the HIV-negative population) from the mortality experience of the HIV-positive individuals yielded an estimate for net mortality attributable to HIV that would correspond to a median survival of 10.2 years, an increase of 13% over the observed, gross median survival. Figure 2b shows the cumulative probability of death from seroconversion. Older age at seroconversion was a risk factor for faster progression to death, as demonstrated in Figure 4. The HR for the age group 25–34 years was 1.70 (95% CI 0.98–2.93), 3.77 (95% CI 2.00–7.10) for the age group 35–44 years, and 5.27 (95% CI 2.66–10.43) for age 45 years and more, compared with the age group 13–24 years (P < 0.001).

Sixty-six individuals who seroconverted before 31 December 2003 (33 men, 33 women; median age 36 years for men, 22 years for women) were invited to join the clinical cohort but did not enroll, for various reasons: 16 refused to join, 19 joined after 1 January 2004, 17 died and 14 moved out of the study area. Seventy-three adult seroconverters were identified in the general population cohort but were not invited into the clinical cohort because the period between the last negative and the first positive HIV test exceeded 4 years, and their vital status was obtained from the annual census in the general population. Figure 5 shows that the shorter time to death among those invited but not enrolled was largely the result of deaths in the first 2 years after seroconversion.
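The two eligibility definitions compared above reduce to a few boolean conditions. Below is a hedged sketch of that logic; the function and parameter names are illustrative and not taken from the study's database.

```python
def art_eligible(cd4: int, who_stage: int, definition: str = "current") -> bool:
    """ART eligibility under the former and current WHO criteria used above.

    former:  WHO stage 4, or CD4 < 200 cells/uL
    current: as former, plus WHO stage 3 with CD4 < 350 cells/uL
    """
    former = who_stage == 4 or cd4 < 200
    if definition == "former":
        return former
    return former or (who_stage == 3 and cd4 < 350)

# A participant at WHO stage 3 with CD4 = 300 cells/uL is eligible only
# under the current definition - the group behind the roughly one-year
# earlier median eligibility (5.1 vs 6.2 years) reported above.
assert not art_eligible(300, 3, "former")
assert art_eligible(300, 3, "current")
```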
After adjusting for age, compared with the enrolled group there was a significantly increased hazard in those not enrolled (HR 2.12, 95% CI 1.21–3.69, P = 0.008) and a non-significantly lower hazard in those not invited (HR 0.84, 95% CI 0.42–1.64, P = 0.6).

Survival from AIDS or antiretroviral therapy eligibility

For the 117 (53 prevalent, 64 incident) cases who developed AIDS (WHO stage 4), the median survival from AIDS to death was 8.5 months (95% CI 4.7–12.0). Using the current WHO definition, 174 (117 incident and 57 prevalent cases) HIV-infected participants became eligible for ART (CD4 cell count < 200 cells/μl, or WHO stage 4 event, or WHO stage 3 event and CD4 cell count < 350 cells/μl) before 31 December 2003. Of the 174 eligible for ART, 107 died and 67 were censored at the end of 2003, with a median time to death of 34.7 months (95% CI 27.8–46.0). Using the previous WHO definition (CD4 cell count < 200 cells/μl or WHO stage 4 event), 147 participants (48 prevalent, 99 incident) were eligible for ART before 31 December 2003, and the median survival to death was 25.7 months (95% CI 21.4–30.5). Figure 2c shows the survival after ART eligibility using the current WHO recommendations.

We have provided an update of our results on the natural history of HIV infection in a rural African context. These current estimates take into account a longer follow-up (until 2003) since the last publication, which included data until 2000. HAART has been made available to participants of both the clinical and the general population cohort since January 2004, and these estimates are probably the best available evidence from the African continent. It is unlikely that similar studies will be conducted in the future, as it would be unethical to deny HAART to study participants. Survival from seroconversion in our cohort before the introduction of HAART was similar to what has been described in high income countries before the widespread use of HAART (8.3 to approximately 13 years) [14–17], and we also found that older age at seroconversion was a risk factor for faster progression, as has been reported from a large multicentre study from industrialized countries [18]. Symptom-free survival in our cohort was substantially shorter than described in a cohort of blood donors in Abidjan [8]: 79.3% of the participants in that west African study had no symptoms after 3 years, whereas in our cohort only 42.7% had no symptoms after 3 years. The Abidjan cohort used the Centers for Disease Control classification, whereas we used the WHO classification, and 95% of the subjects in Abidjan were prescribed primary prophylaxis with trimethoprim–sulfamethoxazole (TMS). Although cohort participants were encouraged to learn their HIV results through the village-based counseling offices, only approximately 25% of our cohort participants were aware of their serostatus by 2003, and TMS prophylaxis was only offered as secondary prophylaxis during the observation period, as the official guidelines in Uganda did not recommend the use of TMS until recently. The median age at seroconversion in both cohorts was similar (29 years). The median time to a CD4 cell count below 200 cells/μl or WHO stage 4 in our cohort was slightly lower than observed among Thai sex workers (6.2 years versus 6.9 years).
Data from developed countries showed significant differences between geographical areas: the Tricontinental Seroconverter Study showed median times from seroconversion to a CD4 cell count less than 200 cells/μl varying between 7.1 and 10.2 years across different sites [20]. According to the new WHO criteria, our study participants were found to be eligible for ART more than one year earlier compared with the old WHO eligibility criteria. If this is representative of the situation in other low and middle income countries, more individuals could benefit from ART than reflected in the current estimates of ART need, and more resources will be required to address this need. The survival time from ART eligibility (CD4 cell count below 200 cells/μl or WHO stage 4) in our cohort was over 2 years. This is more than double the time found in another Ugandan cohort of mostly symptomatic HIV patients, who had a median survival of 9 months with a CD4 cell count below 200 cells/μl [19]. The median age at enrollment in that predominantly female seroprevalent cohort was approximately 30 years. All patients were aware of their HIV status and most presented with symptoms suggestive of HIV infection. The survival time in our cohort is comparable with what was found in a cohort in Baltimore, USA, which reported survival times between 600 and 820 days with CD4 cell counts less than 200 cells/μl. Cohorts from low and middle income countries have reported comparable survival times after seroconversion [3,5,7]. Those cohorts consisted of specific groups, e.g. sex workers, army recruits and miners, who tend to be younger than the general population. Five-year survival among Thai female sex workers was 85.9%, 82.3% among Thai army conscripts, and 89% among South African miners, compared with 80.5% in our general population-based cohort.

The survival time for HIV incident cases who were invited but never enrolled in the cohort was shorter than for those who did enroll. Almost 20% of those who never enrolled died within 2 years of seroconversion, indicating possibly very rapid progression among these individuals: it is easy to miss such individuals in similar cohort-based studies, and this might introduce a bias, overestimating median survival and progression times. We therefore thought it important to present the survival analysis including individuals who never enrolled, although we lacked detailed clinical information about them. From interviews with relatives post mortem, however, it seemed that most of those individuals were older and died from what could have been an AIDS-related illness. Only one of the deaths was caused by trauma. Individuals who were never invited into the cohort because they had an interval of more than 4 years between the last negative and the first positive HIV test had a non-significantly longer survival than those who were invited. This indicates that there is a risk of overestimating survival in population-based cohorts with longer periods between consecutive HIV tests.

As we only assessed CD4 cell counts every 6 months, participants could have crossed the thresholds of 200 and 350 cells/μl in the time interval between two measurements. We preferred not to make assumptions on when this occurred, as CD4 cell levels tend to be influenced by physiological and pathological events occurring in the same interval. Consequently, we might have overestimated the time from seroconversion to a defined CD4 cell threshold, and underestimated the time from a defined CD4 cell level to death.
As we took 6-monthly measurements, however, we do not think this error is large. The same applies for WHO staging, which was only recorded at the regular visits every 3 months. Here again, we think the error is probably small, because of the frequency of the measurements. The risk of becoming HIV infected may not be constant over the interval between the last negative and the first positive HIV test. Taking the midpoint of this interval as the estimated date of seroconversion could possibly cause a bias in the estimates of disease progression, especially if the interval is longer than 2 years. Approximately 75% of all incident cases had an interval of up to 2 years, and survival analysis restricted to these incident cases did not change the results (data not shown). This updated analysis of disease progression and survival in a rural African cohort will provide useful information for modelling the HIV epidemic and public health planning. It will also serve as a background to assess the impact of treatment and care programmes in sub-Saharan Africa.

The authors would like to thank all the participants, the clinic and laboratory staff, as well as staff involved in data entry and management. They are grateful for comments on an earlier draft by Shabbar Jaffar. The authors would also like to thank the anonymous reviewers for useful comments on the paper. Sponsorship: This work was supported by a grant from the Medical Research Council, UK. Conflicts of interest: None.

References
1. Crampin AC, Floyd S, Glynn JR, Sibande F, Mulawa D, Nyondo A, et al. Long-term follow-up of HIV-positive and HIV-negative individuals in rural Malawi. AIDS 2002; 16:1545–1550.
2. Anzala OA, Nagelkerke NJ, Bwayo JJ, Holton D, Moses S, Ngugi EN, et al. Rapid progression to disease in African sex workers with human immunodeficiency virus type 1 infection. J Infect Dis 1995; 171:686–689.
3. Kilmarx PH, Limpakarnjanarat K, Saisorn S, Mock PA, Mastro TD. High mortality among women with HIV-1 infection in Thailand. Lancet 2000; 356:770–771.
4. Lavreys L, Baeten JM, Chohan V, McClelland RS, Hassan WM, Richardson BA, et al. Higher set point plasma viral load and more-severe acute HIV type 1 (HIV-1) illness predict mortality among high-risk HIV-1-infected African women. Clin Infect Dis 2006; 42:1333–1339.
5. Rangsin R, Chiu J, Khamboonruang C, Sirisopana N, Eiumtrakul S, Brown AE, et al. The natural history of HIV-1 infection in young Thai men after seroconversion. J Acquir Immune Defic Syndr 2004; 36:622–629.
6. Leroy V, Msellati P, Lepage P, Batungwanayo J, Hitimana DG, Taelman H, et al. Four years of natural history of HIV-1 infection in African women: a prospective cohort study in Kigali (Rwanda), 1988–1993. J Acquir Immune Defic Syndr Hum Retrovirol 1995; 9:415–421.
7. Glynn JR, Sonnenberg P, Nelson G, Bester A, Shearer S, Murray J. Survival from HIV-1 seroconversion in Southern Africa: a retrospective cohort study in nearly 2000 gold-miners over 10 years of follow-up. AIDS 2007; 21:625–632.
8. Salamon R, Marimoutou C, Ekra D, Minga A, Nerrienet E, Huet C, et al. Clinical and biological evolution of HIV-1 seroconverters in Abidjan, Cote d'Ivoire, 1997–2000. J Acquir Immune Defic Syndr 2002; 29:149–157.
9. Morgan D, Mahe C, Mayanja B, Okongo JM, Lubega R, Whitworth JA. HIV-1 infection in rural Africa: is there a difference in median time to AIDS and survival compared with that in industrialized countries? AIDS 2002; 16:597–603.
10. Morgan D, Mahe C, Mayanja B, Whitworth JA. Progression to symptomatic disease in people infected with HIV-1 in rural Uganda: prospective cohort study. BMJ 2002; 324:193–196.
11. Mbulaiteye SM, Mahe C, Whitworth JA, Ruberantwari A, Nakiyingi JS, Ojwiya A, Kamali A. Declining HIV-1 incidence and associated prevalence over 10 years in a rural population in south-west Uganda: a cohort study. Lancet 2002; 360:41–46.
12. Mulder DW, Nunn AJ, Wagner HU, Kamali A, Kengeya-Kayondo JF. HIV-1 incidence and HIV-1-associated mortality in a rural Ugandan population cohort. AIDS 1994; 8:87–92.
13. World Health Organization. Acquired immune deficiency syndrome (AIDS): interim proposal for WHO staging for HIV-1 infection and disease. Wkly Epidemiol Rec.
14. Multicohort Analysis Project Workshop. Part I. Immunologic markers of AIDS progression: consistency across five HIV-infected cohorts.
15. UK Register of HIV Seroconverters Steering Committee. The AIDS incubation period in the UK estimated from a national register of HIV seroconverters.
16. Koblin BA, van Benthem BH, Buchbinder SP, Ren L, Vittinghoff E, Stevens CE, et al. Long-term survival after infection with human immunodeficiency virus type 1 (HIV-1) among homosexual men in hepatitis B vaccine trial cohorts in Amsterdam, New York City, and San Francisco, 1978–1995. Am J Epidemiol 1999; 150:1026–1030.
17. Veugelers PJ, Page KA, Tindall B, Schechter MT, Moss AR, Winkelstein WW Jr, et al. Determinants of HIV disease progression among homosexual men registered in the Tricontinental Seroconverter Study. Am J Epidemiol 1994; 140:747–758.
18. Collaborative Group on AIDS Incubation and HIV Survival including the CASCADE EU Concerted Action (Concerted Action on SeroConversion to AIDS and Death in Europe). Time from HIV-1 seroconversion to AIDS and death before widespread use of highly active antiretroviral therapy: a collaborative re-analysis.
19. French N, Mujugira A, Nakiyingi J, Mulder D, Janoff EN, Gilks CF. Immunologic and clinical stages in HIV-1-infected Ugandan adults are comparable and provide no evidence of rapid progression but poor survival with advanced disease. J Acquir Immune Defic Syndr 1999; 22:509–516.
20. Veugelers PJ, Schechter MT, Tindall B, Moss AR, Page KA, Craib KJ, et al. Differences in time from HIV seroconversion to CD4+ lymphocyte end-points and AIDS in cohorts of homosexual men. AIDS 1993; 7:1325–1329.
Picture this: You’re sitting in your backyard or you’re driving to work, and you suddenly spot a flash of red right before your eyes. You’re hardly an ornithologist (an expert on birds), but the glimpse of a red-headed bird has piqued your interest. If you’re lucky enough to have seen more features of the bird, such as the color of its body or the shape of its head, you’re probably wondering “what birds have red heads?”. In North America alone, there are said to be over 2,000 bird species. A bunch of these species exhibit some type of red or orange coloration on their heads and bodies, with some featuring a more pinkish hue. Regardless of the strength of the red, all of these birds come under the category of being a “red head” bird. Here is the ultimate guide to what birds have red heads, including pictures and profiles of each species!

Often referred to as the “red-headed finch” (which is an entirely different finch species found in Africa), the house finch is probably the most common red-headed bird to find in your backyard. Native to western North America, these birds are found in backyards, suburban areas, towns, and farms across the country. With a wingspan of 8-10” and an average length of 5-6”, this is a moderately sized finch. The reason house finches are often referred to as “red-headed finches” is because of, you guessed it, their distinctive reddish heads. The red color often looks slightly more orange or pink, but it’s still enough to be classified as a red-headed bird. Interestingly, as with most bird species, the male is generally more colorful than the female. The rest of their bodies are covered in dull-brown, often grayish, feathers with occasional streaking down the breast. As well as their red heads, the house finch is known for its distinctively chirpy call. If you have a bird feeder, you’ll be well-acquainted with these noises. Speaking of feeding, the house finch’s diet consists of berries, grains, and seeds found on the ground or amongst vegetation. If you want to lure house finches to your backyard, fill your bird feeders with nyjer or sunflower seeds. While this might sound endearing to backyard owners, farmers find house finches rather annoying as they tend to feast on (and thus damage) fruit trees. House finches build their nests in small cavities, including underneath roofs, inside hanging plants, and even in hanging outdoor decorations. These nests are small, cup-shaped, and made of twigs and random debris. The female will generally lay two clutches a year, with each clutch numbering 2 to 6 eggs.

If you saw a red-headed bird while walking through a forest or woodland area, it might have been a pine grosbeak! Pine grosbeaks are members of the finch family found in coniferous woods in Alaska, Canada, the western mountains of the United States, and also in parts of Eurasia. These birds are also found amongst fruit crops throughout the year in search of an abundance of their favorite foods, as the pine grosbeak is a frugivore. There are a few variations of coloration in the pine grosbeak, but generally speaking, the male will exhibit a red head, back, and rump, while the female shows yellow in the corresponding areas. The rest of the pine grosbeak’s body is predominantly gray with black wings and a black tail. Measuring 7.9-10” in length and with a wingspan of 13”, the pine grosbeak is one of the largest finch species. As the pine grosbeak is most commonly found in forests and woodlands, you’re most likely to spot one of these red-headed birds in those areas.
However, if you own farmland filled with fruit trees or you live near an orchard, you might be lucky enough to see one or two of these birds. Pine grosbeaks primarily forage for seeds and insects in their forest habitats, but as they are frugivores, they will often seek out an abundance of fruit. It's fairly rare to see young pine grosbeaks, as the parents nest in the forks of conifer trees, hidden from human populations and other predators such as wolves and lynxes – though they can't hide from hawks. The female will lay one clutch a year of 2-4 eggs.

Another red-headed bird in the finch family is the Cassin's finch. Named after John Cassin, a curator at the Philadelphia Academy of Natural Sciences, the male Cassin's finch is most distinctive for its raspberry-red forehead and pink-brushed face. Native to North America, the Cassin's finch is most likely to be spotted in its favorite habitat – coniferous forests in mountainous regions. These habitats range across western North America, specifically New Mexico, Arizona, and Southern California. During the winter, they move down to lower-elevation forests. As well as their red heads, male Cassin's finches are predominantly brown with a light brushing of pink throughout their feathers. The females, however, are light brown all over with darker brown streaks across the body. The average length of a Cassin's finch is 6.3" and the average wingspan is 9.8-10.6".

Cassin's finches are foragers that scavenge around the forest floor looking for seeds, berries, buds, and insects. As they have an abundance of food sources in their coniferous forest habitats, these birds are unlikely to be found on bird feeders in residential areas. However, if you happen to live near these forests, you might be lucky enough to catch a rare sighting of these birds as they move to lower elevations during winter. If you do, you'll hear their distinctively sweet song, whose notes are far less harsh than those of the house finch. These birds breed in coniferous forests at elevations of up to 10,000 feet, with their nests deliberately perched on the forks of trees to protect their young from ground predators such as wolves and lynxes. Unfortunately, little is known about the species' breeding habits and clutch sizes due to the inaccessibility of their habitats.

Another red-headed member of the finch family, the purple finch is, despite the name, not actually purple. Instead, male purple finches are characterized by their distinctive pinkish-red heads and breasts. This coloration blends into brown feathers throughout the body; the females, however, don't exhibit such colors. The purple finch was once famously described as a "sparrow dipped in raspberry juice" by American naturalist and ornithologist Roger Tory Peterson. The habitat of a purple finch includes coniferous and mixed forests in the northeastern United States and Canada. These birds are also often seen in woodland areas along the Pacific Coast. Due to their bright coloration (for the males at least), they are usually quite easy to spot amongst the forest greenery. During winter, it's common for the purple finch to descend to fields and farmland at lower elevations to source a mixture of foods, including buds, berries, seeds, and insects. If you live along the Pacific Coast, you'll probably see a purple finch or two on the edge of wooded areas, shrubland, farmland, and even your backyard if you have a bird feeder. The most distinctive feature of a purple finch is its call.
There are some regional differences in their call, as the purple finch along the Pacific Coast has a faster song than the eastern purple finch. The general song sounds like three warbled notes followed by two short ones. Purple finches don't generally nest in densely populated areas, as they prefer to nest in lowland forests and woodlands; however, they can often be found nesting in quiet rural residential areas. The nest is shaped like a cup and made of twigs and debris, and the female will have 1-2 broods a year of 2-7 eggs.

Also known as the common crossbill in Europe, the red crossbill is a unique bird in the finch family. The most distinctive feature of a red crossbill is its beak, whose mandibles cross over at the tips – hence the name! The crossed mandibles are an adaptation that allows the bird to pry apart the scales of conifer cones and extract the seeds. The bigger the crossed bill, the larger the cones it can extract seeds from. Again, as their name suggests, the red crossbill is a brightly colored bird with a red or orange head and chest that blends into the brown wings. However, this red coloration is only exhibited in males, as females tend to be more yellow or green in color. The average length of a red crossbill is 7.8" and the average wingspan is 11".

The habitats of red crossbills are coniferous forests in Alaska, Canada, some New England states, and through the mountains of Mexico into Middle America. Their range overlaps with the two-barred crossbill, which shares the same crossed-mandible feature. These birds aren't migratory, but researchers have tracked their movements according to the scarcity and availability of conifer seeds, the bird's main food source. It's also common for red crossbills in the west to gather around bird feeders filled with sunflower seeds. The red crossbill will typically nest in conifer forests filled with an abundance of conifer seeds between June and September. Depending on the availability of food, red crossbills can lay between 2 and 4 clutches a year with around 2 to 6 eggs per clutch. Interestingly, male red crossbills are the most active sex of the species. The males typically perch on the tops of trees, turning their heads quickly in a jolted action, looking for predators and making the appropriate calls to warn their female partners.

The pyrrhuloxia, also known as the desert cardinal (which is far easier to pronounce), is a North American songbird and one of the three birds in the Cardinalis genus (the cardinals). This is a medium-sized bird that sits between a sparrow and a robin in size, with an average length of 8.3" and a wingspan of 10-12". The pyrrhuloxia is most recognizable for its distinctive appearance. While the majority of the body is a brownish-gray, the breast, tail, and face of the desert cardinal are a contrastingly bright red. These birds exhibit a mohawk-like crest at the top of their heads, which is also bright red. As with most birds, the female possesses less coloration than the male – though both sexes exhibit yellowish stout bills. Desert cardinals are distributed across desert scrub and mesquite thickets – hence the name – throughout Arizona, Texas, New Mexico, and the edges of woodlands in Mexico. Due to its ability to live in such dry terrain, the pyrrhuloxia is a surprisingly hardy bird when it comes to water requirements. It will only move short distances in the winter to find a better availability of water.
The diet of a desert cardinal is based on its habitat and includes insects, seeds, and fruits found in desert trees and cactus gardens. Because the species also eats the weevils and cotton worms that destroy cotton, the desert cardinal is considered a benefit to cotton fields. The pyrrhuloxia will nest in cactus gardens and mesquite thickets between March and August in egg-shaped nests made of twigs, grass, and bark. The male will become highly territorial and defensive while the female lays a clutch of 2-4 eggs.

The northern cardinal doesn't just have a red head – these birds are red all over! Well, the males are at least. Male northern cardinals feature bright red coloring all over their bodies (including the bill), with a distinctive black mask that covers their eyes and the top of their chests. The red color fades into a dull, dark shade towards the wings. The females, however, are predominantly fawn with hints of gray and red amongst their wings. They share similar face markings, but the female's are far less pronounced than the male's. These songbirds are medium-sized, with an average length of 8.3-9.3" and an average wingspan of 9.8-12.2". The male is generally bigger than the female.

Northern cardinals are found across southeastern Canada, the eastern states in the US, and all the way down to Mexico, Guatemala, and Belize. Due to the male's fabulous coloration, the species has also been introduced to Hawaii and Bermuda. This is a fairly adaptable species that can be found in woodlands, forests, wetlands, and backyards. As a result of this wide distribution, northern cardinals feast on a range of foods including fruits, weeds, seeds, and grains. They also love to eat insects, oats, and sunflower seeds, and they often drink maple sap. Like the desert cardinal, northern cardinals are territorial songbirds who sing in a loud, clear whistle as a sign of defense. Females are similarly vocal, and both sexes will sing throughout the year. The female will build a nest from materials sourced by the male before she lays 3-4 eggs. During this time, the male will keep an eye out for predators such as bald eagles, owls, golden eagles, hawks, falcons, snakes, squirrels, foxes, and more.

Like the northern cardinal, scarlet tanagers are known for bright red plumage that covers their whole bodies, not just their heads. This medium-sized songbird was once considered a tanager, but has recently been made a part of the cardinal family due to its similar vocalizations and plumage – though it doesn't share the cardinals' thick bill. The scarlet tanager, as we said, is red all over with distinctive black wings and tail. The females, however, are predominantly olive-toned with yellowish underparts. Interestingly, the males molt into a winter plumage that looks fairly similar to the female's, except the wings and tail are darker. The average length of a scarlet tanager is 6.3-7.5" with a wingspan of 9.8-11.8", making it a mid-sized songbird. This species is found across North America in deciduous forests, especially forests with oak trees. Scarlet tanagers are also known to occur in suburban areas, farmland, parks, and cemeteries. In winter, they generally migrate down to northwestern South America, starting their journey through Central America in October. Scarlet tanagers are known for a hunting technique called "sallying", wherein they typically catch insects in the air rather than on the ground.
Once they've caught a flying insect like a wasp or hornet, they will scrape it against a tree to remove the stinger. However, they will also take food from the ground, including termites, earthworms, snails, and spiders. These birds also feast on a variety of fruits and berries when the insect population is low – which is why they migrate south in winter to warmer countries, where the insects also migrate. The breeding season for scarlet tanagers is between May and June, when the male and female will build a nest for the female to lay a clutch of around 4 eggs.

Another once-considered tanager, the summer tanager is now formally classified in the cardinal family alongside the scarlet tanager due to the species' cardinal-like plumage and vocalizations. These birds are, in fact, very similar to the scarlet tanager, in that the male is red all over the body and the female is far paler. Unlike the scarlet tanager, however, the male summer tanager does not possess black wings or tail, nor does it have a winter plumage. Instead, its feathers remain a bright crimson red throughout the year. The summer tanager is most commonly found in deciduous forests, specifically oak forests, in the southern United States, going as far north as Iowa. In the western part of their range, this species is often found in cottonwoods, as it likes to feast on weevils and cotton worms. During winter, summer tanagers migrate further south to Central America and the north of South America in search of warmer temperatures and better access to food.

Speaking of food, the summer tanager's diet largely consists of spiders, moths, beetles, caterpillars, and a range of other insects. Like the scarlet tanager, summer tanagers often hunt with the "sallying" method, wherein they catch flying stinging insects in the air. The birds will then remove the stinger against a tree before consumption. Summer tanagers also enjoy eating fruit when there is a scarcity of insects, which is why they are often drawn to residential areas and farmland. Summer tanagers are monogamous birds that brood once or twice a year in a flimsy, grass-made nest about 10 meters above the ground. The female will generally lay between 3 and 5 eggs. While the female incubates and feeds the chicks, the male will spend this time preening himself and ensuring his feathers are a splendid red.

Alongside the summer tanager and the scarlet tanager is the western tanager, which is now also classified within the cardinal family. While not as brightly colored as the former tanagers, the western tanager exhibits a characteristic red face alongside yellow underparts with a black back, wings, and tail. The female western tanager is predominantly olive-green across the body with a yellow face and darker wings and tail. With an average length of 6.3-7.5" and a wingspan of 11.5", the western tanager is considered a medium-sized songbird. This species is distributed amongst forests along the western coast of the United States, from Alaska to Baja California. It's also common to see western tanagers in southwestern Canada, Texas, and parts of South Dakota. They generally prefer coniferous forests, but they're surprisingly not that fussy about the types of trees they reside in. During winter, the western tanager will migrate south to pine-oak woodlands in Middle America. People often see western tanagers during the migratory period as the birds pass through open countryside, migrating either alone or in groups of up to 30.
Like the summer tanager and scarlet tanager, the western tanager is mostly insectivorous and will eat anything from snails to wasps. They will also eat fruits in winter when there is a scarcity of insects. However, as with their habitats, this species isn't overly picky, which means they can often be found pecking away at bird feeders in backyards (provided the area isn't densely populated by humans). During the breeding season, western tanagers stick to mixed and conifer forests to provide some protection from predators. The clutch size is generally between 3 and 5 eggs.

Named after the Latin word "pileatus", meaning "capped", the pileated woodpecker is most notably recognized for its distinctive red crest. This is currently the largest woodpecker species native to and found in North America, measuring 16-19" long and with an average wingspan of 26-30". There are currently two subspecies of the pileated woodpecker, with the northern subspecies being larger than the southern subspecies. The pileated woodpecker, as mentioned before, is most notable for the red crest on its head. Other than that, the bird is mostly black and white with distinctive striped markings on its face. Interestingly, the male and female pileated woodpecker look very similar, except the female's markings are generally more defined.

This species is found in forests across Canada, the eastern United States, and even parts of the Pacific Coast. They aren't too fussy about what type of forest they live in, just as long as the forests are mature and densely wooded. While most birds are negatively affected by deforestation, efforts to remove honeysuckle and buckthorn (two invasive species) somewhat benefit the pileated woodpecker, as they allow the birds to easily locate the trees they require. The diet of the pileated woodpecker is primarily insectivorous; they mostly eat beetle larvae and ants found in and on trees. This species is most famous for the classic woodpecker motion of drilling a hole into a tree to source insects, specifically ant colonies. Pileated woodpeckers will also eat fruit, berries, and nuts when insects aren't in abundance. Male pileated woodpeckers will also drill holes into dead trees to make their nests in order to woo a female. When successful, the female will lay 3 to 5 eggs inside the tree, and the parents will share the responsibility of incubation. Unlike most bird species, these woodpeckers never return to the same nest.

Red-headed woodpeckers exhibit a striking appearance. As their name suggests, the most distinctive feature of this species is their prominent red heads, contrasting with a black back and tail and white underparts. Unlike most bird species on this list, the male and female are sexually monomorphic, meaning they exhibit the same coloration and pattern. This is a small to medium-sized bird, averaging 7.5-9.8" in length with a wingspan of 16.7". The red-headed woodpecker is native to temperate North America, residing mostly in open forests and pine savannas. These woodpeckers generally don't live in densely wooded areas, which is why they are also often found in agricultural tree rows and on the timber in beaver dams. However, they mostly require trees that are tall enough to drill into with their pointed beaks. As for their diet, red-headed woodpeckers are very adaptable in how they eat. These birds are known to catch insects in mid-air, as well as forage for insects within trees and eat fruits and nuts.
In some cases, these omnivorous birds will eat small rodents and unattended eggs from other birds. It's also very common for this species to store food within the tight crevices of trees – including live insects like grasshoppers, wedged into spaces so small that they cannot escape. Like the pileated woodpecker, the red-headed woodpecker will nest in the cavity of a dead tree or even on a tall utility pole to protect its young from potential predators. Considering this species also hunts for unattended eggs, it has adapted to become highly territorial around its own nest. At the beginning of May, the female will lay a clutch of 4 to 7 eggs, and the pair will often raise a second brood in the same season.

Not to be confused with the red-headed woodpecker, the red-breasted sapsucker is another species of woodpecker found on the west coast of North America. As its name suggests, this species is distinctive for its red breast and head, but it is often misidentified as the red-headed woodpecker, which bears similar coloration. However, the red-breasted sapsucker exhibits a white belly and rump along with black wings that are speckled with white. Not to offend the red-breasted sapsucker, but it kind of looks like the scruffier version of a red-headed woodpecker. This species is most commonly found in both coniferous and deciduous forests, often at high elevations. Other common habitats include orchards, backyards, and power lines. As long as it is near tall trees for sucking sap, the red-breasted sapsucker is an adaptable bird.

These birds are distributed from southwestern Alaska all the way down the west coast to California. During winter, the red-breasted sapsuckers in the north migrate down to the south, and the birds in the south tend to migrate short distances to lower elevations. The winter range extends from Baja California into Mexico. As their name suggests, red-breasted sapsuckers primarily feed on sap from trees. Their bills and tongues are specifically adapted to drill multiple holes into trees and then collect the sap with the bristle-like hairs on the tongue. Insects inside the tree are also consumed by the sapsuckers, and the holes they leave make for an easy food source for other birds. Like most other woodpeckers, this species will nest in the cavity of a dead tree between April and May, wherein the female will lay between 4-7 eggs.

Another member of the woodpecker family, the red-bellied woodpecker is closely related to and often confused with the red-headed woodpecker, but in fact looks very different. Despite its name, the red-bellied woodpecker doesn't obviously possess a red belly – the most noticeable red part of the bird is the back of its head. The rest of the body is predominantly white and gray with black and white barring on the wings. In most cases, there is the slightest tinge of red on the belly of this bird, but it's not usually visible. This species breeds and resides across the eastern United States, ranging as far north as southern Canada and as far south as Florida. Red-bellied woodpeckers aren't fussy about where they live, provided they have enough trees to drill holes for nesting and sourcing food. In some areas of high deforestation, the species has been forced to adapt to living in backyards filled with trees. Speaking of food, the diet of a red-bellied woodpecker is omnivorous. They mostly feast on insects found inside and on trees, but they'll also catch insects mid-flight and eat a range of fruits, nuts, and seeds.
Red-bellied woodpeckers that live near backyards love to eat sunflower seeds and peanuts from bird feeders. Red-bellied woodpeckers are notoriously noisy birds. Breeding activities and rituals begin with a series of drumming patterns, wherein the female will be particularly noisy to entice a male. They'll even drum against aluminum roofs in urban settings. When it comes to breeding, this species will create a nesting cavity in a dead tree between April and May, and the pair will only brood once a year.

Acorn woodpeckers are most readily identified by their distinctive appearance. The body of these birds is predominantly black with white-speckled underparts and white markings around the face, but the standout feature is the bizarrely bright red cap at the top of the head. The females do not possess this red cap. This is a medium-sized woodpecker with an average length of 8.3" and a wingspan of 14-17". The habitat of an acorn woodpecker is predominantly wooded areas (specifically those filled with oak trees) located near the coastal areas of California, Oregon, and the southwestern United States. The range also extends down to parts of Central America. Wherever there is an abundance of pine-oak trees, you'll probably find evidence of an acorn woodpecker.

As their name suggests, the diet of an acorn woodpecker largely consists of acorns, which is why the species is so prominent in California during fall, when there is an abundance of acorns. Once they have collected acorns, they will drill holes into trees to store them for eating in winter. Like most woodpeckers, this species is also known to sally for insects as well as feast on fruits, sap, and other nuts. While acorn woodpeckers are monogamous breeders, they will often commit to breeding collectives, wherein they share the roles of parenthood with other pairings (kind of like a natural babysitting service). This is because the species likes to live in large groups, especially with members of their family – although incest does not occur. It's hard to say how many eggs each female will lay, as several females will often lay their eggs together in one nest.

The vermilion flycatcher belongs to the tyrant flycatchers, the largest family of birds in the world. A small songbird, the vermilion flycatcher deserves its place on this list for the vivid red coloration on its head, chest, and underparts. The wings and tail are brownish-gray, creating a striking contrast. Female vermilion flycatchers, however, are completely grayish-brown with a slight blush of pink on their underparts. This is a small species, with an average length of 5.1-5.5" and a wingspan of 9.4-9.8". The distribution of the vermilion flycatcher is predominantly in Mexico, spanning northwards to the southwestern United States and even reaching as far as Canada. Birds further south, particularly in Central America, will migrate to the Brazilian Amazon, whereas birds in the northern part of the range generally only migrate short distances. This species prefers areas of open land, usually near water, such as scrub, agricultural farmland, sparse trees, and riparian woodlands. The vermilion flycatcher, as the name suggests, is an opportunistic feeder that will often sally from a perch to catch insects mid-air. As opportunistic feeders, they will eat just about anything, including fruit, nuts, and even small fish. When it comes to breeding, the vermilion flycatcher is a socially monogamous species, but will often mate outside the pairing.
Unlike most other bird species, these birds will also share nests with other pairings. The nests are highly insulated with lichen, hair, fur, and even spider webs. Males become particularly territorial and aggressive during the breeding season, as the female lays between 2-4 eggs.

Last but not least, the Anna's hummingbird is certainly one of the more unique red-headed birds. At the beginning of the 20th century, this species bred only in Baja California and southern California; its range has since expanded to the whole of California, Oregon, and Arizona. Anna's hummingbirds, as with most hummingbirds, are incredibly small, with an average length of 3.9-4.3" and a wingspan of 4.7". Interestingly, only the male Anna's hummingbird earns its place on our list, because while the male features an iridescent red crown and gorget, the female exhibits a dull green crown instead. As these feathers are iridescent, the red often looks more pink or purple depending on the movement of the feathers.

The diet of an Anna's hummingbird consists mostly of insects and the nectar of flowers, which they consume thanks to their long, sharply pointed bills. Despite their size, this species can be particularly feisty around other hummingbird species that threaten to take their supply of nectar. When it comes to reproduction, the Anna's hummingbird will breed in shrubby areas, mountain meadows, and open woodlands across its range. Unlike most bird species, the female builds the nest and takes care of her young without any help from the male. The male's only contribution is copulation, which he courts for through aerial displays and a series of explosive calls and squeaks.
On November 3, I attended the annual meeting of the Friends of Plant Conservation. This small — but surprisingly effective — North Carolina nonprofit organization was formed to support a tiny program in NC state government's Department of Agriculture and Consumer Services called the NC Plant Conservation Program. The mission of that state government program is "to conserve the native plant species of North Carolina in their natural habitats, now and for future generations." That's a tall order for a large, biologically diverse state like North Carolina, even if efforts were well-funded. As you might have guessed, they are not. Budgets are tight; staffing is equally tiny, which is why the Friends of Plant Conservation was founded to help support the efforts of the NC Plant Conservation Program any way it can. Visit the links to their Web sites to learn all the details about what both organizations do. I have always been impressed by how much they continue to accomplish, and most especially by their unwavering enthusiasm for their work.

These groups are attempting to create and maintain preserves that will protect healthy populations of plant species identified by experts as threatened or endangered. The locations of these preserves are not advertised, nor are they easily accessible by the public; these rare resources flourish best when undisturbed. At their annual meetings, the Friends of Plant Conservation receive updates on the activities of their group and the NC Plant Conservation Program. This year, those updates were preceded by a lecture by Wesley Knapp, Western Region Ecologist/Botanist for the North Carolina Natural Heritage Program. This is the group in NC state government tasked with compiling and maintaining information on the status of rare species (flora and fauna) and natural communities in North Carolina. Their group identifies the most endangered plant species that the NC Plant Conservation Program then attempts to protect, with the help of the Friends of Plant Conservation.

Mr. Knapp gave a fascinating presentation on extinct plants. These were not tales — at least not mostly — of long-lost plants. Instead, he focused on the continent-wide collaboration he is coordinating with his fellow botanists to attempt to figure out which plants in North America north of Mexico are extinct today. Surprisingly — at least it was surprising to me — botanists don't actually have a good handle on this important information, but they've realized that between climate change and rampant habitat destruction, species extinction rates are rapidly increasing. So botanists across North America are attempting to compile lists for their regions of expertise that represent the best information they have on which plant species are officially extinct. Most extinct plants are fairly obscure and possibly unimpressive — at least to the average citizen. An exception is Franklinia alatamaha; you can find my post on its story here. Mr. Knapp used Florida to illustrate the urgency of the collaboration he is coordinating. This biologically diverse state contains a number of unique plant species that will likely be obliterated by sea level rise over the next 100 years. From that factor alone, the experts believe 29 plant species endemic to Florida will become extinct within that period.
It behooves botanists to create reliable lists of which species are and are not still with us, so that we can better monitor the expected — and likely dramatic — increase in extinction rates. How does this relate to the work of the Friends of Plant Conservation? One of the strategies for battling rising extinction rates is the creation of preserves, conservation gardens, and seed banks where these species can be protected. It is true that in the first two cases, we are coming close to what Joni Mitchell described as "tree museums," where these plants will continue to exist — but in the case of conservation gardens, not in the locations where they evolved. The preserves created and maintained by the NC Plant Conservation Program protect naturally occurring populations of threatened plant species, which is preferable but, in Florida's case for example, not always possible. Seed banks are another important tool, where seeds of a diverse array of species are stored; perhaps in the future, they can be used to re-introduce species to stabilized habitats. I found Mr. Knapp's lecture to be heartening, because I now know that botanists across the continent are working hard to quantify what we have and what we are losing — and disheartening, because we are losing so much so quickly.

It was thus a bit of a relief to listen to the next speaker — Ms. Lesley Starke, NC Plant Conservation Plant Ecologist — who updated attendees on the status of threatened North Carolina plant species and the preserves that protect them. She told us that her group has targeted 486 plant species in North Carolina as significantly rare. Fortunately, some of these species occur in the same habitats, so by preserving habitat, multiple rare species are preserved. Right now, 24 preserves scattered across the state are being protected and maintained by Ms. Starke's office, with help from the Friends of Plant Conservation. Two more preserves will be in operation very soon. The 24 current preserves comprise about 14,000 acres and protect 75 plant species. When the additional two preserves are operational, 83 plant species will be protected. Ms. Starke's group works tirelessly, but the math behind their problem is not on their side.

She did share one exciting story about how they are successfully protecting increasingly rare populations of native wild ginseng (Panax quinquefolius). As you may know, prices for the roots of this species are so high that poachers are a significant threat to populations of this plant on public lands, where harvesting is against the law. You can read more about this issue here. A scientist working with Ms. Starke has developed a chemical dye that is used to label ginseng roots without harming the plants. The dye is invisible to the naked eye, but readily identifiable under ultraviolet light, and it persists indefinitely. Most important, the dye is being tweaked so that distinct populations of ginseng each have their own distinct and readily identifiable dye label. For several years now, teams of volunteers have been marking populations of wild ginseng growing on public lands and preserves with unique dye formulations. Before wild ginseng can be sold, it must be assessed by government officials. Now, with a simple UV scan, they can detect whether the roots being assessed were illegally harvested. This innovative system is so reliable that 100% of criminal prosecutions brought against illegal harvesters who tried to sell dye-marked roots have been successful — a big win for the good guys!
These first two presentations were quite lengthy, and when Ms. Starke finished, the time allotted for the entire meeting had been expended. I had other obligations that afternoon and was forced to leave before the meeting concluded with another speaker from the NC Plant Conservation Program, an update on the status and future direction of the Friends of Plant Conservation by its current president, and an award presentation — all of which I was sorry to miss. I hope that at least the president's presentation will appear on the group's Web site, so that I can learn about its future plans. I encourage all lovers of native plants, especially those in North Carolina, to consider joining the Friends of Plant Conservation. This group has an impressive knack for stretching its nonprofit dollars in ways that maximize benefits for threatened plants. Volunteer opportunities abound; the group is always looking for local folks to keep watch over its preserves and to assist on work days with tasks like invasive species removal. As a perk, members are given opportunities to tour these special, protected places — usually when the rare species are in bloom.

Most avid gardeners understand that their home landscapes don't grow in a botanical vacuum. Our little tomato patches, rose gardens, and cottage gardens all grow within a larger context. In my case, that context is the southeastern piedmont region of the United States. It's easy to forget this larger context when we are battling aphids on our tomatoes or worrying about black spots on our rose leaves, but it's important not to forget. I was reminded of the interconnectedness of the natural world and the increasing fragility of those connections when I recently attended the annual meeting of the NC Friends of Plant Conservation (FOPC). This small nonprofit organization was created to help support the work of the NC Plant Conservation Program (NCPCP), a tiny NC government group charged with conserving native plant species in their native habitats, now and for future generations. David Welch, the Administrator of this NC program, says his group is one of the only ones of its kind in the country targeting the preservation of rare plant species. He says, "We're breaking new ground, setting the standards in this field."

This year's meeting of the Friends of Plant Conservation focused on the preservation status of rare plant species native to the piedmont region of North Carolina. You should visit the Web sites of the FOPC and NCPCP for all the details, but as I understand it, the ultimate goal of the NCPCP is to establish two preserves for each plant species on their list. They use data from the NC Natural Heritage Program to identify which plant species are most imperiled, and to learn where they are still known to exist. The NCPCP divides my state into four regions: mountains, piedmont, inner coastal plain, and outer coastal plain. Currently, the NCPCP has 419 plant species on their list of plants they need to preserve; 21 of these are on the federal protection list. That breaks down to 177 mountain species, 87 piedmont species, 92 inner coastal plain species, and 157 outer coastal plain species. For those who are counting, that adds up to more than 419, because some of these rare species occur in more than one geographic region of NC. At best, the NCPCP is managing to create one preserve a year. At the rate they're going, many of the endangered species will likely be gone before the NCPCP can protect them.
The FOPC is trying to help accelerate preserve creation by soliciting funds from the public, but they are a tiny, mostly unknown nonprofit. They need the help of every North Carolina lover of the natural world, which is why I'm writing about them today. Early in this century, a number of federal and state programs existed that granted funds to organizations like the FOPC and NCPCP to enable them to do their work. But I learned at this meeting from Jason Walser, Executive Director of The Land Trust for Central North Carolina, that only 10% of the funding once available for land protection remains available today. Only ten percent!

Why should we gardeners care about preserving rare species? I can think of several reasons. First, as lovers of beauty and appreciators of the gifts plants bestow on us, we value the exquisite beauty of all plants, especially the rare ones. From an ecological perspective, rare and endangered species are the proverbial canaries in coal mines. Before the days of oxygen sensors in coal mines, miners carried canaries with them, because the birds were more sensitive to low oxygen levels than humans. If the canaries suddenly keeled over, the miners knew they had only minutes to evacuate the mine before they too died. The demise of rare plants usually points to environmental degradation. Factors such as pollution, habitat destruction via land clearing, habitat fragmentation, and the introduction of invasive non-native species are destroying the special environments that shelter these species. When they start disappearing, we know we are losing pieces of our ecosystems. No one knows how many links in the chain can disappear before the entire ecosystem fails. Personally, I don't want to find out.

As our native ecosystems are degraded, their health declines. When native plant species disappear, the native animals that need them — insects, arachnids, reptiles, amphibians, and mammals — also disappear. As gardeners, we should be paying attention, because this will affect our home landscapes. Butterfly gardens won't get many visitors if the native plants their larvae require are gone. Other native pollinators — from mason bees to bumblebees to many species of wasps, flies, and beetles — require native ecosystems for their reproductive cycles. Without our native pollinators, fruit and vegetable production will decline as flowers fade without being pollinated. It is in the best interests of everyone who enjoys the natural world and/or likes to eat fruits and vegetables to start paying attention to what we are losing at an increasingly rapid pace.

Now that I know how few grant funding sources remain for the work of preserving and protecting our native ecosystems, I feel obliged to call upon my fellow plant-loving gardeners to step into the void. As we approach the traditional season of giving, I'm asking that you set aside a few dollars to give to one of the many struggling nonprofit groups trying to preserve as many links in the chains of our ecosystems as possible. I'm starting with North Carolina groups, because that's where I live. I'll be featuring some of the ones I support in the coming weeks, beginning today with the NC Friends of Plant Conservation. You can get a sense of the kind of plants they're protecting from their Web site, and from this blog by Rob Evans, Plant Ecologist with the NCPCP. Even small donations can make a big difference. As you can see from this page, even $25.00 is enough to pay for essential tools they need to protect the preserves.
We protect ourselves, our gardens, and those who come after us when we protect our native ecosystems. This year, please consider donating the money you were going to spend on a new plant or gardening tool for your yard to one of the many conservation nonprofit organizations valiantly working to protect us all.

Humanity worldwide loses a piece of itself every time we lose more of the natural world that nurtures and protects us. When we destroy the natural world, we lose pieces of our soul — the part of us that thrives on the beauty of a cool mountain breeze kissing our faces, the melodic chatter of a clear-running stream, and the exquisite call of a Wood Thrush echoing through a healthy forest. Our hearts are so much smaller without our connection to the beauty of the natural world. In the southeastern Piedmont region of NC where I live, the natural world is under assault every hour of every day. The population of my region is soaring, mostly due to the arrival of many new residents from other parts of the US and the world. As people move in, the forests I grew up with are disappearing. The dwindling patches left are degrading rapidly, due in large part to the invasion of an increasing number of invasive non-native species of plants, animals — especially devastatingly damaging insects — and diseases.

One casualty of this urbanizing landscape — throughout the US — is the Monarch butterfly. My generation grew up knowing this beautiful creature — one of the most recognizable species of butterflies in North America. In school, we learned about their life cycle, admired their emerald green chrysalises, and marveled at their annual migrations to Mexico. Every gardener who plants with butterflies in mind knows that species of milkweed are the only plants that Monarch caterpillars will eat, so we tuck them into our yards to ensure Monarch visits. However, in recent years — and most especially this year — our milkweeds have gone uneaten by the colorful Monarch caterpillars. In my yard, I've only seen two adult Monarch butterflies during the entire growing season. Wonder Spouse took the photos of the one in this post last week. We were so excited when we spotted it in our front garden that we dropped what we were doing and ran for our cameras. Many experts believe that Monarch butterflies are in serious trouble. Much of the reason is probably habitat destruction, both in North America and in their winter homes in Mexico. You can read an article about their decline here.

Monarch butterflies are well known and loved, and still they are in trouble. Multiply their peril a thousand-fold for a delicately exquisite, extremely rare wildflower: Oconee Bells. Oconee Bells live in just a couple of spots along a geographic region known as the Southern Blue Ridge Escarpment. This part of the Blue Ridge Mountains rises abruptly from the piedmont regions of South and North Carolina, creating a remarkable rise in land elevation over a short distance. The region is also characterized by very narrow gorges; at their bottoms, sunlight never penetrates, and temperature and moisture levels remain remarkably steady. Such areas possess unique microclimates that an astonishing array of plant and animal species have exploited — so much so, in fact, that this region holds more than three times the number of plant and animal species found in undisturbed rainforests in Central and South America. The diversity of life is astounding, and tightly adapted to the unique geography and microclimates of this region.
The Southern Blue Ridge Escarpment is beginning to be degraded by the intrusion of concrete and asphalt, which is destroying the delicate ecology of the gorge bottoms where Oconee Bells live. Deforestation on the ridge tops leads to massive erosion down the sides of the steep gorges. In some cases, the Oconee Bells living at the bottom have been scoured from their homes by water cascading down eroded ridge tops. Oconee Bells were never plentiful, and now their increasing rarity makes them coveted by gardeners who want to possess every rare and beautiful plant they can. Oconee Bells are almost impossible to propagate; growing conditions cannot vary for them at all. Thus, plants sell for very high prices, making them a target of plant poachers.

In North Carolina, we have plant poacher problems at both ends of our state. On the coast, they steal into our preserves at night to dig up Venus Fly Traps. This species, native only to an area within about 75 miles of Wilmington, NC, is successfully propagated in the horticulture trade. Even so, plant poachers steal thousands, degrading their habitats at the same time. In our mountains, plant poaching is worse. Folks illegally collect our native ginseng, goldenseal, and other wildflowers known for their medicinal properties. They steal Oconee Bells for covetous gardeners. They do not care that they may eliminate a plant population from a site. They see dollar signs, not irreplaceable beauty.

In North Carolina, we are fortunate to have a group in our government with an important mission: "to conserve the native plant species of North Carolina in their natural habitats, now and for future generations." Three individuals make up this department, the Plant Conservation Program. They are working to identify and protect the rarest and most threatened plant species in the state. They've identified plant populations all over the state. That's a big job for three people. Fortunately, they have help. The Friends of Plant Conservation is a non-profit organization founded explicitly to support the work of the NC Plant Conservation Program. Members volunteer to help manage and protect the preserves created by the Plant Conservation Program by participating in activities such as work days devoted to clearing out competing vegetation. They also provide essential financial support, since, like every governmental department in NC, the Plant Conservation Program's budget does not begin to pay for the work that needs doing.

Right now, the Friends of Plant Conservation are frantically trying to raise enough money to pay for the purchase of land holding the last healthy population of Oconee Bells in North Carolina — the last natural population of Shortia galacifolia var. brevistyla in the world. The owners of this property love and appreciate this unique wildflower, and they've agreed to sell it, at cost, to NC to create a preserve. The owner who protected this population from poachers and cherished his land recently died after a long illness. His heirs wish to honor his memory by fulfilling his dream of creating this preserve for Oconee Bells. Time is critical. Funds are short. An anonymous lover of natural beauty has recently stepped forward and is offering to match all donations — four dollars for every dollar donated. Imagine — a donation of $100 will become $500. For once, perhaps beauty can be saved. To learn more about this wildflower and how to send your donation to save beauty, please go here.
Even small donations will make a difference, thanks to the anonymous matching donor. Of course, saving rare species like Oconee Bells, and suddenly declining species, like the Monarch Butterfly, is about much more than saving beauty. Scientists compare these imperiled species to canaries in coal mines. Before the days of oxygen sensors, miners carried caged canaries. The canaries were more sensitive to drops in oxygen levels than humans. When the canaries keeled over, the miners knew they had only minutes to escape the same fate. No animal or plant exists in a vacuum. They are parts of ecosystems, intricate groupings of species that evolved together and depend on each other in ways that are still not fully understood. Scientists do know that every time another species disappears from the delicate dance of an ecosystem, remaining species are also imperiled. No one knows how many species can disappear before the dance stops. The natural world feeds us, body and soul. Please follow the link provided above, and if you can help save this uniquely special place, know that you will become an invaluable contributor to saving Oconee Bells, and a piece of our souls as well.
Whether you are helping your users establish habits, engage in something new or unknown, or onboard, or you simply want to motivate your users to give your product a try, the Fogg Behavior Model can guide you.

About BJ Fogg

BJ Fogg is a behavior scientist and founder of the Behavior Design Lab (formerly the Persuasive Technology Lab, known for "captology") at Stanford University in California. Fogg has been researching the area of behavior change since the beginning of the millennium. He has summed up some of his most practical and popular findings on how to change behavior in his Fogg Behavior Model. In 2002, BJ Fogg published one of the first books on Persuasive Design, called Persuasive Technology. He further explains his Fogg Behavior Model in his 2020 book, Tiny Habits: The Small Changes that Change Everything.

Simply put, the Fogg Behavior Model holds that any behavior will only happen when three elements converge at the same moment in time. These three elements are:
- Motivation. Your users are sufficiently motivated to engage in the behavior.
- Ability. Your users are able to perform the behavior.
- Prompt. Your users are prompted at the right time to perform the behavior.

Even though the model is fairly straightforward and easy to understand, applying it to real-life situations isn't necessarily so. In this article, I'll explain the Fogg Behavior Model, and while doing so, I will point out which persuasive techniques you can use for each of the three elements.

The Fogg Behavior Model, B = MAP, explained

Fogg suggests three things need to converge at the same moment in time for behavior to happen: Motivation, Ability, and a Prompt. If a behavior does not occur, at least one of those three elements is missing. First, increasing a user's motivation or ability to perform a target behavior will increase the likelihood of the behavior happening. Second, for behavior to occur, it needs to be prompted. Third, Motivation and Ability can be traded off against each other: if motivation is very high, ability can be low, and vice versa. In the Fogg Behavior Model, Motivation and Ability have a compensatory relationship, often illustrated as a curved line representing an activation threshold. When a combination of motivation and ability places a person above the activation threshold, a prompt will cause that person to perform the target behavior. When a person is below the activation threshold, a prompt will not lead to the target behavior.

Digital products often do a bad job of prompting behavior. Spam, pop-ups, and ads are actual prompts, but they rarely convert to behavior on cue, as we have low motivation to do what the prompt says. Instead, email alerts, bouncing icons, and notifications become annoying and distracting. When we are ready to perform a behavior, a well-timed prompt is welcome. It isn't when our motivation for that specific behavior is low; then it's distracting. In reverse, when we do want to perform a prompted behavior but lack the ability, we feel frustrated. Before going into detail on well-timed prompts, let's examine the prerequisites for what makes prompts work: sufficient motivation and ability to perform a behavior in the first place.
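Fogg's model is qualitative — he doesn't publish a formula for the activation threshold — but the core logic can be sketched in a few lines of code. In this minimal TypeScript sketch, the multiplicative trade-off between motivation and ability and the threshold value are my own illustrative assumptions, not part of the model itself:

```typescript
// Illustrative only: Fogg gives no quantitative formula, so the
// multiplicative trade-off and the threshold value are assumptions.

interface BehaviorMoment {
  motivation: number; // 0..1 — how much the user wants to act right now
  ability: number;    // 0..1 — how easy the action is for them right now
  prompted: boolean;  // was a cue delivered at this moment?
}

const ACTIVATION_THRESHOLD = 0.25; // assumed position of the action line

// Motivation and ability compensate for each other: a high value on one
// side can lift a low value on the other above the threshold.
function behaviorOccurs({ motivation, ability, prompted }: BehaviorMoment): boolean {
  if (!prompted) return false; // no prompt, no behavior
  return motivation * ability >= ACTIVATION_THRESHOLD;
}

// A well-timed prompt converts; the same prompt below the line just annoys.
console.log(behaviorOccurs({ motivation: 0.9, ability: 0.3, prompted: true })); // true
console.log(behaviorOccurs({ motivation: 0.2, ability: 0.4, prompted: true })); // false
```

The point of the sketch is the compensatory relationship: the same prompt converts for a highly motivated user facing a hard task, and fails for a lukewarm user facing a moderately easy one.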
Influencing the subcomponents of behavior change

BJ Fogg identified three core motivators in his behavior model — the underlying drives that motivate us as humans: sensation (physical), anticipation (emotional), and belonging (social). Influencing each of the three underlying drives requires different measures.

Sensational motivation: we seek pleasure and avoid pain

Gamification has been successful in quantifying the outcome of our actions to make users feel either pleasure, through achievement and closure, or pain, through failing to reach their goals. Mechanisms such as badges, points, and leaderboards make it easy for users to gauge their status and progress toward a goal they hope will one day give them the pleasure of achievement. Several concepts can be utilized to quantify the user experience and let users feel and discover pleasure or pain:
- Achievements. We tend to engage in behavior in which meaningful achievements are recognized. Consider how you are currently linking desired behavior to achievements.
- Completion. Having closure is a reward in itself. Our need for closure and completion drives us toward action, so find ways to let users anticipate the celebration of completion to engage them in your target behavior.
- Levels. Using levels to communicate both progress and future goals is a great way to keep the skill level of users in check as their ability grows.
- Cognitive Dissonance. When we are psychologically uncomfortable, we are motivated to resolve the conflict in order to reduce the dissonance.
- Revenge. When we feel unfairly treated, we have an urge to let everyone else know what happened to us and to retaliate.

An experienced pain can be as motivating as an experienced pleasure. In fact, behavioral economics suggests that the negative emotional value of a monetary loss of $100 is at least twice as big as the positive emotional value of a monetary gain of $100. This phenomenon is explained by the value function in Prospect Theory by Kahneman and Tversky [1] (sketched in code after the list below). Drawing further from the field of behavioral economics, several concepts can be applied to increase motivation to engage in a specific behavior:
- Loss Aversion. Our fear of losing motivates us more than the prospect of gaining something of equal value.
- Endowment Effect. Possession feels like ownership, so when we possess something, we feel that it would be a loss to let go – even though we don't own it.
- Framing. We tend to avoid risk when a positive frame is presented to us, but tend to seek risk when a negative frame is presented.
- Anchoring. When making decisions, we often rely disproportionately on the first piece of information offered to us.
- Status Quo Bias. We tend to accept the default action instead of comparing the actual benefit to the actual cost.
- Sunk Cost Effect. We have a tendency to continue to invest, even if it brings us losses, as we hate to see our initial investment go to waste.
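To make the loss/gain asymmetry concrete, here is the Tversky–Kahneman value function in the same TypeScript style. The functional form and parameter estimates (α = β = 0.88, λ = 2.25) come from their 1992 paper; treating those published medians as fixed universal constants is a simplification for illustration:

```typescript
// Tversky & Kahneman's (1992) value function, the curve behind loss aversion.
// Published median parameter estimates; real populations vary.
const ALPHA = 0.88;  // diminishing sensitivity for gains
const BETA = 0.88;   // diminishing sensitivity for losses
const LAMBDA = 2.25; // loss-aversion coefficient: losses loom ~2x larger

function subjectiveValue(outcome: number): number {
  return outcome >= 0
    ? Math.pow(outcome, ALPHA)            // gains: concave curve
    : -LAMBDA * Math.pow(-outcome, BETA); // losses: steeper, convex curve
}

console.log(subjectiveValue(100));  // ≈ 57.5
console.log(subjectiveValue(-100)); // ≈ -129.3
```

Run on a $100 stake, the subjective value of the loss comes out at roughly 2.25 times the subjective value of the gain — the "at least twice as big" asymmetry mentioned above.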
Anticipatory motivation: our hopes and fears influence our emotions

According to BJ Fogg, hope is the most ethical and empowering motivator. Motivating through hope speaks to our innermost intrinsic motivations: our desire to do or be part of something that matters. Hope is being part of something meaningful, or the anticipation that what you are about to engage in leads to something meaningful. This feeling can be facilitated through mastery: providing a sense of purpose, recognizing a job well done, or simply enjoying a job well done. Several tricks can be used to cater to our intrinsic hopes and fears:
- Storytelling. The narrative qualities of stories help users engage in a different perspective than their own.
- Autonomy. We feel autonomous when we feel as if we have control over our own destiny. The feeling is reinforced when that freedom is not granted to everyone.
- Curiosity. We crave more when teased with a small bit of interesting information.
- Achievement of a learning goal. People who set their own goals typically achieve more than people whose goals have been set by someone else (typically a teacher).

Similarly, fear — the anticipation of something bad happening, often a loss — will equally motivate us toward action. Just as the hope of gaining a future reward can motivate us to act, so can the fear of not obtaining it. It's a matter of framing and perspective. All rewards and possible achievements can be framed as something we stand to gain or as something we stand to lose. Scarcity is often used as a tool to frame a future gain as a potential loss: by framing something as less attainable or accessible, its perceived value rises.

Social cohesion: we seek social acceptance and avoid social rejection

As humans, we strive to feel accepted – as if we belong. Similarly, we try to avoid feeling rejected [2]. We are motivated to act when we can win social acceptance and status, just as we are motivated to avoid negative consequences that might lead to social rejection. A number of persuasive patterns seek to motivate users to act by influencing our sense of belonging:
- Reciprocation. We feel obliged to give back when we receive something.
- Liking. People respond not only to the message, but also to the messenger. That's you.
- Social Proof. When we are in new and unfamiliar situations (socially or not), we assume the actions of others to feel safe.
- Status & Reputation. We tend to adjust our personal behavior to reflect positively on how peers and the public see us.
- Nostalgia Effect. Reminiscing about the past and the social connections we have had, we tend to favor social connections and downplay economic costs.

Ability refers to the ease with which a specific behavior can be performed – the skill level required. The obvious approach to making sure a user possesses the necessary skill level is to make sure the behavior represents an Appropriate Challenge, as described by the Flow Channel of Mihaly Csikszentmihalyi. We have examined how we can influence the motivation side of whether a target behavior occurs. The other side of the equation is ability. To perform a target behavior, a person must have the ability to do so. There are three ways to increase the ability to perform a behavior:
- Train people to increase their ability. Give people more skills (more ability) to do the target behavior. This approach to increasing ability is hard and is quite an investment, so only take this path if you must. Training people comes with the risk that most people resist learning new things, as we humans are inherently lazy.
- Provide a tool that makes the behavior easier to do. A second route is to give people a tool or resource that makes the target behavior easier to do. A cookbook makes cooking at home easier, and an electric screwdriver makes it easier to drive screws.
- Scale back the target behavior. Work with people to scale back their ambitions and save the difficult behavior (such as quitting smoking or losing weight) for later projects – after they have learned to succeed at more manageable behavior changes. Break down big habit changes into a series of tiny habits to make sure that a change in the right direction happens.

Increase Ability through higher simplicity

The simpler a behavior is to perform, the higher our ability will be.
By focusing on the simplicity of the target behavior, you increase ability. Ensure that users possess a sufficiently high skill level by making the target behavior as easy to perform as possible: simplify the behavior. Nobody will do anything they don't think is worth the effort, which is why making a task simpler is a great way to encourage new behavioral patterns to stick.

Simplicity is a function of your scarcest resource at that moment

Find the weakest link by asking: "What factor is making this behavior difficult?" Difficulty refers to any amount of friction holding people back from performing their target behavior. Often our perception of the difficulty matters more than the actual difficulty of an activity. Once you have found the weakest link in the ability chain, strengthen it by asking: "How can I make this easier?" Answers to this question should be grounded in the person, in the action, or in the context. Your weakest link determines what makes a behavior hard to do. Think about time as a resource: if you don't have 10 minutes to spend, and the target behavior requires 10 minutes, then it's not simple. Money is another resource: if you don't have $1, and the behavior requires $1, then it's not simple. Fogg outlines six ways a task can be made simpler: time, money, physical effort, brain cycles, social deviance, and non-routine. The sketch below shows one way to model the weakest-link idea.
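As a concrete illustration of the weakest-link idea, here is a minimal Python sketch. The six factor names follow Fogg's list above; the 0-to-1 availability scale and the example profile are assumptions made purely for illustration.

```python
# A minimal sketch: simplicity is bounded by the scarcest resource.
# Availability runs from 0.0 (fully depleted) to 1.0 (abundant).

FACTORS = ["time", "money", "physical_effort",
           "brain_cycles", "social_deviance", "non_routine"]

def scarcest_resource(availability: dict) -> tuple:
    """Return the factor currently making the behavior hardest."""
    factor = min(FACTORS, key=lambda f: availability.get(f, 1.0))
    return factor, availability.get(factor, 1.0)

# Example: a user who is short on time but has everything else.
profile = {"time": 0.1, "money": 0.9, "physical_effort": 0.8,
           "brain_cycles": 0.7, "social_deviance": 1.0,
           "non_routine": 0.6}

weakest, level = scarcest_resource(profile)
print(f"Weakest link: {weakest} ({level:.0%})")
print("Next question: how can I make this factor easier?")
```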
Limit time to make behavior simpler to perform

If a target behavior requires our time, but we do not have that time available, then the behavior is not simple. If the target behavior is to fill out a registration form with 100 fields, then that behavior is not simple for me, because I usually have more worthwhile things to do. There is a significant persuasive effect in simplifying behavior:
- Limited Choice. It is much easier for us to make a decision when there are fewer options to choose from.
- Tunneling. Close off detours from your desired behavior without taking away the user's sense of control.
- Tailoring. Tailored information is more effective in motivating behavior, as there is less irrelevant information for the user to filter.
- Powers. Provide users with a way to reach their goal more quickly than they could before.
- Unlock Features. Unlock new features as a reward for specific behavior.
- Feedback Loops. Make it easier for users to adjust their behavior and future actions by providing prompt feedback as they interact.
While creating products that actually save users time is always preferable, there are a number of persuasive design techniques that will decrease perceived time consumption:
- Chunking. It is easier to process and remember information when it is grouped into familiar and manageable bits.
- Sequencing. When complex activities are broken down into smaller bits, it is easier for people to start engaging in that behavior.
- Serial Position Effect. It is easier to recall the first and last elements in a list.

Limit the brain cycles we need to use to reduce cognitive load

As humans, we hate to think. We have a limited cognitive capacity, the amount of mental effort available in our working memory, and being forced to think too hard reduces our processing fluency, the ease with which we process information. If performing a target behavior causes us to think hard, we don't see the behavior as simple, as it hurts our processing fluency. If we can in one way or another get away with not spending brain energy, we will try to do so. For the most part, we as designers overestimate how much everyday people want to think. Limiting choice, tunneling, tailoring, chunking, sequencing, and the serial position effect discussed earlier with regard to limiting time spent all work to minimize cognitive load. There are, however, even more applicable tricks from persuasive design we can use to increase the user's processing fluency and limit cognitive load:
- Priming Effect. It is easier for us to access particular items in our memory when we have recently been exposed to related stimuli.
- Recognition over Recall. We are better at recognizing things from a list than we are at recalling them from memory.
- Intentional Gaps. We are motivated to complete the incomplete – the closer a task is to completion, the stronger the pull.
- Isolation Effect. Items that stand out from their peers are easier to remember.
- Conceptual Metaphor. It is easier for us to understand a new idea or concept when it is linked to another, more familiar concept.
- Reduction. Simplify complex behavior to increase the benefit/cost ratio, making it easier for users to engage in the target behavior.

Money can both complicate and simplify target behavior

If your users have limited financial resources, a target behavior that costs money is not simple. However, it is a trade-off: wealthy users will simplify their lives by using money to save time.

Limit the physical effort needed

When a significant amount of physical effort is required before a target behavior can be performed, it is not simple to do. If switching my computer from one desk to another requires unplugging and plugging in a bunch of cables, that is less simple than just moving it to a similar docking station. Going to my office by bike seems simpler than walking the same distance.

We don't like to be socially deviant

Accepting the norm and following the lead of others is simple behavior, as we then avoid thinking too hard. Going against the norm, breaking the rules of society, complicates a target behavior. Wearing pyjamas to a client meeting may require little effort, but I would pay with my social pride. As we most often learn from the behavior of others, highlighting how others already perform an unfamiliar behavior, or simulating that behavior, can make a seemingly deviant behavior feel simpler:
- Social Proof. Social proof establishes the norms that others follow.
- Positive Mimicry. When learning new things, we tend to automatically imitate other people's behaviors. Make it easier for users to engage in a behavior by first showing how other people conduct similar behavior.
- Role Playing. How we act depends on the social norms of the context we are in. Can you modify the context so that conducting unfamiliar behavior seems natural?
We find routine behaviors simple, as we do them over and over again. When we face a behavior that is not routine, in many cases we won't find it simple. As we seek simplicity, we often stick to our routine – buying groceries from the same shop or gas at the same station – without comparing the actual benefits to the actual costs. A few persuasive design techniques can be used to make non-routine behavior seem less frightening by providing prompt feedback that we are on the right path toward behavior change:
- Simulation. Enable users to observe the link between cause and effect in real time.
- Self-monitoring. Make it easy for people to know how well they are performing a target behavior.
The power of simplicity

Each person has their own individual simplicity profile. While some people have more time, others have more money. While some can invest brain energy, others can't. These factors vary by individual, but also depend on context. For example, if my bike is stolen, it is no longer simple to travel to work, as I am then travelling by foot. Fogg states that simplicity is a function of a person's scarcest resource at the moment a behavior is prompted. As designers, we should seek to discover which resources are most scarce for our audience at the time a behavior is prompted: time, money, or the ability to think? Once we have discovered the scarcest resource, we can start to account for BJ Fogg's six factors of simplicity and begin reducing the barriers to performing a target behavior. In general, persuasive design succeeds faster when we focus on making behavior simpler instead of trying to pile on new motivational factors. As humans, we often resist attempts at motivation, but naturally engage in simple tasks.

Influencing with prompts

Without an appropriate prompt, behavior will not occur, even if motivation and ability are both high. Timing is key. When we are ready to perform a behavior, a well-timed prompt is a welcome distraction. Successful prompts generally have three characteristics:
- We notice the prompt. If we don't notice the prompt, we can't act on it.
- We associate the prompt with a target behavior. Otherwise, how would we know what to do?
- The prompt fires when we are both motivated and able to perform the behavior. Otherwise we will be annoyed or frustrated.
Lastly, there is timing. The opportune moment to act is any time motivation and ability place people above the activation threshold, as the sketch below illustrates.
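To make the threshold idea concrete, here is a minimal Python sketch. Fogg's model is conceptual rather than numeric, so the 0-to-1 scales, the multiplication, and the 0.5 threshold are illustrative assumptions; the prompt classification follows the spark, facilitator and signal distinction described just below.

```python
# A toy model of the activation threshold: a prompt only converts
# into behavior when motivation x ability clears the threshold.
# All numeric scales here are assumptions for illustration.

def behavior_occurs(motivation: float, ability: float,
                    prompted: bool, threshold: float = 0.5) -> bool:
    """Behavior happens only if a prompt fires above the threshold."""
    return prompted and (motivation * ability) >= threshold

def choose_prompt(motivation: float, ability: float) -> str:
    """Pick a prompt type, following Fogg's classification."""
    if motivation >= 0.5 and ability >= 0.5:
        return "signal"       # both high: a simple reminder is enough
    if ability >= 0.5:
        return "spark"        # able but unmotivated: motivate first
    return "facilitator"      # motivated but unable: make it easier

print(behavior_occurs(0.9, 0.7, prompted=True))  # True: above threshold
print(choose_prompt(0.2, 0.8))                   # spark
print(choose_prompt(0.9, 0.2))                   # facilitator
```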
A prompt is something that tells people to perform a specific behavior. It typically goes by names such as call to action, notification, or reminder. However, not all prompts perform in the same way. Fogg outlines three different kinds of prompts: sparks, facilitators, and signals. A spark motivates behavior, a facilitator makes behavior easier, and a signal indicates or reminds. In situations where users have the ability but lack the motivation, highlighting fear or inspiring hope are effective means. Leveraging the power, and the corresponding persuasive patterns, of any of the three core motivational drives mentioned earlier (sensational, anticipatory, and social cohesion) will do the trick. Powerful spark prompts could be:
- Periodic Events. Construct recurring events or traditions to build up anticipation.
- Fresh Start Effect. We are more likely to achieve goals set at the start of a new time period.
The chosen channel or form does not matter as much as whether the prompt is recognized, associated with the target behavior, and presented to users at a moment when they are able to take action. In situations where users have high motivation but lack ability, facilitation is the appropriate strategy. Effective facilitating prompts convey to users that the target behavior is easy to do and will not require resources they do not already have. In situations where users have both high motivation and high ability to perform a target behavior, a signal, for instance a simple reminder, is enough. As opposed to facilitator and spark prompts, it is not necessary for the signal prompt to motivate or to simplify a task. The signal merely serves as a reminder. An everyday example is a traffic light: a traffic light doesn't need to motivate me to drive; it simply indicates that now is a good time to do it. As our digital devices become more context-aware, prompts have the potential to become more powerful. As recipients, we are most tolerant of signal and facilitator prompts; spark prompts have the potential to annoy us by being an unwelcome distraction, as they try to motivate us toward something we had no original intention of doing.

Motivation matching and motivation waves

You can't simply layer motivation on top of something you want people to do. How your users are going to be motivated to perform a specific behavior shouldn't be an afterthought. Instead, you pick specific behaviors that your users already want to do – behaviors that help them achieve an outcome or result they already want. It's not that motivation doesn't matter, but you shouldn't tack it on at the end. You need to match your own behavioral goal as a business (or how you frame what you want users to do) with what they already want to do. Coming back to BJ Fogg's point that motivation has only one role, to make hard things easier: you don't simply layer motivation over something you want people to do. As BJ Fogg puts it, you have to do motivation matching. What you are trying to motivate toward needs to match what people already want to do. But sometimes you have cases in which you do have to insert motivation into the situation to get people to do something.

The right time to motivate

In our lives, we have all experienced moments in which we have peaks of motivation. Peaks happen when we get excited about something. It could be something good, for example when we watch sports on TV and get the inspiration to go out and start doing sports ourselves. Motivation can also come in the shape of something bad happening: a natural disaster makes us prepare for what is to come, and a breakup could motivate us to get back in shape. During those situations, we go into what BJ Fogg calls a motivation wave, in which we are able to do harder things. As the wave comes back down and flattens out, and it usually does, we again become less able to do hard things. When the wave is high, we have a temporary opportunity to do hard things. From a product or business perspective, this means that when your customer is riding a motivation wave, that is the time to prompt them to do hard things. The strength of a motivation wave is limited in time, which is why you need to act quickly before the wave subsides. We aren't always going to be thinking, "I've got to get back in shape" or "I should spend more time with my family".

Competing behavior and motivation waves

At all times, there will be more than one behavior competing for our attention. Several motivation waves will similarly overlap at any given time, each requiring its own behavior. For example, we can be struck by a motivational wave to get back in shape and start exercising while, at the same time, we get a call from our kid's school saying that he or she got hurt. In this case, we would hopefully feel more motivation to pick up our son or daughter from school than to go to the gym. At all times, we are choosing between competing behaviors and motivation waves. Breaking big tasks up into small baby steps, which don't require as much motivation to perform, is a great strategy for managing to do both.
BJ Fogg advocates building up tiny habits, letting users take baby steps to begin with, simply to lower the ability required to start a behavior.

References
- Hannah Aster, "5 Factors for Success: The Tiny Habit's Ability Chain".
- Fogg, BJ (2009). "A Behavior Model for Persuasive Design". Persuasive '09: Proceedings of the 4th International Conference on Persuasive Technology, pages 1-7.
- Fogg, BJ (2009). "Creating Persuasive Technologies: An Eight-Step Design Process". Persuasive Technology, Fourth International Conference, PERSUASIVE 2009, Claremont, California. Proceedings.
- Paul Boynton, "How Using the Fogg Behavior Model Increases Clicks & Sales".
Computer
A programmable, electronic device that accepts data input, performs processing operations on that data, and outputs and stores the results.
Input
Entering data into the computer.
Processing
Performing operations on the data.
Output
Presenting the results.
Storage
Saving data, programs, or output for future use.
Communications
The transmission of data from one device to another.
Information processing cycle (IPOS cycle)
- Input: the user types in numbers
- Processing: the computer adds the numbers
- Output: the computer displays the result
- Storage: the computer saves the data for future use
Data
- Any fact or set of facts can become computer data.
- Ex: a letter to a friend, numbers in a monthly budget, images in a photo, notes in a song, or facts stored in an employee record.
Hardware
The physical parts of a computer.
Software
Programs or instructions used to tell the computer what to do to accomplish tasks.
User
A person who uses a computer to perform tasks or obtain information.
Cloud computing
To use data, applications, and resources stored on computers accessed over the Internet rather than on users' computers.
Embedded computer
- A tiny computer embedded in a product and designed to perform specific tasks or functions for that product.
- Ex: dishwashers, microwaves, ovens, coffee makers, thermostats, answering machines, treadmills, sewing machines...
Mobile device
A very small communications device with built-in computing or Internet capability.
Microcomputer
- A microprocessor contains the core processing capabilities of an entire computer on a single chip.
- The original IBM PC and Apple Macintosh computers, and most of today's modern computers, fall into this category.
Personal computer (PC)
A type of computer based on a microprocessor and designed to be used by one person at a time.
Desktop computer
- A personal computer designed to fit on or next to a desk.
- Two standards/platforms: PC-compatible or Macintosh.
- Ex: 1) tower case (most commonly used style), 2) desktop case, 3) all-in-one (AIO) case.
Portable computer
- A small personal computer designed to be carried around easily.
- (Portable computers now outsell desktops.)
Notebook computer (laptop computer)
A small personal computer designed to be carried around easily.
Tablet computer
A portable computer about the size of a notebook that is designed to be used with an electronic pen (digital pen or stylus).
Netbook
(aka mini-notebooks, mini-laptops, and ultra-portable computers) A very small notebook computer.
Ultra-mobile PC (UMPC)
(aka handheld computers) A portable personal computer that is small enough to fit in one hand.
- A computer that must be connected to a network to perform processing or storage tasks.
- Ex: a thin client and an Internet appliance.
Thin client (aka a network computer, or NC)
- A device designed to access a network for processing and data storage instead of performing those tasks locally.
- Ex: in a hotel lobby for Internet access, room-to-room calls, free phone calls via the Internet.
- Main advantage: overall lower cost (hardware, software, maintenance, power & cooling).
- Disadvantages: limited or no local storage; not a stand-alone computer.
Internet appliance (aka Internet device)
- A specialized network computer designed primarily for Internet access and/or email exchange.
- Ex: built into a refrigerator or telephone; the Chumby stand-alone Internet device; gaming consoles (Wii, PlayStation 3); new TV sets; video players, etc.
Midrange server (aka minicomputer)
- A medium-sized computer used to host programs and data for a small network.
- Can serve many users at one time. Used in small- to medium-sized businesses (medical or dental offices) or in school computer labs.
Virtualization
- The creation of virtual versions of a computing resource.
- Offers increased efficiency.
Mainframe computer (referred to as high-end servers or enterprise-class servers)
- A computer used in large organizations that manage large amounts of centralized data and run multiple programs simultaneously.
- Larger, more powerful, and more expensive than midrange servers. It can serve thousands of users.
Supercomputer
- The fastest, most expensive, and most powerful type of computer.
- Generally runs one program at a time, as fast as possible.
- Ex: sending astronauts into space, controlling missile guidance systems & satellites, weather forecasting, 3D medical imaging...
Supercomputing cluster
A supercomputer composed of numerous smaller computers connected together to act as a single computer.
Computer network
Computers and other devices that are connected to share hardware, software, and data.
Internet
- The largest and most well-known computer network, linking millions of computers all over the world.
- Technically, a network of networks, consisting of thousands of networks that can all access each other via the main backbone infrastructure of the Internet.
Internet service provider (ISP)
- A business or other organization that provides Internet access to others, typically for a fee.
- ISP servers are continually connected to a larger network, called a regional network, which is connected to one of the major high-speed networks within the country, called the backbone network.
World Wide Web (Web or WWW)
The collection of Web pages available through the Internet.
Web page
A document located on a Web server.
Web site
A collection of related Web pages (Web pages belonging to one individual or company).
Web server
- A computer continually connected to the Internet that stores Web pages accessible through the Internet.
- Can be accessed at any time by anyone with a computer or other Web-enabled device and an Internet connection.
Web browser
- A program used to view Web pages.
- Ex: Internet Explorer (IE), Chrome, Safari, Opera, or Firefox.
Internet address
An address (unique numeric or text-based) that identifies a computer, person, or Web page on the Internet, such as an IP address or domain name (computers), URL (Web pages), or email address (people).
IP address (Internet Protocol)
- A numeric Internet address used to uniquely identify a computer on the Internet.
- Ex: 220.127.116.11
Domain name
- A text-based Internet address used to uniquely identify a computer on the Internet.
- Corresponds to a computer's IP address, but searching for a domain name is easier than for an IP address.
- Ex: microsoft.com
URL (uniform resource locator)
- An Internet address that uniquely identifies a Web page.
- Ex: http:// (Hypertext Transfer Protocol) for regular Web pages
- https:// (Hypertext Transfer Protocol Secure) for secure Web pages
- ftp:// (File Transfer Protocol) used to upload and download files
Breakdown of URL for a Web page (see the short code example after the address cards below)
- http:// – Web page URLs usually begin with this standard protocol identifier
- twitter.com/ – identifies the Web server hosting the Web page
- jobs/ – identifies the folder(s) in which the Web page is stored, if necessary
- index.html – identifies the Web page document that is to be retrieved and displayed
Email address
An Internet address consisting of a username and computer domain name that uniquely identifies a person on the Internet.
Username (an identifying name)
- A name that uniquely identifies a user on a specific computer network.
- As in an email address: jsmith@cengage.com
- Username: jsmith
- Domain name: cengage.com
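The URL breakdown in the cards above maps directly onto what a URL parser returns. Here is a short illustration using Python's standard library; the example URL simply mirrors the twitter.com/jobs breakdown from the notes.

```python
# Parsing the example URL from the "Breakdown of URL" card.
from urllib.parse import urlparse

parts = urlparse("http://twitter.com/jobs/index.html")

print(parts.scheme)  # 'http'        -> the protocol identifier
print(parts.netloc)  # 'twitter.com' -> the Web server hosting the page
print(parts.path)    # '/jobs/index.html' -> folder(s) plus the page document
```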
Surf the Web
To use a Web browser to view Web pages.
Email
- Messages sent from one user to another over the Internet or other network.
- Can be sent via an email program (Outlook) or via a Web mail service (Gmail or Windows Live Mail).
Intellectual property rights
- The legal rights to which creators of original creative works are entitled.
- Ex: music & movies; paintings, computer graphics, and other works of art; books, poetry, symbols, names, designs.
3 types of intellectual property rights
Copyrights, trademarks, and patents.
Copyright
- The legal right to sell, publish, or distribute an original artistic or literary work; it is held by the creator of a work as soon as it exists in physical form.
- Applies to both published & unpublished works and remains in effect until 70 years after the creator's death.
- Exception: fair use permits limited duplication and use of a portion of copyrighted material for specific purposes.
Trademark
- A word, phrase, symbol, or design (or a combination of these) that identifies goods or services.
- Ex: "I'm lovin' it" (McDonald's), eBay, iPod, and their symbols.
Digital watermark (a rights-protection tool)
- A subtle alteration of digital content that identifies the copyright holder.
- Not noticeable when the work is viewed or played.
Digital rights management (DRM) software
- Software used to protect and manage the rights of creators of digital content.
- Limits who can view, print, or copy digital content.
Plagiarism
Presenting someone else's work as your own.
Ethics
Overall standards of moral conduct.
Computer ethics
- Standards of moral conduct as they relate to computer use.
- Ex: distributing computer viruses, spam, and spyware; distributing copies of software, movies, music & other digital content.
Business ethics
Standards of moral conduct that guide a business's policies, decisions, and actions.
Repetitive stress injury (RSI)
A type of injury, such as carpal tunnel syndrome (CTS), that is caused by performing the same physical movements over & over again.
Carpal tunnel syndrome (CTS)
A painful and crippling condition affecting the hands and wrists that can be caused by computer use.
DeQuervain's tendonitis
A condition in which the tendons on the thumb side of the wrist are swollen and irritated.
Computer vision syndrome (CVS)
A collection of eye and vision problems, including eyestrain or eye fatigue, dry eyes, burning eyes, light sensitivity, and blurred vision.
Green computing
- The use of computers in an environmentally friendly manner.
- Ex: minimizing the use of natural resources, such as energy and paper; using ENERGY STAR hardware.
- *The average US household spends an estimated $100/yr powering devices that are turned off or in standby mode.
Eco-label
A certification, usually issued by a government agency, that identifies a device as meeting minimal environmental performance specifications.
Ergonomics
- The science of fitting a work environment to the people who work there.
- Ex: tilt-and-swivel monitor, document holder, proper user position, adjustable table/desk, footrest, adjustable chair.
E-waste
- Electronic trash or waste, such as discarded computer components.
- *According to most estimates, at least 70% of all discarded computer equipment ends up in landfills and in foreign countries with lower recycling costs, cheaper labor, & laxer environmental standards than the US.
Computer network
- Computers and other hardware devices that are connected to share hardware, software, and data.
- Used extensively throughout society – people around the world use them every day in business, at school, at home, and on the go.
Telecommuting
- The use of computers and networking technology to enable an individual to work from a remote location.
- Individuals work from a remote location and communicate with their places of business and clients via networking technologies.
Wired network
A network in which computers and other devices are connected to the network via physical cables.
Wireless network
A network in which computers and other devices are connected to the network without physical cables.
Hotspot
A location that provides wireless Internet access to the public.
Star network
- A network that uses a host device connected directly to several other devices.
- If the central device fails, the network cannot function.
Bus network
- A network that uses a central cable to which all network devices are attached.
- If the bus line fails, the network cannot function, since all data is transmitted down the bus line from one device to another.
Mesh network
- A network that uses multiple connections between network devices.
- Used most often with wireless networks.
- If one device on a mesh network fails, the network can still function (assuming an alternate path is available).
Network architecture
The way computers are designed to communicate.
Client-server network
A network that includes both clients and servers.
Client
A computer or other device on a network that requests and uses network resources.
Server
A computer that is dedicated to processing client requests.
Download
To retrieve files from a server to a client.
Upload
To transfer files from a client to a server.
Peer-to-peer (P2P) network
A network in which the computers on the network work at the same functional level, and users have direct access to the network devices.
Personal area network (PAN)
A network that connects an individual's personal devices that are located close together.
Local area network (LAN)
A network that connects devices located in a small geographical area.
Metropolitan area network (MAN)
A network designed to service a metropolitan area.
Wide area network (WAN)
A network that connects devices located in a large geographical area.
Intranet
A private network that is set up similarly to the Internet and is accessed via a Web browser.
Extranet
An intranet that is at least partially accessible to authorized outsiders.
Virtual private network (VPN)
A private, secure path over the Internet used for accessing a private network.
Bandwidth
The amount of data that can be transferred in a given time period.
Digital signal
A type of signal where the data is represented by 0s and 1s.
Analog signal
A type of signal where the data is represented by continuous waves.
Serial transmission
A type of data transmission in which the bits in a byte travel down the same path one after the other.
Parallel transmission
A type of data transmission in which bytes of data are transmitted at one time, with the bits in each byte taking a separate path.
Synchronous transmission
A type of serial data transmission in which data is organized into groups or blocks of data that are transferred at regular, specified intervals.
Asynchronous transmission
A type of serial data transmission in which data is sent when it is ready to be sent, without being synchronized.
Isochronous transmission
A type of serial data transmission in which data is sent at the same time as other related data.
Simplex transmission
A type of data transmission in which data travels in a single direction only.
Half-duplex transmission
A type of data transmission in which data can travel in either direction, but only in one direction at a time.
Full-duplex transmission
A type of data transmission in which data can move in both directions at the same time.
Packet switching
A method of transmitting data in which messages are separated into packets that travel along the network separately and are then reassembled in the proper order at the destination (a short code sketch follows below).
Broadcasting
A method of transmitting data in which data is sent out to all nodes on a network and is retrieved only by the intended recipient.
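As a toy sketch of the packet-switching card above: the message is split into numbered packets that may travel independently (simulated here with a shuffle) and are reassembled in the proper order at the destination. The packet size and the shuffle are illustrative assumptions.

```python
# Toy packet switching: split, scramble, reassemble by sequence number.
import random

def to_packets(message: str, size: int = 4) -> list:
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list) -> str:
    """Restore the original order using the sequence numbers."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("packet switching demo")
random.shuffle(packets)      # packets travel the network separately...
print(reassemble(packets))   # ...and arrive reassembled in order
```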
Twisted-pair cable
A networking cable consisting of insulated wire strands twisted in sets of two and bound into a cable.
Coaxial cable (coax)
A networking cable consisting of a center wire inside a grounded, cylindrical shield, capable of sending data at high speeds.
Fiber-optic cable
A networking cable that contains hundreds of thin transparent fibers over which lasers transmit data as light.
Radio signals
- Short range – e.g., a wireless keyboard or mouse to a computer.
- Medium range – e.g., a computer to a wireless LAN or public hotspot.
- Long range – e.g., providing Internet access to a large geographic area or broadcasting a TV show.
Cellular radio transmission
A type of data transmission used with cell phones in which the data is sent and received via cell towers.
Microwaves
High-frequency radio signals that can send large quantities of data at high speeds over long distances.
Microwave station
A device that sends and receives high-frequency, high-speed radio signals.
Communications satellite
A device that orbits the earth and relays communications signals over long distances.
Infrared (IR) transmission
A wireless networking medium that sends data as infrared light rays.
Protocol
A set of rules to be followed in a specific situation.
Transmission Control Protocol/Internet Protocol (TCP/IP)
A networking protocol that uses packet switching to facilitate the transmission of messages; the protocol used with the Internet.
Ethernet
The most widely used standard for wired networks.
Wi-Fi (802.11)
A widely used networking standard for medium-range wireless networks.
WiMAX
An emerging wireless networking standard that is faster and has a greater range than Wi-Fi.
Mobile WiMAX (802.16e)
A version of WiMAX designed to be used with mobile phones.
Bluetooth
- A networking standard for very short-range wireless connections.
- 10 meters (approx. 33 feet) or less.
- Works using radio signals in the frequency band of 2.4 GHz.
Piconet
- A Bluetooth network.
- Up to 10 individual Bluetooth networks can coexist.
Wireless USB
A wireless version of USB designed to connect peripheral devices.
Ultra Wideband (UWB)
A networking standard for very short-range wireless connections among multimedia devices.
WirelessHD (WiHD)
An emerging wireless networking specification designed for connecting home consumer devices.
TransferJet
A networking standard for wireless connections between devices that are touching.
Network adapter
A device used to connect a computer to a network.
Network interface card (NIC)
A network adapter in the form of an expansion card.
Modem
A device that is used to connect a computer to a network over telephone lines.
Switch
A device that connects multiple devices on a wired network and forwards data only to the intended recipient.
Router
A device that connects multiple networks together and passes data to the intended recipient using the most efficient route.
Wireless access point
A device on a wireless network that connects wireless devices to that network.
Wireless router
A router with a built-in wireless access point.
Bridge
A device used to connect two LANs.
Repeater
A device on a network that amplifies signals.
Range extender
A repeater for a wireless network.
Antenna
A device used for receiving or sending radio signals and often used to increase the range of a network.
July 13, 2020

The Forgotten History of the Jewish, Anti-Zionist Left

A conversation with scholar Benjamin Balthaser about Jewish, working-class anti-Zionism in the 1930s and '40s.

Israeli Prime Minister Benjamin Netanyahu's push to forcibly annex up to 30% of the occupied West Bank is exposing the violence inherent in imposing a Jewish ethno-state on an indigenous Palestinian population. While the plan is delayed for now, the human rights organization B'Tselem reports that, in preparation for annexation, Israel had already ramped up its demolitions of Palestinian homes in the West Bank in June, destroying 30 that month, a figure that does not include demolitions in East Jerusalem.

The theft and destruction of Palestinian homes and communities, however, is just one piece of a much larger, and older, colonial project. As Palestinian organizer Sandra Tamari writes, "Palestinians have been forced to endure Israel's policies of expulsion and land appropriation for over 70 years." Today, this reality has evolved into an overt apartheid system: Palestinians within Israel are second-class citizens, with Israel now officially codifying that self-determination is for Jews only. Palestinians in the West Bank and Gaza are subject to military occupation, siege, blockade and martial law – a system of violent domination enabled by political and financial support from the United States.

Anti-Zionists argue that this brutal reality is not just the product of a right-wing government or a failure to effectively procure a two-state solution. Rather, it stems from the modern Zionist project itself, one established in a colonial context and fundamentally reliant on ethnic cleansing and violent domination of the Palestinian people.

Jews around the world are among those who call themselves anti-Zionists, and who vociferously object to the claim that the state of Israel represents the will, or interests, of Jewish people. In These Times spoke with Benjamin Balthaser, an associate professor of multiethnic literature at Indiana University at South Bend. His recent article, "When Anti-Zionism Was Jewish: Jewish Racial Subjectivity and the Anti-Imperialist Literary Left from the Great Depression to the Cold War," examines the erased history of anti-Zionism among the Jewish, working-class left in the 1930s and '40s. Balthaser is the author of a book of poems about the old Jewish left, Dedication, and an academic monograph, Anti-Imperialist Modernism. He is working on a book about Jewish Marxists, socialist thought and anti-Zionism in the 20th century.

He spoke with In These Times about the colonial origins of modern Zionism and the Jewish left's quarrel with it: that it is a form of right-wing nationalism, that it is fundamentally opposed to working-class internationalism, and that it is a form of imperialism. According to Balthaser, this political tradition undermines the claim that Zionism reflects the will of all Jewish people, and offers signposts for the present day. "For Jews in the United States who are trying to think about their relationship not only to Palestine, but also their own place in the world as an historically persecuted ethno-cultural diasporic minority, we have to think of whose side we are on, and which global forces we want to align with," he says.
"If we do not want to side with the executioners of the far-right, with colonialism, and with racism, there is a Jewish cultural resource for us to draw on – a political resource to draw on."

Sarah Lazare: Can you please explain what the ideology of Zionism is? Who developed it and when?

Benjamin Balthaser: A couple of things need to be disentangled. First of all, there is a long Jewish history that predates the ideology of Zionism and that looks at Jerusalem, the ancient kingdom of Judea, as a site of cultural, religious and, you could say, messianic longing. If you know Jewish liturgy, there are references going back thousands of years to the land of Zion, to Jerusalem, the old kingdom that the Romans destroyed. There have been attempts throughout Jewish history, disastrously, to "return" to the land of Palestine – most famously, Sabbatai Zevi in the 17th century. But for the most part, through much of Jewish history, "Israel" was understood as a kind of cultural and messianic longing, and there was no desire to actually physically move there, outside of small religious communities in Jerusalem and, of course, the small number of Jews who continued to live in Palestine under the Ottoman Empire – about 5% of the population.

Contemporary Zionism, particularly political Zionism, does draw on that large reservoir of cultural longing and religious text to legitimize itself, and that's where the confusion comes from. Modern Zionism arose in the late 19th century as a European nationalist movement. And I think that's the way to understand it. It was one of the many European nationalist movements of oppressed minorities that attempted to construct, out of the diverse cultures of Western and Eastern Europe, ethnically homogenous nation-states. And there were many Jewish nationalisms in the late 19th and early 20th centuries, of which Zionism was only one. There was the Jewish Bund, a left-wing socialist movement that rose to prominence in the early 20th century and articulated a deterritorialized nationalism in Eastern Europe. Its members felt their place was Eastern Europe, their land was Eastern Europe, their language was Yiddish. And they wanted to struggle for freedom in Europe, where they actually lived. They felt that their struggle for liberation was against oppressive capitalist governments in Europe. Had the Holocaust not wiped out the Bund and other Jewish socialists in Eastern Europe, we might be talking about Jewish nationalism in a very different context now. Of course, there were Soviet experiments, most famously in Birobidzhan, but also a very brief one in Ukraine, to create Jewish autonomous zones within territories where Jews lived, or elsewhere within the Soviet Union, rooted in the Yiddish idea of doykait, diasporic hereness, and in Yiddish language and culture.

Zionism was one of these cultural nationalist movements. What made it different was that it grafted itself onto British colonialism – a relationship made explicit with the Balfour Declaration in 1917 – and actually tried to create a country out of a British colony, Mandate Palestine, using British colonialism as a way to help establish itself in the Middle East. The Balfour Declaration was essentially a way for Zionism to use the British Empire for its own ends. On some level, you could say Zionism is a toxic mixture of European nationalism and British imperialism grafted onto a cultural reservoir of Jewish tropes and mythologies that come from Jewish liturgy and culture.
Sarah: One of the underpinnings of modern Zionism is that it is an ideology that represents the will of all Jews. But in your paper, you argue that criticism of Zionism was actually quite common on the Jewish left in the 1930s and '40s, and that this history has been largely erased. Can you talk about what these criticisms were and who was making them?

Benjamin: The funny part about the United States, and I would say this is mostly true for Europe as well, is that before the end of World War II, and even a little after, most Jews disparaged Zionists. It didn't matter if you were a communist or a Reform Jew: Zionism was not popular. There were a lot of different reasons for American Jews to dislike Zionism before the 1940s. There's the liberal critique of Zionism, most famously articulated by Elmer Berger and the American Council for Judaism. The anxiety among these folks was that Zionism would amount to a kind of dual loyalty, that it would open Jews up to the claim that they were not real Americans, and that it would frustrate their attempts to assimilate into mainstream American culture. Elmer Berger also forwarded the idea that Jews are not a culture or a people, but simply a religion, and therefore have nothing in common with one another outside of religious faith. This, I would argue, is an assimilationist idea that comes out of the 1920s and '30s and tries to resemble a Protestant notion of "communities of faith."

But for the Jewish left – the communist, socialist, Trotskyist and Marxist left – the critique of Zionism came from two quarters: a critique of nationalism and a critique of colonialism. They understood Zionism as a right-wing nationalism and, in that sense, bourgeois. They saw it as in line with other forms of nationalism – an attempt to align the working class with the interests of the bourgeoisie. There was at the time a well-known takedown of Vladimir Jabotinsky in the New Masses in 1935, in which the Marxist critic Robert Gessner called Jabotinsky a little Hitler on the Red Sea. Gessner called the Zionists Nazis, and the left in general saw Jewish nationalism as a right-wing formation trying to create a unified, militaristic culture that aligned working-class Jewish interests with those of the Jewish bourgeoisie. So that's one critique of Zionism.

The other critique of Zionism, which I think is more familiar to the left today, is that Zionism is a form of imperialism. If you look at the pamphlets, magazines and speeches of the Jewish left in the 1930s and '40s, they saw that Zionists were aligning themselves with British imperialism. They were also very aware of the fact that the Middle East was colonized, first by the Ottomans and then by the British. They saw the Palestinian struggle for liberation as part of a global anti-imperialist movement. Of course, Jewish communists saw themselves not as citizens of a nation-state, but as part of the global proletariat: part of the global working class, part of the global revolution. For them to think about their homeland as this small strip of land on the Mediterranean – regardless of any cultural affinity to Jerusalem – would go against everything they believed.

As the Holocaust began in earnest in the 1940s, and Jews were fleeing Europe in any way they possibly could, some members of the Communist Party advocated that Jews should be allowed to go to Palestine. If you're fleeing annihilation and Palestine is the only place you can go, that is natural.
But that doesn't mean you can create a nation-state there. You need to get along with the people who live there as best as you possibly can. There was a Communist Party of Palestine that did advocate for Jewish and Palestinian collaboration to oust the British and create a binational state – which, for a host of reasons, including the segregated nature of Jewish settlement, proved harder in practice than in theory. In any case, the Jewish left of the 1930s and 1940s understood, critically, that the only way Zionism could emerge in Palestine was through a colonial project and through the expulsion of the indigenous Palestinians from the land. In a speech at Manhattan's Hippodrome, Earl Browder, chairman of the Communist Party, declared that a Jewish state could only be formed through the expulsion of a quarter-million Palestinians – which attendees found very shocking at the time, but which actually ended up being a dramatic undercount.

Sarah: You wrote in your recent journal article, "Perhaps the single most pervasive narrative about Zionism, even among scholars and writers who acknowledge its marginal status before the war, is that the Holocaust changed Jewish opinion and convinced Jews of its necessity." You identify some major holes in this narrative. Can you explain what they are?

Benjamin: I would alter that a bit to say I'm really talking about the communist and Marxist left in this context. I grew up in a left-wing family where opinion was definitely divided on the question of Zionism – yet, nonetheless, there was a pervasive idea that the Holocaust changed opinion universally, and that everyone fell in line as soon as the details of the Holocaust were revealed, Zionist and anti-Zionist alike. It's undeniably correct to say that without the Holocaust there probably would have been no Israel, if only for the single fact that there was a massive influx of Jewish refugees after the war who would undoubtedly have stayed in Europe otherwise. Without that influx of Jews who could fight the 1948 war and populate Israel just after, it's doubtful an independent state of Israel could have succeeded.

However, one thing I found most surprising going through the Jewish left press of the 1940s – publications of the Trotskyist Socialist Workers Party and the Communist Party, and writings by Hannah Arendt – is that even after the scope of the Holocaust was widely understood, their official position was still anti-Zionist. They may have called for Jews to be allowed to resettle in the lands from which they had been expelled or massacred, with full rights and full citizenship, to be allowed to immigrate to the United States, or even to emigrate to Palestine if there was nowhere else to go (as was often the case). But they were still wholly against partition and the establishment of a Jewish-only state.

What is important to understand about that moment is that Zionism was a political choice – not only by Western imperial powers, but also by Jewish leadership. They could have fought more strenuously for Jewish immigration to the United States. And a lot of the Zionist leaders actually fought against immigration to the United States. There were a number of stories reported in the Jewish Communist press about how Zionists collaborated with the British and Americans to force Jews to go to Mandate Palestine when they would rather have gone to the United States or England.
There's a famous quote by Ernest Bevin, the British Foreign Secretary, who said the only reason the United States sent Jews to Palestine was "because they do not want too many more of them in New York." And the Zionists agreed with this. While this may seem like ancient history, it is important because it disrupts the common sense surrounding Israel's formation: "Yes, maybe there could have been peace between Jews and Palestinians, but the Holocaust made all of that impossible." And I would say that this debate after 1945 shows that there was a long moment in which other possibilities existed, and another future could have happened.

Ironically, perhaps, the Soviet Union did more than any other single force to change the minds of the Jewish Marxist left in the late 1940s about Israel. Andrei Gromyko, the Soviet Union's ambassador to the United Nations, came out in 1947 and backed partition in the United Nations after declaring that the Western world had done nothing to stop the Holocaust – and suddenly there was this about-face. All these Jewish left-wing publications that had been denouncing Zionism were, literally the next day, embracing partition and the formation of the nation-state of Israel. You have to understand: for a lot of Jewish communists and even socialists, the Soviet Union was the promised land – not Zionism. This was the place where, according to the propaganda, antisemitism had been eradicated. The Russian Empire had been the most antisemitic place throughout the late 19th and early 20th century, before the rise of Nazism. Many of the Jewish Communist Party members were from Eastern Europe, or their families were, and they had very vivid memories of Russia as the crucible of antisemitism. For them, the Russian Revolution was a rupture in history, a chance to start over. And, of course, this was just after World War II, when the Soviet Union had defeated the Nazis. For the Soviet Union to embrace Zionism really sent a shockwave through the left-wing Jewish world.

The Soviet Union changed its policy a decade or so later, openly embracing anti-Zionism by the 1960s. But for this brief pivotal moment, the Soviet Union firmly came down in favor of partition, and that seems to be what really changed the Jewish left. Without that kind of legitimation today, I think we are seeing the Jewish left, such as it exists, return in an important way to the positions it had originally held: that Zionism is a right-wing nationalism, and that it is also racist and colonialist. We are seeing the Jewish left return to its first principles.

Sarah: That's a good segue to some questions I wanted to ask you about the relevance of anti-Zionist history to the present day. For a lot of people, Israel's plan to annex huge amounts of Palestinian land in the West Bank, while delayed, is still laying bare the violence of the Zionist project of establishing Jewish rule over a Palestinian population. And we are seeing some prominent liberal Zionists like Peter Beinart publicly proclaim that the two-state solution is dead and that one state based on equal rights is the best path. Do you see now as an important moment to connect with the history of Jewish anti-Zionism? Do you see openings or possibilities for changing people's minds?

Benjamin: In a way, Beinart's letter was 70 years too late. But it is still a very important cultural turn, to the extent that he is part of the liberal Jewish establishment. I would also say that we're in a different historical moment.
In the 1930s and '40s, you could really talk about a kind of global revolutionary sentiment and a real Jewish left located in organizations like the Communist Party, the Socialist Workers Party and the Socialist Party. And you can see that again in the 1960s. Students for a Democratic Society, which also had a very sizeable Jewish membership, formally backed anti-Zionism in the 1960s, along with the Socialist Workers Party, and formed alliances with the Student Nonviolent Coordinating Committee, which had also taken an official anti-Zionist position in the late 1960s. You could think about a global revolutionary framework in which Palestinian liberation was an articulated part – you could think about the Popular Front for the Liberation of Palestine and the Palestine Liberation Organization as part of the fabric of global revolutionary movements. Today we're in a much more fragmented space.

On the same note, though, we're seeing the rebirth, or maybe the continuity, of Palestinian civil rights movements, with Palestinian civil society putting out a call for decolonization – both out of its own traditions of liberation, but also looking to models from the South African freedom struggle. Contemporary Jews who are progressive and see themselves on the left are suddenly realizing that there really is no center anymore, that there is no liberal Zionist position any longer. The center has really fallen away. And we're faced with this very stark decision: either you're going to be on the side of liberation, or you're going to be on the side of the Israeli right, which has an eliminationist and genocidal intent that has always been there, but is nakedly apparent now. And so I think people like Beinart are waking up and saying, "I don't want to be on the side of the executioners."

The history of the old Jewish left and the new Jewish left of the 1960s shows us this isn't new. Any liberation struggle is going to come from the oppressed themselves, so the Palestinian liberation movement is going to set the terms of its struggle. But for Jews in the United States who are trying to think about their relationship, not only to Palestine, but also to their own place in the world as an historically persecuted ethno-cultural diasporic minority, we have to think of whose side we are on, and which global forces we want to align with. If we do not want to side with the executioners of the far-right, with colonialism and with racism, there is a Jewish cultural resource for us to draw on – a political resource to draw on. This history of the anti-Zionist Jewish left demonstrates that an important historical role of a diaspora has been solidarity with other oppressed people. That's the place from which we've gathered the most strength historically. So I look at this not as saying, "We're going to reproduce the Communist Party of the 1930s and 1940s," but as saying, "We'll produce something new, and the past can be a cultural resource that we can use today."

Sarah: Who or what is responsible for the erasure of this history of Jewish, left anti-Zionism?

Benjamin: I wouldn't blame the erasure solely on the Soviet Union or on Zionism, because we also have to think of the Cold War and how the Cold War destroyed the old Jewish left, drove it underground and shattered its organizations. So we also have to see how the turn toward Zionism was understood as something that would normalize Jews in the post-war era.
With the execution of the Rosenbergs, the Red Scare of the late 1940s and '50s, and the virtual banning of the Communist Party – which throughout the 1930s and '40s had been half Jewish – aligning with American imperialism became a way for much of the Jewish establishment to normalize Jews' presence in the United States. And hopefully that moment has to some degree passed. We can see the emptiness and barrenness of aligning ourselves with an American imperial project, with people like Bari Weiss and Jared Kushner. Why would someone like Bari Weiss, who describes herself as liberal, want to align herself with the most reactionary forces in American life? It's a bloody matrix of assimilation and whiteness that emerged out of the Cold War suburbanization of the 1950s. Israel was part of that devil's bargain: yes, you can become real Americans – you can go to good U.S. universities, you can join the suburbs, you can enter the mainstream of American life – as long as you do this one little thing for us, which is back the American Empire.

Hopefully, with the emergence of new grassroots organizations in the United States, among Jews and non-Jews who are questioning the U.S. role in supporting Zionism, this calculus can begin to change. With the rise of Jewish Voice for Peace, IfNotNow, the Democratic Socialists of America and the Movement for Black Lives all taking a serious stance against U.S. support for Zionism, the common sense in the Jewish community has begun to move in a different direction, particularly among the younger generation. The battle is very far from over, but it makes me just a little optimistic about the future.
THE CAPTURE OF WINSTON CHURCHILL – THE ARMOURED TRAIN INCIDENT. 15 NOVEMBER 1899.

"I have had, in the last four years, the advantage, if it be an advantage, of many strange and varied experiences. But nothing was so thrilling as this: to wait and struggle among these clanging, rending iron boxes, with the repeated explosions of the shells and the artillery, the grunting and puffing of the engine – poor, tortured thing, harassed by at least a dozen shells, any one of which, by penetrating the boiler, might have made an end to it all".

With Ladysmith besieged by Boer forces, by early November 1899 the main body of British troops had retired south of the Tugela River, basing itself at Estcourt. In a sort of "phoney war", extensive scouting was carried out around Estcourt south to Willow Grange, west towards the Drakensberg mountains and north towards Colenso, including the use of an "armoured train" which made periodic forays up the line to Frere, Chieveley and Colenso. However, the use of an armoured train as a reconnaissance vehicle was somewhat of an oxymoron. It could be seen and heard for miles, and any Boers in the vicinity merely had to take cover and wait for it to pass. Indeed, the armoured train that ran regularly up the line from Estcourt to Colenso was known as "Wilson's Death Trap", with good cause. The Times History of the War, Volume II, pages 304-305, concludes:

"That the train was certain to be caught in a trap, sooner or later, was the outspoken conviction of every officer in Estcourt, but no precautions were taken to accompany it by a few mounted men to scout on both sides of the railway. Between Estcourt and Colenso the line ran like a switchback, up and down a number of narrow valleys, with hardly a point from which an unbroken view of 800 yards on either side could be obtained. On the other hand, even when it was hidden in cuttings the puffing of its approach could be heard for miles. The particular train in question was of the most primitive character, consisting simply of open trucks boxed in with loop-holed walls of thick boiler plate to a height of seven feet from the floor of the trucks. It would be hard to devise a better target than a soldier laboriously climbing in and out of that death-trap".

According to "The Durban Light Infantry 1854 – 1934" by Lt. Col. A.C. Martin, at page 61: " ... It was decided to send out the Armoured Train on another sortie on the 15th (November 1899). Thus were sent a company of Dublin Fusiliers, Lt. Frankland and 72 men; 45 men of 'C' Company of the Durban Light Infantry under Captain J.S. Wylie and Lieut. W. Alexander; and a detachment of 5 men of H.M.S. Tartar with a 7-pdr. under a petty officer. Capt. A. Haldane, D.S.O., of the Gordon Highlanders, recently recovered from wounds at Elandslaagte, was placed in command. Accompanying the train were some platelayers, telegraphists and linesmen, and Mr. Winston Churchill, War Correspondent of the Morning Post. At the head of the train was an open flat truck carrying an antiquated 7-pdr. manned by men of H.M.S. Tartar. Then came an armoured truck in which were three sections of the Dublin Fusiliers, Captain Haldane and Winston Churchill. This was followed by the engine and tender, two armoured trucks, and an open bogie. In the truck behind the engine and tender were the fourth section of Fusiliers, Capt. Wylie and half the men of the Durban Light Infantry. The following truck contained Lieut.
The following truck contained Lieut. Alexander, the remainder of the Durban Light Infantry, the telegraphists, platelayers and the like. The bogie carried the stores thought necessary, and the guard."

The driver of the train was Charles Wagner, with Stoker Alexander James Stewart, linesmen Charles Godfrey D.C.M., A. Bramley, William Yallup and James Welsh. The telegraphists referred to were Robert Taylor McArthur and A.R. Foster. The latter had been specially assigned to Winston Churchill for the transmission of press reports.

Randolph S. Churchill, in his book "Winston S. Churchill – Youth 1874 to 1900", quotes Captain Aylmer Haldane's report to the Chief of Staff, Natal Field Force: "Left Estcourt at 5:10 a.m. and on arrival at Frere Station at 6:20 a.m. I reported that no enemy had been seen. Here I met a party of eight men of the Natal Mounted Police, from whom I learnt that their patrols were in advance reconnoitring towards Chieveley, whither I proceeded after a halt of 15 minutes. On reaching there at 7:10 a.m., I received the following message by telephone from Estcourt: 'Remain at Frere in observation, watching your safe retreat. Remember that Chieveley station was last night occupied by the enemy. Nothing occurred here yet. Do not place reliance on any reports from local residents as they may be untrustworthy', and in acknowledging it stated that a party of about 50 Boers and three wagons was visible moving south on the west side of the railway. I at once retired towards Frere, and on rounding the spur of a hill which commanded the line the enemy opened on us at 600 yards with artillery, one shell striking the leading truck."

This small hill referred to by Haldane lies west of the railway line less than two miles from Frere. According to Martin, the occupants of the train saw a cluster of Boers gathered on the top, and the next thing they heard was the crash of shrapnel shells bursting overhead. The engine driver put on steam, but the train was unable to make the turn at the bottom of the hill. With an almighty crash, the bogie "was thrown into the air and landed off the line, upside down. The armoured truck next to it was derailed and thrown on its side, spilling out its occupants and pinning some of them under, and crushing one".

There have been several theories as to what caused the derailment. According to Haldane, "the enemy, lying in ambush, had allowed the train to pass towards Chieveley, and had then placed a stone on the line, which the guard, probably owing to the shell fire, had neither seen nor reported". Winston Churchill writes of a huge stone placed on the line, which is backed up by official Boer accounts. Burleigh states that the Boers had removed the fishplates and propped the line up with stones. The Times History states that stones were placed between the guide rail and outer rail. R.A. Curry, driver of a light engine that had piloted the train as far as Frere, stated that bolts were removed and had to be replaced by the platelayers. Whatever the cause, it is almost certain that the speed of the train also contributed to the derailment.

Martin records that: "The next armoured truck, in which was Capt. Wylie, was partly on and partly off the line, leaning over on one side and obstructing the line so as to trap the rest of the train. Many were injured. All were shocked, having been thrown violently to the ground or to the floor of their truck."
Winston Churchill's belated despatch for the Morning Post describes the incident: "The Boers held their fire – suddenly three wheeled things appeared on the crest, and within a second a bright flash of light … then two much larger flashes … the iron sides of the truck tanged with the patter of bullets. There was a crash from the front of the train … the Boers had opened fire on us at 600 yards with two large field guns … and a Maxim. I got down from my box into the cover of the armoured sides of the car … the driver put on full speed, as the enemy had intended. The train leapt forward, ran the gauntlet of the guns, which now filled the air with explosions, swung round the curve of the hill, ran down a steep gradient, and dashed into a huge stone. To those … in the rear truck there was a tremendous shock, a tremendous crash, and a sudden full stop. The truck containing the tools and materials of the breakdown gang and guard was flung into the air and fell bottom-upwards on the embankment. The next, an armoured car crowded with Durban Light Infantry, was carried on twenty yards and thrown over onto its side, scattering its occupants in a shower on the ground. The third wedged itself across the tracks, half on and half off the rails."

Haldane states that "Mr. Winston Churchill, special correspondent of the Morning Post, who was with me in the truck next to the gun-truck, offered his services, and knowing how thoroughly I could rely on him, I gladly accepted them, and undertook to keep down the enemy's fire while he endeavoured to clear the line. Our gun came into action at 900 yards, but after four rounds was struck by a shell and knocked over. I recalled the gun detachment into the armoured truck, whence a continuous fire was kept up on the enemy's guns, considerably disconcerting their aim, as I was afterwards informed, killing two and wounding four men."

Churchill encouraged Wagner, the engine driver, to climb back into his cab, and reported to Capt. Haldane that he believed there was a chance of clearing the line if the enemy were kept engaged and some volunteers assisted him. Unfortunately only nine men answered his call, the remainder preferring to take cover behind the trucks and embankment. For the next seventy minutes, under a storm of small arms fire, the partially derailed truck next to the tender had to be bumped off the line by the engine in order to clear it. Churchill managed to persuade Driver Wagner to stay at his post by telling him that, although he had received a head wound from a splinter, it was impossible to be hit twice on the same day! They finally managed to partially dislodge the truck, to the point where the engine and tender just managed to squeeze past. As they did so, however, the truck slipped back, making it impossible for the engine to hitch up the other trucks, which were now stranded on the other side of the obstacle.

Capt. Haldane gave the order for as many wounded as possible to be loaded on the engine and tender and for it to make its way slowly, so as to provide some cover for the men who were to fight their way back to some houses near Frere, where it was proposed to make a stand. So the engine started back, accompanied by the men on foot. There was an immediate increase in the volume of Boer fire and soon about a quarter of the force was killed or wounded. The engine was being hit, its tender was set alight in part, and so the driver increased speed, and soon the infantry were left behind.
Winston Churchill, who was on the engine up to this moment, decided to go back to the men fighting their rearguard action, but not before instructing the driver to make for the far side of the bridge over the Blaauwkrantz River near Frere. By that time it was all over with the infantry. They had become scattered, and one man, wounded, put up a white handkerchief.

According to Winston Churchill: "As many wounded as possible were piled onto the engine, standing in the cab, lying on the tender or clinging to the cowcatcher … (The Maxim) discharging with an ugly thud, thud, thud exploded with startling bangs on all sides. One struck the footplate of the engine scarcely a yard from my head … another hit the coals in the tender, hurling a black shower in the air. A third struck the arm of a Private in the Dublin Fusiliers. The whole arm was smashed to a horrid pulp – bones, muscle, blood and uniform all mixed together."

Churchill's stated objective of rounding up stragglers was, however, untenable. On his way back to the train, two Boers caught Churchill in a railway cutting. He scrambled up the bank in an attempt to escape, only to find a mounted Boer waiting for him, some 40 yards away. According to Pakenham, this Boer is alleged to have been Veld Cornet Sarel Oosthuizen, known as the "Red Bull of Krugersdorp". Churchill decided that there was no hope of escape, and surrendered.

Robert Taylor McArthur, who had recently lost his employment as a clerk at the Glencoe Post Office after the battle of Talana and the subsequent evacuation of Dundee, had signed up as a civilian employee on the Governor's staff. He later wrote an account of his experiences during the incident, which was published in Bennett Burleigh's "The Natal Campaign" at pages 82 – 83:

"We left Estcourt at 5:30 a.m., and ran on to Ennersdale, and reported by wire, 'All clear'. Shortly before reaching Frere we met and spoke to some of the Natal Police, who had been bivouacking upon the kopjies. They told us that the Boers had all gone back the previous night. Then we went to Frere, where we wired the General, 'All well', and without waiting for a reply ran on to Chieveley. Shortly before entering the station we saw fifty Boers going west at a canter with some wagons, as we thought. We waited a few minutes at Chieveley, and then started back. About three miles out, or two miles north of Frere, we noticed several hundred Boers about 800 yards off, on the west side. Then we saw more on the east of the line. They began firing at us, first with rifles and then with Maxim-Nordenfeldt cannon. One of their shots made a big dent in the rear armoured truck, but did not enter. Their guns were behind the kopje; at least, two repeating cannon, which shot a steady flame, and a heavier piece that threw shrapnel at us.

But a few yards on our rear, then, the front truck ran off the line, shaking and jolting terribly, and ours, the next or armoured one, followed suit, and soon all three left the rails. The next thing I knew we were all upset, and, strangely, only one man was killed. We all scrambled out. Sitting down, the soldiers began firing volleys at the Boers, who responded by peppering us with more shot and shell. In about five minutes Mr. Churchill came from what had been the front of the train, took charge, and asked for volunteers to shift the trucks. About fifteen men helped to do so, but the wagons were too heavy to move. Then the engine managed to smash through, breaking them up, and getting knocked about in doing so.
All this was done under heavy fire. We tried to couple up the trucks that had been in front, but could not, the line being blocked. Then, picking up all the wounded we could see, we started for Frere to get assistance. My instruments were smashed, and we found those at Frere had been carried off by the Police for safekeeping. From there we pushed ahead to Ennersdale, whence I wired to the General, giving him a few details. We had several men shot down whilst we were putting the wounded on the tender, although our troops did their best to cover the operation by firing volleys. Several of the shells struck the telegraph wires and poles, cutting and knocking down the line."

McArthur later went on to win a Distinguished Service Order (DSO) in World War I and retired as a Lieutenant Colonel in the S.A.F.T. & P.O. (South African Field Telegraph and Post Office).

"The Durban Light Infantry Volume 1 1854 – 1934" by A.C. Martin, at page 67, also quotes the first-hand recollections of Dr. Raymond Maxwell, who was then serving as a British doctor with the Red Cross ambulances attached to the Boer forces:

"The rain came down in torrents all day, and our mules only made slow progress, but we caught up with our commando at Chieveley station. Heard rifle and artillery fire from here, and on getting on towards Frere station found the Boers just capturing an armoured train and some 56 prisoners. The train had come up from Estcourt to Chieveley, passing through Boers lying on both sides of the line without seeing them. While it was up at Chieveley, some Boers went and fixed in big stones just where the line crossed a culvert. Just as they were finished, back came the train at a great rate with the engine in the middle. On reaching the obstruction the wheels on one side left the line, but still it ran 250 yards without overturning, but then the front trucks upset trying to negotiate a curve, and one lay across the line obstructing further progress. All the time the Boers were pouring in a dreadful fire from practically perfect cover, and the artillery kept putting shells clean through the trucks. At last the engine managed to butt the obstructing truck out of the way, and got off loaded up with what seemed about 40 men. The other soldiers who had been lying behind what cover they could get were compelled to surrender, several of them being wounded. In one of the overturned trucks there were three dead soldiers, one being a D.L.I. man (Pte. F. Copeland), and further down the line were two more dead soldiers. Some time later a party of the Ermelo Commando, who had gone on ahead to break up the line, were completely surprised by a patrol, and received a volley without any warning, which killed two men."

The "patrol" referred to comprised two squadrons of the Composite Regiment, one from the Imperial Light Horse ("A" Squadron under Capt. Herbert Bottomley) and the other of the Natal Carbineers, under Major Duncan McKenzie. This encounter was of great assistance in enabling the engine, as well as a number of men who made their way on foot to Estcourt, to get away.

According to G.F. Gibson in his "Story of the Imperial Light Horse": "A" Squadron ILH and an NC Squadron, numbering some 160 men, under Major Duncan McKenzie, were sent to attempt to rescue the armoured train. They extended, and with reckless gallantry charged headlong into the midst of some 2 000 Boers who were busily employed in completing the wreck of the armoured train.
In thus engaging the enemy, the ILH and Carbineers would probably all have been killed or captured, had not a tropical storm descended suddenly. Under cover of the downpour, the squadrons were able to extricate themselves from this dangerous position, and cover the withdrawal of the engine and tender, with its load of men hanging on precariously.

Trooper Fred A. Freshney's account, "My Experiences in the South African War", gives a more realistic estimation of Boer numbers: "The news spread that the armoured train had been upset near Frere, and that we ('A' Troop) were to go to their assistance. We went 'full gallop' in the pouring rain, and very soon horses and men were covered in mud, but we were far too excited to even notice this. About 1½ miles out of Estcourt we met the engine of the ill-fated train coming slowly in with its load of dead and wounded – a sad spectacle. The poor wounded were lying on the coals, some on the cow-catcher, some on top of the boiler and one even clinging to the chimney. The engine itself had been badly battered by the shellfire, and was leaking in several places. We stopped a moment to get further information from them, and then were off again at full speed. In a short time we reached Ennersdale Station (six miles out) … found that the Boers had looted the stationmaster's house and the station buildings. Once again we went by the side of the railway … we had gone about 4 miles north of Ennersdale, when suddenly 'crack' went a rifle to our right, to be followed immediately by others from our men – we had taken a Boer Commando of about 300 strong by surprise. They were in a valley about 600 yards away, and just rounding a curve out of sight. We dismounted as quickly as possible, and began blazing away at their rear. The Boers now took up position about 800 yards away and began to return our fire, Trooper Hillhouse (the smallest man in our Squadron) being shot through the knee. We had taken cover behind some Kaffir kraals … Major McKenzie … gradually withdrew, the Boers pressing us up to Ennersdale Station."

Dr. George Oliver Moorhead, who was serving with the Red Cross attached to the Middelburg Commando, then stationed at Chieveley, said that some 50 British surrendered. He saw them soon afterwards, "trudging towards us in the rain and mud, a little compact body of men on foot surrounded by mounted burghers. As they came near us we distinguished the sodden soiled khaki uniforms; a few officers marched stolidly in front, a man in mufti with an injured hand among them". This was, of course, Winston Churchill.

The engine and tender arrived back at Estcourt with the survivors at 10 a.m. It had taken three hits by shells, and the tender had 63 bullet strikes. According to official records, there had been 5 killed, 2 who died of wounds and 47 wounded on the British side. 20 wounded and 70 unwounded got back to Estcourt. However, these figures do not tie in with the Hayward Roll.

Captain Aylmer Haldane's report to the Chief of Staff, Natal Field Force, mentions those men who did well: "2nd. Lieutenant T.C. Frankland, 2nd. Bn. Royal Dublin Fusiliers, a gallant young officer who carried out my orders with coolness. A.B. Seaman E. Read, H.M.S. Tartar, No. 6300 Lance Corporal W. Connell, Privates Phoenix and Cavanagh, 2nd. Bn. Royal Dublin Fusiliers who, notwithstanding that they were repeatedly knocked down by the concussion of the shells striking the armoured truck, continued steadily to fire until ordered to cease.
Captain Wylie and the men of the Durban Light Infantry, who ably assisted in covering the working party engaged in clearing the line, and who were much exposed to the enemy's fire. The driver of the engine (Wagner) who, though wounded, remained at his post. The telegraphist (R.T. McArthur) attached to the train, who showed a fine example of cheerfulness and indifference to danger."

Captain Wylie, who commanded "C" Company of the D.L.I. contingent on that day, was later asked what he thought of Winston Churchill's conduct. He replied: "He was a very brave man but a damned fool."

Churchill was back in action a month later, having escaped from the States Model School in Pretoria (and made his name a household word in the process) and made his way to freedom via Delagoa Bay. On his way back to the coast he called in at the railway workshops in Pietermaritzburg and asked to speak to Driver Wagner. However, Wagner was busy with some maintenance and declined to see his distinguished visitor, as he was not in a fit state to do so. Churchill would not take "no" for an answer: "Dirty overalls are nothing compared to what he and I went through together. I have to shake his hand, oil and all!"

"When, in 1910, I was Home Secretary, it was my duty to advise the King upon the awards of the Albert Medal. I therefore revived the old records, communicated with the Governor of Natal and the railway company, and ultimately both the driver and his fireman received the highest award for gallantry open to civilians."

Why, you might ask, have I bothered to research and write this article? Well, as someone once said, courage never goes out of fashion. And 2nd Class Railway Fireman Alexander James Stewart's Albert Medal Second Class and his Queen's South Africa Medal are on the May 2018 Dix Noonan Webb catalogue. Hell, how I wish I had 20 000 Pounds to spare!

ROLL OF HONOUR.

At the site of the action the graves of Privates 5031 J. Burney, 5861 E. McGuire and 5792 M. Balfe of the Royal Dublin Fusiliers can be found, just across the railway line on the opposite side to where the Churchill capture memorial now stands. 5263 Private C. Johnstone, Royal Dublin Fusiliers, died of his wounds at Estcourt on 22 November. 3993 Colour Sergeant P.J. Magee is recorded as having died on 15 November, although Steve Watt shows that he died on 15 December and is buried at Ambleside. 806 Private F. Copeland of the Durban Light Infantry is buried at Chieveley, 401 Private E. Espeland at Estcourt, and 516 Corporal David Brown was crushed and subsequently died of his injuries on 23 December 1899. He is buried at Intombi Camp, Ladysmith.

The following list is an attempted reconciliation between the Hayward Roll, "Volunteers All" and "The Durban Light Infantry". However, there are a number of discrepancies and, in the absence of a Regimental Number, there is still some doubt as to the correct spellings of surnames and, in that case, some doubt in positively placing them at the scene:

| Name | Rank | Regiment | Nature of Wound |
| --- | --- | --- | --- |
| Haldane, Aylmer, DSO | Capt. | Gordon Highlanders | Missing |
| Churchill, Winston Leonard Spencer | Correspondent | Morning Post | Slight – hand |
| Godfrey, Charles, DCM | Mr. | Natal Govt. Railways | Slight |
| 94 Hillhouse, Robert | Trooper | Imperial Light Horse | Severe – gunshot wound, knee |
| Jacks, A. | Trooper | "C" Squadron, Colonial Scouts | Slight |
| 508 Bruce, M. | Pte. | Durban Light Infantry | Gunshot wound, left thigh and right leg |
| 374 Brunyee, R.J. | Pte. | D.L.I. | Slight – lacerated left arm |
| 671 Cadenhead, James | Cpl. | D.L.I. | Slight – gunshot wound, index finger |
| 481 Christie, J.H. | Pte. | D.L.I. | Slight. Gunshot wound, chest. Captured |
| 654 Coldbeck, Thomas Russell | Pte. | D.L.I. | Severe. Gunshot wound, left knee |
| 325 Connell, R. | Sgt. | D.L.I. | Captured |
| 54 Cumming, George | Col. Sjt. | D.L.I. | Slight. Crushed |
| 382 Dickie, Andrew | Cpl. | D.L.I. | Slight. Gunshot wound, left thigh |
| 573 Hotter, J.W. | Pte. | D.L.I. | Slight – gunshot wound, right shoulder and hand |
| 689 Humphreys, G.B. | Pte. | D.L.I. | Gunshot wound, both legs. Captured and evacuated – Ladysmith |
| 594 McDougall, P. | Pte. | D.L.I. | Gunshot wound, leg. Captured |
| 647 Murray, William | Pte. | D.L.I. | Slight |
| 532 Paxton, J.F. | Pte. | D.L.I. | Slight. Gunshot wound, neck and left foot |
| 701 Rhodes, Richard Percy | Pte. | D.L.I. | Captured |
| 251 Rollo, Alexander | Col. Sgt. | D.L.I. | Slight |
| 528 Service, J.A. | Pte. | D.L.I. | Gunshot wound, shoulder. Captured |
| 811 Smith, A. | Pte. | D.L.I. | Slight |
| 4 Tod, James | Ord. Rm. Sgt. | D.L.I. | Slight – flesh wound. Saved Capt. Wylie's life by dragging him into cover |
| 365 Wartski, B.S. | Pte. | D.L.I. | Crushed |
| 466 Webb, H.N. | Pte. | D.L.I. | Slight. Crushed |
| 637 Wright, Alexander | Pte. | D.L.I. | Gunshot wound, head. Captured |
| 541 Woodward, A.G. | Pte. | D.L.I. | Captured |
| Wylie, James Scott, M.V.O., D.S.O., V.D. | Captain | D.L.I. | Severe |
| 5826 Flood, J. | Pte. | Royal Dublin Fusiliers | Slight |
| 5914 Coote, J. | Pte. | R.D.F. | Severe |
| 3715 Osbourne, J. | Sjt. | R.D.F. | Missing – released Glencoe |
| 4443 Hoey, W. | Drummer | R.D.F. | Missing |
| 3672 Hassell, E. | Sjt. | R.D.F. | Missing |
| 5114 Hall, H. | Cpl. | R.D.F. | Missing |
| 5800 Buckley, E. | Pte. | R.D.F. | Missing |
| 6293 Kempster, C. | Pte. | R.D.F. | Missing |
| 5499 Byrne, P. | Pte. | R.D.F. | Missing |
| 4497 Berry, J. | Pte. | R.D.F. | Missing |
| 5755 Collins, L. | Pte. | R.D.F. | Missing |
| 6140 Dunpley, L. | Pte. | R.D.F. | Missing |
| 5741 Dwyer, J. | Pte. | R.D.F. | Missing |
| 5256 Cavanagh, M. | Pte. | R.D.F. | Missing |
| 4691 O'Rourke, T. | Pte. | R.D.F. | Missing – released Glencoe |
| 5968 Glynn, B. | Pte. | R.D.F. | Missing |
| 5057 Kerwan, M. | Pte. | R.D.F. | Missing |
| 5329 Stanton, T. | Pte. | R.D.F. | Missing |
| 5316 Daly, O. | Pte. | R.D.F. | Wounded |
| 5516 Scully, B. | Pte. | R.D.F. | Missing |
| 5697 Davis, T. | Pte. | R.D.F. | Missing |
| 5841 Hoy, W. | Pte. | R.D.F. | Missing |
| 5287 Lynch, T. | Pte. | R.D.F. | Missing |
| 6308 Connell, W. | Pte. | R.D.F. | Missing |
| 6116 Harty, O. | Pte. | R.D.F. | Missing |
| 6319 Burke, C. | Pte. | R.D.F. | Wounded |
| 4676 Driscoll, M. | Pte. | R.D.F. | Wounded |
| 4865 Reynolds, G. | Pte. | R.D.F. | Missing |
| 6354 Sheridan, W. | Pte. | R.D.F. | Missing |
| 5310 Black, T. | Pte. | R.D.F. | Wounded |
| 5296 Drew, J. | Pte. | R.D.F. | Wounded |
| 5263 Johnston, C. | Pte. | R.D.F. | Severe |

REFERENCES.

Amery, Leo (ed.). "The Times History of the War in South Africa 1899 – 1900". Sampson, Low, Marston and Company, St. Dunstan's House, London. 1900.
Burleigh, Bennett. "The Natal Campaign". Chapman and Hall, London. 1900.
Churchill, Randolph S. "Winston S. Churchill. Youth 1874 – 1900". William Heinemann Limited, London. 1966.
Freshney, Trooper Fred A. "My Experiences in the South African War".
Gibson, G.F. "Story of the Imperial Light Horse". GD and Co. 1937.
Kisch, Henry and Tugman, H. St. J. "The Siege of Ladysmith in 120 Pictures". George Newness Limited, London. 1900.
Martin, A.C. "The Durban Light Infantry Volume 1 – 1854 to 1934". Hayne and Gibson Limited, Durban. 1969.
Pakenham, Thomas. "The Boer War". Futura Publications, London. 1982.
"The South African War Casualty Roll – the Natal Field Force 20 October 1899 – 26 October 1900". J.B. Hayward and Son, Polstead. 1980.
Sandys, Celia. "Wanted Churchill, Dead or Alive". Harper Collins Publishers, London. 1999.
Simpson, Cameron. "Volunteers All". Private publication.
Smail, J.L. "Monuments and Battlefields of the Transvaal War 1881 and the South African War 1899 – 1902". Howard Timmins, Cape Town. 1966.
Watt, Steve. "In Memoriam. Roll of Honour Imperial Forces Anglo Boer War 1899 – 1902". University of Natal Press, Pietermaritzburg. 2000.
Summary of conclusions and recommendations

This report has been prepared by an independent team of experts commissioned by UNHCR to evaluate the agency's preparedness and response to the 1999 Kosovo refugee emergency. The emergency developed in the wake of NATO air strikes against the Federal Republic of Yugoslavia (FRY), and ended 11 weeks later when a framework for peace was established in mid-June and repatriation started. While focusing on UNHCR, the evaluation team was also asked to "consider the role and impact of other actors involved in the crisis, to the extent and insofar as they affected UNHCR's operations".

The evaluation uses a historical-analytical method to reconstruct and analyse the relevant events. While the team has jointly formulated the conclusions, the main report is structured as a collection of expert papers written by individual authors. The report is divided into the following chapters:

- Context (nature of the emergency and international response)
- Preparedness (early warning and contingency planning)
- "Day One" (initial response and emergency management)
- Management (field and HQ, emergency staffing, logistics, financial constraints)
- Assistance and co-ordination (co-ordination mechanisms, provision of material assistance, registration)
- Protection (securing first asylum, humanitarian evacuation and transfer programmes, registration, security)
- Relations with the military

The report assesses UNHCR's response in relation to three criteria:

- the overall outcome: did the refugees obtain appropriate protection and assistance?
- agency criteria: did UNHCR meet its own standards for providing protection and assistance during an emergency?
- situation-specific demands: were UNHCR standards and responses relevant to the unusual characteristics of the Kosovo case?

The evaluation takes the extraordinary nature of the Kosovo emergency as its starting point. In physical terms, the refugee movement was unusually large and swift - half a million people arrived in neighbouring areas in the course of about two weeks, and a few weeks later the total was over 850,000. In political terms, the emergency was an extraordinary event of a type that is rare in contemporary international relations. It involved the national interests of major powers, strong regional organizations, and military action in Europe. NATO, and to some extent the OSCE, shaped policy towards the conflict after a controversial decision to bypass the UN Security Council. In this situation, the displacement issue became an important element in the diplomacy of war. To many governments, the refugees were too important to be left exclusively to UNHCR.

2. Main conclusions

The refugees from Kosovo generally received adequate assistance. Indicators of mortality rates were well below the generally accepted threshold for emergencies, and there were no serious epidemics. This was partly due to fortuitous factors - the generally good health of the refugees and the short duration of the emergency - and support from the host families, as well as the massive aid apparatus marshalled to help them. UNHCR's contribution to this outcome must be judged against its relatively limited role in the overall relief response.
The agency's shelter programme funded only 12 per cent of the refugee population housed in some 278 camps and collective centres in Albania (the equivalent figure for the former Yugoslav Republic of Macedonia is unknown but was probably similar); furthermore, nearly two-thirds of the refugees lived with host families outside camps. UNHCR expended about $73 million in Albania and about $50 million in FYR Macedonia between March and the end of the year, presumably most of it during the emergency.

On the protection side, there was a near-disaster at the outset of the emergency, when thousands of refugees were trapped at the Blace crossing point on the border between Kosovo and FYR Macedonia. The immediate cause was the refusal of FYR Macedonia to admit a massive refugee flow unless it had reasonable assurances that other states would help. The result was a "burden-sharing programme" based on the underlying premise that protection is a common responsibility of states. Governments rather than UNHCR took the initiative in these programmes, particularly the USA, which was moved by strategic-political interests as well as humanitarian concerns. UNHCR worked with states to develop and co-ordinate the evacuation and later transfer programmes. The agency made significant efforts to raise protection issues and should be commended for quickly producing guidelines to clarify standards.

Within these parameters, and given the power and specific resources that it did command, the agency performed variably.

Early warning: UNHCR did not anticipate the size and speed of the exodus, nor could it reasonably be expected to have done so. However, preoccupation with IDPs inside Kosovo distracted attention from preparing for the unlikely, but possible, worst-case scenarios of refugee outflows.

Preparedness and initial response: The agency did not fully meet its own standards for providing immediate assistance. The standard current at the time of the emergency called for non-food relief items for 250,000 persons to be immediately available, and for field deployment of emergency response teams (ERT) within 72 hours. However, reserve stocks of some key items were low and the decision to dispatch the ERT was not taken soon enough. The reasons were largely due to management factors under the agency's control. The agency had insufficient high-level staff to address critical diplomatic challenges that arose simultaneously in several places in the initial phase of the emergency.

Emergency management: Staff deployment was generally slow, critical mid-level management for field operations was lacking, and some key field positions were not staffed. Junior or inexperienced staff were at times placed in overly demanding positions. At Headquarters, the unique decision-making structure developed for the former Yugoslavia had responsibility for the Kosovo crisis, but was not well suited to manage a large and complex emergency operation.

Overall co-ordination: Weaknesses in staff deployment reduced the effectiveness of UNHCR's co-ordination role. At the same time, the dominance of bilateralism and the presence of numerous actors made system-wide co-ordination extraordinarily difficult. While not assessing the consequences on the overall effectiveness of the response, the evaluation noted wide variations in standards (particularly in shelter), incomplete coverage (particularly regarding the host family refugees), and a tendency for the relief process to be supply-driven and dominated by a competitive concern for visibility.
Registration: The pressure placed on UNHCR to register the refugees stemmed from concerns that differ from those in normal operations: it focused on family tracing and issues related to denial of nationality that could lead to statelessness, rather than on facilitating the provision of assistance. This led to unrealistic demands from donors. A basic UNHCR registration was successfully completed in FYR Macedonia but was slow and incomplete in Albania. The shortcomings were partly attributable to management weaknesses, but UNHCR could not reasonably have been expected to complete a full registration in the 11 weeks the emergency lasted, particularly as most refugees were still mobile and widely dispersed in host families.

Security: Some donors appeared to have unreasonable expectations that UNHCR was solely responsible for camp security. Despite accepted refugee norms, host states and donors situated camps too close to the border and the war zone. Security within camps rested on unclear lines of responsibility and was attained through ad hoc arrangements.

Protection: Effective protection depends, in the first instance, on the host states' assuming their international responsibilities. FYR Macedonia's unwillingness to grant unconditional asylum placed UNHCR in a position where it was criticized in relation to two conflicting criteria. Some donors criticized the agency for not being sufficiently sensitive to the destabilization concerns of FYR Macedonia, and for putting too much pressure on the government to open the border unconditionally. Some human rights groups criticized the agency for not putting enough pressure on the Skopje government.

The evaluation report recognizes that UNHCR was placed in a difficult situation. Faced with contradictory demands, and armed chiefly with the power of international refugee law and creative diplomacy, the agency had limited ability to break the impasse. Recognizing that burden-sharing schemes are likely to be rare, the agency emphasized the principle of unconditional first asylum, as repeatedly confirmed by its Executive Committee. On the other hand, the report finds that UNHCR should have given more attention earlier to the probability that this kind of situation would arise, and should have been prepared to develop policy more creatively. Instead, it was left to the donors to unblock the border and set the pace of innovation.

The evaluation assessed the two policy innovations - HEP and HTP.

HEP (humanitarian evacuation programme) transferred refugees out of the region in an operation of unprecedented speed and scale. By alleviating the burden on a vulnerable host state, the operation enabled other refugees to enter FYR Macedonia, thereby enhancing overall protection. On the other hand, the implementation was marred by inconsistency on the part of states and its opportunistic use by refugees. HEP also fundamentally undermined the alternative of transfers in the region (HTP). HEP is likely to remain rare in view of the limited public support for receiving refugees from more distant regions, and the lack of interest of Western states in promoting such programmes unless they themselves are directly involved in the conflict.

HTP (humanitarian transfers programme) was feasible in that Albania accepted refugees, and UNHCR's leadership as well as key donors encouraged the programme. However, it attracted few refugees and did not contribute significantly to enhancing protection during the emergency.
Part of the reason was that UNHCR's standards varied from explicit (i.e. fully voluntary) to implicit consent (or absence of reasonable objections). International law is not completely clear on this point.

3. Analysis of UNHCR's role

As a result of the intense international interest in the Kosovo refugee crisis, many factors affecting UNHCR's performance were not under its own control. However, the agency was in some respects weaker than it needed to be by not optimally utilizing the resources which it did control, or could easily acquire. This applies particularly to management practices and staffing patterns, possibly also to diplomacy in the field during the initial phase. These weaknesses fuelled criticism over agency failures, further encouraging bilateralism and assertive behaviour of other organizations.

The constraints on UNHCR operations were both external and internal:

- extensive bilateralism
- significant blurring of humanitarian and military-political missions
- powerful role and independent agenda of NATO in the humanitarian sector
- reluctant governmental hosts or partners in the frontline states
- complex institutional rivalries among major actors
- high visibility and saliency of the emergency
- small in-house surge capacity of staff and other resources for emergencies
- inappropriate decision-making structure for the conflict area
- cumbersome decision-making structure for managing the emergency
- limited financial and human resources compared with other actors
- undigested, recent organizational restructuring and previous down-sizing
- underestimation of the special requirements of a high-profile emergency

External constraints are most graphically illustrated by an episode on 31 March, when the aircraft that was supposed to carry UNHCR's first emergency response team to Albania did not receive flight clearance from NATO due to crowded air space.

For UNHCR, NATO's humanitarian engagement was a mixed blessing. It added significant resources to deal with the emergency, but also inserted competing priorities and, especially in Albania, took a form that blurred the line between military and humanitarian missions. For NATO, as a party to the war, it was important to demonstrate its commitment to alleviate the humanitarian crisis that followed. NATO initiated humanitarian support operations in many ways, including logistics and camp building, and deployed a special NATO force to Albania (AFOR) whose only formal mission was humanitarian.

The unusual concern of states to have a visible field presence through national NGOs or state agencies (military or civilian) was in UNHCR's perspective also a double-edged sword. It brought enormous resources to the emergency, but relatively little of it was channelled through the agency, and consultation with UNHCR varied considerably. Uneven consultation combined with a large number of actors - about 250 NGOs operated in Albania and FYR Macedonia at the peak of the emergency - made co-ordination difficult. Only about 20 per cent of the NGOs were UNHCR implementing partners.

The pronounced bilateralism seems not to have been primarily a response to UNHCR's performance, but rather reflected the independent interests of the states involved. The refugee crisis erupted close to western Europe, where the previous wave of Bosnian refugees and recent asylum seekers from Kosovo had made governments weary of receiving more. Fearing that the new exodus would spill over into western Europe, EU members took rapid action to contain the flow within the region.
There was large-scale assistance to refugees, aid to Albania and FYR Macedonia, rapid construction of refugee camps in both countries, and an early UK proposal to create a "security zone" on the border between FYR Macedonia and Kosovo. In theory, these concerns were not incompatible with multilateralism, had funds been channelled through UNHCR and had the agency been properly consulted. In practice, high stakes in the outcome made states inclined towards independent action. Moreover, the high visibility of the emergency in west European countries - accentuated by the refugee trains that recalled the more ignominious parts of west European history - created strong incentives to "show the flag" on the humanitarian front. Charges from critics that NATO air strikes had inadvertently triggered the outflow had the same effect. The competitive logic became so strong that the idea of a "national" refugee camp was discussed even by committed multilateralists such as Norway and Canada.

Bilateralism in terms of funding was most marked in the European Union. The top six EU contributors allocated $279 million in public humanitarian assistance to the emergency (excluding military expenditures); of this, UNHCR received only $9.8 million directly, or 3.5 per cent.

As a high-visibility event for Western states, the crisis attracted an unusual amount of relief resources (including "luxury camps") and invited special asylum treatment (evacuation to Western states). In part, this represented an acknowledgement by contributing states that their role in the Kosovo conflict entailed a special obligation to assist the refugees. This is quite legitimate in the perspective of political morality. UNHCR, however, is institutionally committed to universal standards of refugee protection and to that extent disinclined to support differential treatment of refugees. The result was that UNHCR and the donors were out of step on some key issues.

The most important difference in perspective concerned the first asylum issue in FYR Macedonia. UNHCR vigorously defended unconditional first asylum, as indeed it might be expected to under the norms enunciated by its Executive Committee. The USA and the UK were more attuned to the destabilization concerns of FYR Macedonia, and worried that the refugee presence would make the government withdraw its support for NATO's military campaign. This made the USA initiate "burden-sharing" schemes in which onward passage to third countries was offered as an incentive for FYR Macedonia to admit refugees. Other countries, including Canada and the Nordics, pushed for evacuation on general humanitarian grounds. At times, UNHCR was faced with the unusual situation of some donors competing to take in refugees, and was criticized for not adjusting quickly enough to their demands.

UNHCR had problematic relations with the other frontline state as well. Albania provided unconditional asylum, but preferred NATO, governments and OSCE as channels of co-operation.

The Kosovo emergency came at a difficult time for UNHCR. The agency was experiencing the cumulative effects of three to four years of steady budget decline, including an unusual 1997-98 reduction in General Programme funding that was read as an austerity warning. It had just been through a round of staff cuts in 1997-98 when it was announced that the 1999 budget would be reduced from $900 million to around $800 million.
The reduction was partly a correction to the high budget levels associated with the Bosnia operation in the middle of the decade, but it affected the agency's ability to rapidly mobilize resources for the crisis. The effects of shrinking margins were most evident in the unwillingness of managers to release staff for the Kosovo operation, leading to delays in staffing. Competition for resources among the regional bureaux of the agency - framed by the recurring question of the equity of the disproportionate use of resources for refugees in Europe as compared with Africa - further sharpened internal negotiations over staff allocations. The organizational restructuring in early 1999 probably reduced rather than enhanced the emergency response capacity of the agency; for one thing, the changes were undigested.

The crisis placed heavy demands on UNHCR's diplomatic skills as well. Yet the agency has a thin leadership structure at the top, and the High Commissioner's Special Envoy seemed impossibly overtasked. The decision-making unit responsible for Kosovo was a unique structure in UNHCR. A post-Dayton, down-sized version of the Yugoslavia operation, it was not anchored in a Bureau and lacked associated management support. The operation reported directly to the High Commissioner through the Special Envoy, yet the High Commissioner was dealing with policy issues far above the din of operations.

More generally, it seems that UNHCR responded to the Kosovo refugee crisis as if it were a "normal" emergency. Standard routines for a smaller or slower emergency were followed (although not always attained). Even within the existing framework for emergency preparedness (200,000-250,000 immediate case-load), the response was often too little, or too late. This might not have been noticed in a less visible and less "popular" emergency. By not sufficiently taking into account the extraordinary political nature of this emergency, UNHCR opened itself to criticism - some of it fuelled by mixed motives in a competitive and intensely politicized humanitarian field.

UNHCR seemed to expect that its mandate and traditional lead agency role in refugee crises would automatically assure it a leadership position in co-ordination. The experience in the former Yugoslavia, particularly in Bosnia and Herzegovina, where conditions had favoured this position, possibly reinforced the expectation. The humanitarian sphere in the Kosovo emergency, however, was more intensely competitive and UNHCR's leadership by no means assured.

4. Consequences for UNHCR

Much of the criticism of UNHCR's performance during the emergency concerns its assistance and co-ordination functions. This seems ironic insofar as these shortcomings did not have grave consequences for the welfare of the refugees; indeed, they were relatively minor in relation to the overall relief response. There may be more consequences for UNHCR itself. Areas of demonstrated weakness and inability to rapidly meet its own standards of response affected the credibility of the agency. Since the shortcomings occurred in a crisis of high visibility to the Western world, their significance was magnified. The Kosovo emergency became a defining event in terms of who was there (particularly at the early stage) and how they had performed.

The Kosovo case also brought out some fundamental questions of policy facing UNHCR.
Since the evaluation is assessing both operations and policy, the broader policy implications arising from the agency's response to the Kosovo emergency will briefly be examined below.

5. Implications for policy

The most obvious issue concerning assistance is the size of the emergency for which UNHCR should prepare. The agency has an in-house dedicated capacity for emergency preparedness and response of nine persons (in EPRS), reserves of basic relief items supposed to meet the immediate needs of 200,000-250,000 persons, and emergency response teams drawn from a roster of 30 staff members who are recalled from their current postings around the world for redeployment to an emergency. Even if used with utmost efficiency, this in-house capacity would have been totally inadequate in the Kosovo emergency without large-scale external support.

The Kosovo case suggests that UNHCR should not develop an in-house capacity to meet major material assistance requirements for emergencies of this kind. First, massive emergencies are historically rare - while three have occurred in the last decade, in a slightly longer historical perspective they are infrequent and it is unclear if recent occurrences constitute a trend. Second, states and organizations currently command significant standby capacity that can be rapidly mobilized for large emergencies. To build up a parallel capacity in UNHCR would be a sub-optimal use of resources. Third, to attain the needed level would entail a radical expansion of UNHCR's current capacity that seems politically unrealistic.

Rather, UNHCR should prepare for massive emergencies by strengthening its in-house capacity for strategic planning to mobilize external resources. Critical elements include reviewing and developing standby agreements and national service packages with governments and other organizations (civilian as well as military). Strategic planning includes "thinking-outside-the-box" by preparing for the possible occurrence of the rare but catastrophic event.

Plans should take into account assistance that supports the co-ordination function. This means prioritizing shared resources such as warehousing, transport and communications, which provide a bridge between the discrete assistance packages of other actors and facilitate the overall response. The need for such shared services also encourages independent actors to collaborate with and be co-ordinated by UNHCR. The availability of national responses will always be conditioned by political considerations and hence carry an element of unpredictability, yet they are essential to assist large-scale refugee flows.

The failure of "early warning" in the Kosovo case confirms the historic tendency of such systems to be unreliable or inadequate. Rather than develop its "early warning" capacity, UNHCR should strengthen its mechanisms to react rapidly.

The pressing protection problems on the Kosovo-FYR Macedonia border raised basic issues of first asylum in relation to the obligations and rights of states. In this case, a solution was developed which permitted the refugees to enter on condition that a certain number would be passed on to third countries, thereby lightening the burden on the first asylum state. "Burden-sharing" arrangements of this kind are historically rare. Only two clear cases have occurred in the last half-century (after the Second World War and after the Viet Nam War), and of these only the latter was premised on conditional first asylum.
The constellation of strategic and political interests that made evacuation programmes possible in this case is unlikely to recur frequently. It is equally self-evident that mass inflows can entail significant costs and risks for first asylum states, as was demonstrated in FYR Macedonia. There was, in this case, legitimate fear that the small, newly established and ethnically fragile state might disintegrate in conflict. The potential tragedy at the Blace border crossing dramatically juxtaposed the rights of refugees against the interests of state. Resolving such conflicts is the fundamental challenge of a viable protection policy and should motivate burden-sharing initiatives. This is not easy, as the inconclusive discussion on burden-sharing in Europe and elsewhere suggests. Nevertheless, UNHCR has a special responsibility to bring the discourse forward. The Kosovo case suggests that burden-sharing can be essential for small and vulnerable states that face mass inflows. UNHCR should take the initiative to re-examine the principles and dynamics of burden-sharing for such cases.

In the present decentralized, international humanitarian regime, co-ordination is an elusive goal. In the Kosovo case it was particularly difficult. Yet UNHCR's co-ordination performance varied significantly over time and place, depending on the willingness of the actors to be co-ordinated, relations with local or national authorities, resources, skills and appropriate deployment of UNHCR staff. This suggests that within the constraints of consensual co-ordination, the shortcomings were not structurally related to the lead agency model, but due to variations in the policy environment or the staff capacity of UNHCR. The case demonstrated, however, that the exact role of the lead agency is poorly defined, leading to variable expectations and interpretations. In a massive emergency, the model demanded an additional human resource capacity dedicated to co-ordination. The Kosovo case shows that massive emergencies demand a staff capacity that exceeds the present deployment capability of UNHCR. Surge mechanisms such as secondment from another agency (OCHA) did not function effectively in this case and should be examined more closely.

The absence of significant contractual or funding obligations with other humanitarian actors required UNHCR to co-ordinate by consensus. Funding of course provides a very different measure of control and moves co-ordination from a consensual to an authoritative model. UNHCR typically funds only a small percentage of NGOs in a massive emergency (in this case some 20 per cent of the NGOs in Albania and FYR Macedonia). Yet the case demonstrates that funding is not a necessary precondition for co-ordination. Credible leadership by itself can also have the desired effect. Hence, channelling funds through UNHCR should not be considered an absolute pre-condition for co-ordination.

Relations with the military

If UNHCR is to lead effectively in refugee emergencies, it has to be generally accepted by a wide range of humanitarian actors. UNHCR's relations with the military are critically important in this respect. Although UNHCR's status as a non-political humanitarian agency would seem to preclude close co-operation with a military that is a party to the conflict, in the Kosovo case it was widely accepted as necessary to save lives. Co-operation has been similarly accepted when military forces were involved in UN-authorized peace enforcement operations.
This suggests that contemporary norms validate operational co-operation between UNHCR and a military that is a belligerent party only under two conditions:

- when the military is engaged in a UN enforcement action under the Charter and authorized by the UN, or
- there is no alternative way to avoid substantial suffering and loss of life

Limiting relations with the military h
What's a Nymph?

So the most obvious metaphor is the Fairy metaphor. Nymphs are like fairies in that they were unpredictable, a little scary, and often showed up in folktales. But, seeing as they are from a different culture, they are also entirely different. For one thing, the nymphs are all women. This is definitely significant in the way they were scary. One of the things you might (and should) notice is the common theme of women's sexuality = scary and women's chastity = good. Noticing that, it should not shock you that these somewhat scary spirits are at their scariest to mortal men when sex enters the picture (Hylas is a lovely example). It should also be noted without surprise that these nymphs are spirits often personifying nature. (See below for the groups of nymphs.) As Sue Blundell so awesomely points out, women are often associated with nature and wildness as things beyond the control of "civilization" (need we point out that this civilization is extraordinarily patriarchal?). It is usually said that this is because of the whole XX chromosomal ability to bear children. I argue that this is merely an excuse and the real reason is that you're dealing with a patriarchal society, and "wildness" and "nature" and such things are seen as Outside of Culture, just as women are.

However, they also show up from time to time as the wives of heroes and (often spurned) lovers of gods. Nymphs, like all female deities, are beautiful. When they aren't inhabiting some specific part of nature, they are often the attendants of more important deities. I like nymphs. In many ways they seem far more human to me than any of the human women. I hope you enjoy them as much as I do.

Nymphs by Association - the well-known (and more obscure) groups of Nymphs:

Amnisiades
Nymphs of the river Amnisus in Crete and devotees of the goddess Artemis, they cared for the goddess's sacred deer.

Anigrides
Nymphs of the Anigrus River in Elis, they were healer nymphs of some sort. They had this grotto at the river, and people with skin diseases came and bathed there and gave the nymphs gifts in the hopes that they'd get better.

- The Dodonides, or Dodonaean Nymphs, were the nymphs who brought up Zeus (or possibly a whole bunch of others). The oracle of Zeus at Dodona was the most ancient in Greece (although it was probably attributed originally to someone else).

The Dryades were the Nymphs of the Forest, or wood nymphs. Dryades were immortal, unlike other types, like the Hamadryades, who lived in oak trees and would die when the tree they lived in died. They were the hunting companions of Artemis. The painting at left is by Richard Hescox.

Epimeliades were the Protectors of Sheep. So I suppose you could call them Sheep Nymphs.

Haliai
Quite simply, sea nymphs. They show up under this name in a random play by Sophocles and in Callimachus' Hymn to Artemis (read it!), but it doesn't seem to be as commonly used as the more general "nymph" or the more specific "Nereid" or "Oceanid". Haliai comes from the word for "sea" and also means "salt".

- A Hamadryad was a nymph of an oak tree. She was very connected to the tree in which she lived, and very powerful if angered. If her tree was hurt, then the hamadryad was hurt. If her tree was cut down, then the hamadryad also died. It was a hamadryad who began the whole story of the Golden Fleece by punishing the son of a man who'd thrown a knife into her tree.
- The Heliades (also called the Heliadai) were the daughters (and sons) of Helios, the Sun, by both Clymene and Rhodos (each had their own set of kids, that is). The most famous of the Heliades were the sisters of Phaethon (son of Clymene). Well, when Phaethon died on that fateful chariot ride (he was too weak to control the divine horses, almost set the entire earth on fire, and got zapped by Zeus with the typical lightning bolt from the blue), his sisters Aegle, Aetheria, Dioxippe, and Merope (also daughters of Clymene) wept uncontrollably for four months. After that the gods took pity on them and turned the maidens into poplar trees and changed their tears to amber. That set of Heliades ended up also receiving the name Phaethontiades. The other set of Heliades, the children of Rhodos, were named Cercaphus, Actis, Macareus, Tenages, Triopas, Candalus, Ochimus, and Electryone (the only daughter, and, according to Mr. Robert Graves, maybe identified with the Goddess of the Moon - frankly, I haven't seen any evidence for that, but you know Graves). So. In summary, Heliades = not that interesting chicks who probably could have used some good therapy more than a group of trigger-happy deities, but that's ancient Greece for you!
- The Hesperides were the nymphs who guarded the Tree of the Golden Apples, as is shown in the painting at right by Edward Burne-Jones. Their father was Hesperos, or the God of the Evening Star.
- The Hyades were nymphs of debated parentage, the sisters of Hyas, who got killed by a boar. They mourned for him so much that the Gods hung them as stars in the Sky. The root of their name (and Hyas' name) is "rain", and when their constellation rose with the sun it meant stormy, rainy weather. Different authors gave them different names, but the compiled list reported by Robert Graves is: Ambrosia, Eudora, Aesyle, Eidothea, Althea, Adraste, Philia, Coronis, Cleis, Phaesyle, Cleia, Phaeo, Pedile, Polyxo, Phyto, Thyene, Bacche, Macris, Nysa, Erato, Brome, and Dione. So almost all of those names should sound familiar, but probably for different reasons.
- Lamusidean Nymphs were the daughters of Lamus. They were the nurses of Dionysus, but because of Hera's deep jealousy they were driven mad. They would have chopped the baby Dionysus up, had not Hermes appeared on the scene just in time to save the baby God.
- These dryades lived in fruit trees - were fruit trees ... ah, this is getting too confusing. Moving on.
- These dryades were the Nymphs of Ash Trees. They were the daughters of Gaia and of Uranus' blood. For some reason they are special, but I have not yet figured out why.
- The Naiades were the nymphs of freshwater streams, rivers and lakes, but were not limited to these water courses. Many Naiades could be found prancing around with Artemis, who chose 20 Naiades from Amnisus for companions. They were the daughters of rivergods. They had extremely long lifetimes, but they were not considered immortal, and were believed to have sat in on the Gods' discussions on Olympus. There were 5 types of Naiades (listed below). On the left is a detail from John W. Waterhouse's painting of Hylas and the Nymphs. That particular story is important to the Greeks: Hylas, the beautiful beloved (yes, in the sexual way) of Heracles, was sent to go get water on the island of Mysia, and the naiads there, totally taken in by his beauty, carried him off. Every year, the priests marched to a neighboring mountain and called Hylas's name three times.
Someone will have to tell me if they still do this. The 5 types of Naiades were:

- Pegaiai, the Nymphs of Springs
- Krinaia, the Nymphs of Fountains
- Potameides, the Nymphs of Rivers and Streams
- Limnades or Limnatides, the Nymphs of Lakes
- Eleionomai, the Nymphs of Marshes

- The Napaea were the Nymphs of the Valley. In Greek nape means dell.
- The Nereides were the 50 daughters of Nereus (the Sea) and Doris. They were the Nymphs of the Sea, and on the right is how one artist imagined them. One of them was Amphitrite. The stories say that it was when they went and performed a dance on the island of Naxos that Poseidon decided to claim Amphitrite as his bride. There is a list of the Nereides being formed here.
- Alternatively called the Nysiades, these were nymphs that lived on Mount Nysa and raised the young Dionysus after he was stashed there by Zeus after his thigh-birth. Where exactly Mount Nysa is seems unclear, but at least we have the names of the nymphs: Cisseis, Nysa, Erato, Eripha, Brome, and Polyhymno. Really I don't have any more about them, but I'm willing to bet they were pretty funky given what a freak (in a good way. Sometimes.) Dionysus grew up to be.
- There were 3000 Oceanides, and they were all the Nymphs of the Ocean. Their mother was the Titaness Tethys and their father the Titan Oceanus.
- Oreades were the Nymphs of the Mountains. A particularly famous one is Echo.
- The Pleiades: there were seven Pleiades, and you can find them when you look in the sky (they are stars). They were the daughters of Pleione (an Oceanid) and Atlas. Pleione means "sailing queen" and so her daughters would be the "sailing ones", but the root could also be peleiades, which means a flock of doves and fits perfectly. They waited on Artemis with their half-sisters the Hyades, and with them were called the Atalantides, Dodonides, and Nysiades. They were pursued by Orion for seven years, and got away only when Zeus granted their prayers and changed them into doves. The picture on the right is an engraving by F. E. Fillebrown. If you want to know more about the individual Pleiades, there is more on some of them below.
  - Maia, "Mother" "Nurse"
  - Alcyone, "Queen who wards off evil [storms]"
  - Electra, "Amber" "Shining" "Bright"
  - Sterope or Asterope, "Lightning" "Twinkling" "Sun-face"
  - Merope, "Eloquent" "Mortal" "Bee-eater"
- They were Bee-Nymphs who used honey to make prophecies. The three of them raised Apollo, the god of the Oracle of Delphi. Their name is shared with the little stones that are thrown to tell the future.

Nymphs by Name - the nymphs interesting enough to say something about:

Acantha was another beautiful nymph with the misfortune to be loved by someone she didn't love back. Apollo was the culprit in this case. He "loved" the nymph so much he tried to rape her. The nymph fought back, scratching the Sun God's face. As a result, the little nymph was transformed into the acanthus tree, a "sun-loving" but thorny plant.

She was the nurse, in Crete, who took care of Zeus and hung his cradle from a tree so that he wasn't in the sea, the earth, or Heaven.

This was the daughter of Asopus, a river god, and Metope (she had TONS of siblings). She was abducted by Zeus, as every nymph eventually is, it seems, and carried off to the island of Attica (which was renamed after her for a period). There she had a son, Aeacus, and he became the monarch of the island. To make love to her, Zeus changed into a flame of fire. Later, she became a lover of Actor's and had three children by him.

Agamede was the mother of Actor by Poseidon.
She was one of the first to use herbs in healing.

Amaltheia was a nymph who nursed baby Zeus with the milk from a goat. Or, perhaps she was the goat. Both had the same name. Either way, she was responsible for the cornucopia, or horn of plenty. Zeus put the goat and the horn in the sky as constellations.

Arethusa was one of the many water nymphs who attended Artemis. As one of the Virgin Goddess's followers, she had no interest in men. So when the river-god Alpheus pursued her, Artemis helped her out by turning her into a fountain. Alpheus, however, would not be denied, and changed himself to flow underground so that he could touch her.

In a Hymn to Artemis, Britomartis is described as the nymph Artemis loves best. She is the fawn-slaying, sharpshooting nymph of Gortyn here, but originally she was a Cretan goddess who was adopted by the Artemis cult. Anyway, the story goes that Minos, madly in love with her, chased her all over Crete. She hid in oaks and marshes, but couldn't shake him. Nine months he chased her without relenting, until at last he cornered her on a cliff. She leapt off, was caught by a fishing net, and was called, from then on, the Lady of the Net.

There are three important parts to this story. The first is that Callisto was very beautiful. This is lucky, considering her name means "most beautiful". The second is that, at some point, she was turned into a bear and/or killed. The third is that it had to do with Zeus and Artemis. See, Callisto was one of Artemis' nymphs (and thus not supposed to be getting down with any men), but Zeus got wise and came to her as Artemis and seduced her. (Hooray for subliminalized lesbianism!) One story says that Zeus immediately turned her into a bear to hide her from Hera, which of course didn't work, since Hera promptly got Artemis to "accidentally" shoot her. Another story says that Zeus just busted and peaced, and that Artemis found out when they were bathing at a spring and she noticed that Callisto was chubbier than she ought to be (she was pregnant - my opinion on this is that they were all bleeding at the same time and she noticed that Callisto was late, but that's just me), and Artemis then changed her into a bear and hunted her down. Other stories say that Hera hunted her down much later, and the death involved Callisto's own son from that union, Arcas. Together, she and her son are the Big Bear and the Little Bear, respectively.

I started off liking her, but now I just think that she's a bore. She was the daughter of Atlas. She is in the story of Odysseus. She takes a fancy to him, and keeps him prisoner for seven years, during which time they sleep together, although Odysseus remains loyal to Penelope (which I don't understand), and eventually Zeus orders her to set him free. She is also in the Goddesses section, because she was a goddess as well as a nymph.

This poor nymph got turned into a turtle because she refused to attend the wedding of Hera and Zeus. The gods condemned her to eternal silence because of her insulting words.

She was the daughter of Asopos, the River God whose river ran through the Peloponnesus. She was something like a water nymph, then, and there was a town named after her.

Clymene was but a simple Oceanid, but for a simple Oceanid, she had a lot going on. She was the wife of the Titan Iapetus, and by him bore Atlas, Epimetheus, Prometheus, and Menoetius. In another version, she was the wife of Helios and the mother of Phaethon (who is generally accepted as Apollo's offspring, but oh well).
In yet another version she was the mother of Atalanta. Some other sources say that she was the granddaughter of King Minos of Crete and mother of Palamedes. Her name means Famous Might, which is an interesting name for an Oceanid. She's interesting.

Clytie was an Oceanid, a daughter of Ocean and of Tethys. She was a victim of love. She and Helios dated for a while, then he dropped her for some other chick named Leucothoe. After that, she immediately went and told the King of Babylon, Leucothoe's pops, about the affair (non-virgins = a waste of the money it costs to raise a woman), and Pops immediately buried the chick. Now I won't say that Clytie intentionally killed the girl, but I'm positive she knew that her dad wouldn't be thrilled, and thus I'd say good move by Helios to get out of the whole relationship. Clytie, on the other hand, was a pathetic example of an early Greek stalker, and kept on keepin' on after ol' Helios. In her defense, it's not like she could just decide she didn't want to see him anymore, him being the Sun and everything. Anyway, she eventually stared so long without surcease that she transformed into that sun-loving flower the "Heliotrope" (the name means "turned sunward"). That transformation is what got her into Ovid's Metamorphoses (a book you should certainly read if you dig this kind of myth).

Daphne was the unfortunate Naiad pursued by Apollo. Apollo wouldn't leave her alone, despite her obvious aversion to him. She ran to her father, a river god, and begged for help. Her father did the only thing he could do and transformed her. Just as Apollo would have caught her, Daphne grew bark and transformed into a laurel tree. But the God still wouldn't let her be, and plucked some of her branches and made them into a wreath, saying she would be his sacred tree. Poor kid.

Echo is probably the most famous of all the nymphs. Her name and her voice live on to this day. An Oread from Mt. Kithairon, she used to run interference when Zeus would come to "play" with the other nymphs. That meant that when Hera came looking for her wayward hubby, Echo would keep her talking until everyone could make a clean get-away. Hera cursed her to lose any ability to speak for herself and to only be able to repeat the words of others. Then she fell in love with Narcissus. If you are interested in the story, check out the longer version in the Myth Pages. And while she loved Narcissus, who paid her no mind, she paid no mind to the loving attentions of the goatish nature god Pan.

Euboea was the daughter of the Asopus (the river that ran through the Peloponnesus) and gave her name to an island. She also slept with Poseidon and had a baby by him named Tychius, about whom no one knows anything.

Galatea was a Nereid loved by the Cyclops Polyphemus (you know, the stupid one from the Odyssey). This could be bad enough on its own, but matters were complicated because she loved a human named Acis. Acis was murdered by Polyphemus, and then one of three things happened. Either she threw herself into the ocean and drowned (odd, being a Nereid), wept so much she was turned into an always-flowing fountain, or accepted Polyphemus and had a child named Galates by him.

Lethe was another Naiad, but her river was in the Underworld. The Lethe was the river of Forgetfulness and Oblivion. Lethe was a daughter of Eris. The rockin' thing about Lethe is that she stayed involved. A lot of these ladies did their own thing, but Lethe worked for a living.
Well, now, I'm not sure that "living" is the right word, but anyway, the water from her river was given to those who died so that they might be freed from the lives they had lived before, that they might not miserably remember the earth and the pleasures of mortality.

Maia, who was sort of a Goddess, has more written about her in the Minor Goddesses section. But, to make a long story short, she slept with Zeus and bore the Messenger God Hermes. She was also one of the Pleiades.

Melissa was one of the nymphic nurses of Zeus, sister to Amaltheia, but rather than feeding the baby milk, Melissa, appropriately for her name (which means honey bee), fed him honey. Or, alternatively, the bees brought honey straight to his mouth. Because of her, Melissai became the name of all the nymphs who cared for the patriarch god as a baby.

Nephele was a nymph who was the first wife of Athamas, King of Orchomenus. He dropped her for this human chica, Ino. She was very bitter and complained to Hera, after which this whole drama ensued. She was the mother of Phrixus and Helle, whom she had to protect from Ino, their stepmother.

Called Oenone in Latin, this Naiad was the daughter of the rivergod Kebren. She was a Phrygian nymph who lived during the Trojan War. Now, there are two stories. She was abducted by Paris (yes, you DEFINITELY should know who Paris is) and became his first wife. Later, when he died, she hung herself. Apollodorus, however, says she married Alexandros and bore his son, Corythus. She had learned to prophesy from Rhea, and tried to convince her husband that he would be mortally wounded in Troy, but only she would be able to heal him. He ignored her, and was indeed mortally wounded. Oinone was pissed, and had no interest in helping him. Upon her summoning she refused to heal him, but later changed her mind and hurried to Troy. By the time she got there it was too late, and so she threw herself on his funeral pyre. Either way, she kills herself.

Orphne was a nymph who lived in the Underworld. But though she chilled with Persephone, her hubby (Acheron, the river of the Dead) worked above ground (sort of).

Pallas, according to Apollodorus, was the childhood playmate of Athena. She was a Naiad, the daughter of the rivergod Triton, and both she and Athena were raised to love to fight. One time when they were dueling, Zeus mischievously held up the aegis. Pallas looked away for only a moment, but that was enough, and she fell and died. Athena was distraught, and made a wooden statue of her friend, placing the aegis on its breast.

Pitys was a nymph who became a pine tree. She either loved Pan or was loved by him. If she loved him, then she chose Pan over Boreas, the North Wind, and Boreas in a jealous rage threw her off a cliff. Gaia took pity on her and changed her into a pine tree, which weeps when the wind blows through it. Alternatively, she ran away from Pan and in running away was so changed (like Syrinx and Daphne). Either way, the pine was sacred to Pan and often showed up in his costume.

Rhodus was a nymph who was a daughter of Poseidon. I believe she was also called Rhode. She was the mother of the Heliadae with Helios. The island she lived on was named after her: Rhodes. Eventually the island became the home of the Colossus of Rhodes and a certain very cool person named Ramsi.

Rhodope was a nymph from Thrace whose intelligence-impaired-if-enthusiastic husband compared himself and Rhodope to Zeus and Hera.
I think this stupidity must have been based on the fact that he was the son of the King of Thrace (the husband's name was Haemus). Anyway, the gods were not having that, so they changed the couple into mountain ranges with the same names: Rhodope into the mountain range now called Despoto, and her husband into the Balkan mountain range. There was another Rhodope who was turned into the River Nymph Styx.

Salamis was the daughter of the river Asopus and was pursued by the Sea God Poseidon. In honor of her acceptance of his suit, he named the island that he took her to after her (later a huge naval battle took place there, where an awesome real Greek queen named Artemisia played a cool role). No other information is available about this chick, not even whether or not she stuck around the island or went back to her river.

This is a rape story, but with the common gender roles reversed. In this case, the Naiad Salmacis becomes enchanted with the beautiful youth Hermaphroditos, and against his will, the two are made one. Now, in English, obviously, you know that "hermaphrodite" refers to an intersexed person - someone with both male and female physical traits - but that comes from this story. See, originally, Hermaphroditos was just the mix of Hermes, the boy's father, and Aphrodite, his mother. He was perfectly normal, except for perhaps being super-hot. So he's going through the woods, thinking of anything except sex, and Salmacis sees him and is totally smitten. Smitten like a kitten with a mitten. She does everything to get him to give her a little love, but he's like "Back off, nutso!" Well, she does, but only to hide in the bushes and watch him step into her stream. And bam! She runs out and grabs him. He's fighting her off, but it's no good, she's too much woman for him. She calls out to the gods to keep them together forever, and her wish is granted very literally. Their limbs meld, our boy trying to escape the whole time. By the end, he is "weak and soft" and has, you know, woman-junk. He curses the water to make any man who steps in it as effeminate as he. Salmacis has been absorbed. There is much to be learned here about the dangers of a strong and sexual ancient Greek woman. See more on that above.

Styx was a Naiad. Her name meant, literally, Hateful. This may have been because her river was the one that all of the dead must pass (according to Virgil, who was not Greek). Her river was the most holy and sacred, and to swear on it was the most holy oath a God could make, an honor given by Zeus because she helped the Olympian gods defeat the Titans. If an oath was about to be sworn, Iris was sent for some water to witness it, and if the oath was broken, the god could neither breathe, nor eat, nor drink for a full year - though the water was apparently fatal to mortals. Styx was, according to Hesiod at least, even older than the gods, being the oldest child of Oceanus and Tethys (alternatively a daughter of Erebus (Darkness) and Nyx). Dark she was, and she is associated with powerful stories. She was a playmate (or mother, according to some) of Persephone, and the mother of Zelus (Zeal), Nike (Victory), Kratos (Power), and Bia (Force). The Greeks said that Styx had her river outside of the Underworld, but her water was still deadly, dangerous, and powerful. Poisonous to men and cattle, it broke through even iron (imagine the water in Tomb Raider), but Thetis knew how to use the water correctly, and it was her careful dipping of Achilles in the dark river that made him all but invincible.
The water of Styx also had a cameo in the story of Psyche, found in the Myth Pages. Another story of Styx is actually the story of a young woman named Rhodope who dedicated herself to Artemis, which pissed off the sexually dangerous Aphrodite, who caused her to fall in love with a young hunter with whom she did the horizontal mambo in a cave in the mountains. Artemis turned her into a spring called Styx that had the miraculous quality of determining virginity. (If they stood in the water, which would normally be up to their knees, and it came up to their neck, then they weren't virgins. I have two thoughts on this: 1) that's way better than saying you're a witch if you don't drown, and 2) it would suck to be super short if you lived near that spring.) The painting on the right is called Crossing the Styx.

Syrinx is the nymph who, pursued by Pan and trying to escape him, begged the gods to save her. They took pity and turned her into reeds. Pan, following Apollo's lead, cut some of the reeds in different sizes and made a set of pipes, called the pan pipes from then on.

She was one of Apollo's lovers and the mother of one of the kings of Sicyon. That's it! The rest (if there was ever more) has been lost to history.

Thetis was the chief Nereid for a long time, and it was she who found the baby Hephaestos and nursed him back to health after he was thrown from Olympus (if you don't get it, check out the Myth Pages). Zeus wanted her for his lover, but she rejected him (good for her!). Then the Goddess Themis prophesied that she would bear a son mightier than his father. Hearing that, Zeus stopped being horny and started being scared, and immediately decreed that she could only marry a mortal. She did, and ended up becoming wife to Peleus and mother of Achilles. As his mother, she tried to make him invincible. There are two versions of what she did, and why she missed his heel. If you don't know them you should check out the Myth Pages.
Published: 21-08-2020

Barley, Malting and Malt – Part 1 of 4: Barley for Malting

During COVID-19 Lockdown, several basic Home Distilling (and Brewing) supplies – equipment and consumables – have been difficult to get hold of: Yeast, Nutrients, Enzymes and even Bubblers, SG Hydrometers and Alcoholmeters. One of the most problematic, however, was Malted Barley. Not just because stocks and supplies had run out, but because the plants producing it were shut down. Even under Level 3 Lockdown – when the plants could start operating again – it was not a situation where suddenly Malt was available again everywhere. Stocks had to be replenished, the commercial Brewers had to be serviced and supplied first, followed by the Craft Brewers, and only then the Home and Hobby Brewing Shops.

As a result, many Home Distillers (and Brewers) resorted to Malting their own grains and using that, but they did so armed with the most basic of information gleaned from YouTube and Online sources – most of them severely lacking. This series of articles aims to address the shortcomings of information regarding Barley, Malt and Malting in three separate articles, focusing on Barley for Malting, the Barley Malting Process, and the Attributes of Malted Barley.

What is Malt?

Malt is Grain that has been Steeped, Germinated and Kilned according to certain procedures. Malted Grain differs from Raw Grain in several ways:

- Malt contains less moisture and therefore is more suitable for storage and grinding
- The Endosperm of Malted Grain has been modified during germination and is more pliable, in contrast to the hard Endosperm of the original Grain kernel
- Malted Grain has much higher enzymatic values than raw Grain
- Malted Grain has flavour and aroma that differ from raw Grain – due to the germination and Kilning process – and these components can be readily extracted during the starch conversion and fermentation processes

Why should a Distiller care about Barley, Malt and Malting?

Any of you who have done our W2 – Grain Based Spirits Course, or our C10 – Comprehensive Distilling Course, will know that the Grain types used, as well as the Malt and Enzyme Sources used in your Grain Fermentations, will have a massive impact on the final product we produce – be that a fermented product like a Beer, or a Distilled Product, like a Moonshine, Whisk(e)y or Vodka.

Even sticking to one specific Grain type – like Barley – the quality, yield and flavor can and will vary greatly from one cultivar or varietal to another, from one season to another, or depending on how it was Malted. Understanding these variations allows the Distiller (or Brewer) to make informed decisions and choose the Grain or Malt that would be suitable for the intended use, as well as ensure consistency in the production process and the end product quality.

Barley is the principal grain used in producing Malt, the basic raw material for brewing beer, and traditionally (in Scotland, Ireland and South Africa) the basis of Whisk(e)y. Other grains such as wheat, sorghum, maize and rye can also be Malted and will impart unique characteristics in terms of flavor, but they are not widely used as Malted grains.

Why is Barley preferred for Brewing and Traditional Whisk(e)y Distilling?

If we want to explore the real reasons for the original use of Barley, then this will turn into a History lesson about agriculture, climatic conditions, etc.
If we want to explore the reasons for the continued use of Barley, then it will be a discussion of Tradition, Marketing and Consumer Expectations. But there are some technical reasons – less important than the others, but still relevant.

- Barley is one of the hardiest of the cereal Grains. It can be Malted more easily than any other cereal Grain type.
- Malted Barley provides flavor, enzymes and essential nutrients for yeast metabolism, and in the case of beer, color as well.
- The Hydrolytic Enzymes developed in Malted Barley break down Endosperm Cell Walls, proteins and starches in both Barley and adjuncts (additional grains added into the Mash).
- The Barley Husk physically protects the kernel during Malting, and it provides the filter bed for Wort Filtration in the Mash Tun.

Now it is true that many (if not all) of these attributes are shared by other grain types as well, but one thing that cannot be denied is that the Enzyme concentration in Malted Barley is much higher than in other Malted Grains. As such, when using Malted Barley, we only need to Malt a percentage of our Grain Bill, instead of Malting the entire Grain Bill when using other Grains.

What does a Barley Kernel consist of?

A Barley Kernel consists of four main parts:

- The Outer Layers – the husk, pericarp and testa – surrounding the endosperm and protecting the mature kernel from Microbiological spoilage
- The Endosperm – the starch-bearing portion of the grain, and the aleurone, a layer two or three cells deep, which is an enzyme source
- The Embryo – the viable (Germ) portion of the Grain – which is high in protein and nucleic acids. It contains the primordial root and acrospire of the young Barley plant and initiates the growth cycle when hydrated in the field or during Steeping
- The Scutellum and Epithelium, which are additional sources of Hydrolytic Enzymes

A typical composition of Barley (obviously cultivar specific) would be:

| Component | Percentage of Weight |
| --- | --- |
| Moisture | 10 – 14% |
| Total Carbohydrates | 65 – 80% |
| Inorganic Matter | 2 – 4% |
| Fat | 1 – 2% |
| Other | 1 – 2% |

How does a Distiller determine which Barley Cultivars are suitable for Malt Production?

Some varieties of Barley are bred solely for the production of feed, and some are intended for the production of Malt. Feed Barley varieties are bred for maximum agricultural yield (Agronomic reasons), with little attention given to the criteria that are critical for producing high-quality Malt. In general, feed Cultivars or Varietals of Barley will produce Malt with higher protein levels and very poor enzyme yields and flavor characteristics.

The Varieties intended for Malt Production have been bred specifically for this purpose. They will Germinate well, withstand the Kilning process without excessive loss in enzymes, and provide a good and balanced combination of enzymes, carbohydrates and flavor that will give the Distiller yield, successful conversions, and a flavorful product.

How can we improve our Malting Barley Varietals?

New Varieties of Barley for Malting purposes are continuously being developed. It is therefore important for Brewers, and to a lesser extent Distillers, to stay abreast of these developments and see what is available. New Barley Varieties are developed and bred for characteristics such as yield, disease resistance, dormancy, modification potential (attributes that come through during Germination and Kilning), Husk Retention, Flavor and overall Malt quality.
Although it is impossible to have agreement or consensus on the make-up of the ideal Barley or Malt (due to different requirements from different producers), general guidelines for the key analytical characteristics of Barley and Malt are useful as targets for Breeders and Developers. The following table gives some of the guidelines for Commercial Malting Barley.

| Factors for Commercial Malt and Barley | Two-Row Barley | Six-Row Barley |
| --- | --- | --- |
| Plump Kernels | > 80% | > 70% |
| Thin Kernels | < 5% | < 5% |
| Germination (4 ml after 72-hour Germination) | > 96% | > 96% |
| Protein | 11.5 – 13.0% | 12.0 – 13.5% |
| Skinned and Broken Kernels | < 5% | < 5% |
| Total Protein | 11.3 – 12.8% | 11.3 – 13.3% |
| Remains on 2.78 mm Perforated Screen | > 60% | > 50% |
| Remains on 2.38 mm Perforated Screen | > 90% | > 85% |

Measures of Malt Modification:

| Measure | Two-Row Barley | Six-Row Barley |
| --- | --- | --- |
| Beta-Glucan (ppm) | < 115 | < 150 |
| Fine-Coarse Difference | < 1.5 | < 1.5 |
| S/T Ratio | 40 – 46% | 40 – 45% |
| Turbidity (NTU) | < 15 | < 15 |
| Viscosity (Absolute cP) | < 1.50 | < 1.50 |
| Soluble Protein | 4.9 – 5.5% | 5.3 – 5.9% |
| Extract (FG db) | > 81.0% | > 79.0% |
| Colour (°ASBC) | 1.6 – 2.1 | 1.8 – 2.5 |
| Diastatic Power (°ASBC) | 120 – 140 | 140 – 170 |
| Alpha-Amylase (DU) | 45 – 65 | 45 – 60 |

- NTU = Nephelometric Turbidity Units
- cP = Centipoise
- FG db = Fine Grind, Dry Base
- °ASBC = Degrees according to the American Society of Brewing Chemists
- DU = Dextrinizing Units

What is the difference between Two-Row and Six-Row Barley?

In the preceding table, as well as in many articles regarding Barley, you will find mention of Two-Row and Six-Row Barley. This naming convention refers to the number of Kernels in the head.

- In a Six-Row variety, all six flowers surrounding the stem develop into seeds.
- In a Two-Row variety, only two of the flowers are fertile, so that only one seed forms on each side of the stem.

Because of the Kernel arrangement, Two-Row varieties generally have larger, more uniformly sized Kernels. Kernels of the Six-Row varieties are uneven in size because of the restricted space for Kernel development. The unevenness will be visually apparent, as four of the six Kernels per head will be thinner and have a twisted appearance near the distal end.

Varieties used for Malting can change rapidly, with the continual development of both Two- and Six-Row Varieties bred to offer advantages over the existing Varieties. However, even with such advances, certain varieties remain popular for long periods of time, and it is not unknown for certain Craft Distillers to stick with a certain Variety permanently in order to ensure consistency of production and flavor. Once developed, a new Variety is normally phased in slowly, in gradually increasing percentages, until the older Varieties are replaced. The discontinuation of a variety is normally due to lack of disease resistance, poor yield, imperfect traits (flavor, color, enzyme concentration) or just competition from new and "better" Varieties.
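As a rough illustration of how such guideline values can be applied, the sketch below screens a hypothetical Two-Row lot against a few of the Barley factors from the table above (plus the 14% moisture rule discussed later in this article). The class, field and function names are our own illustrative assumptions, not an industry-standard schema, and a real Maltster's acceptance criteria will vary by crop year and supplier.

```python
# A minimal sketch (not an official QC procedure) of checking a Two-Row
# Barley lot against some of the guideline values tabulated above.
from dataclasses import dataclass

@dataclass
class BarleyLot:
    plump_pct: float           # % of kernels retained on the 2.78 mm screen
    thin_pct: float            # % of thin kernels (2.38 mm screen)
    germination_pct: float     # % germination after 72 hours
    protein_pct: float         # total protein, % of weight
    skinned_broken_pct: float  # % skinned and broken kernels
    moisture_pct: float        # % moisture at intake

def two_row_failures(lot: BarleyLot) -> list[str]:
    """Return the guideline checks this lot fails (empty list = within spec)."""
    checks = {
        "plump kernels > 80%": lot.plump_pct > 80.0,
        "thin kernels < 5%": lot.thin_pct < 5.0,
        "germination > 96%": lot.germination_pct > 96.0,
        "protein 11.5-13.0%": 11.5 <= lot.protein_pct <= 13.0,
        "skinned/broken kernels < 5%": lot.skinned_broken_pct < 5.0,
        "moisture <= 14% for safe storage": lot.moisture_pct <= 14.0,
    }
    return [name for name, passed in checks.items() if not passed]

# Example: a slightly damp but otherwise acceptable lot.
lot = BarleyLot(plump_pct=84, thin_pct=3, germination_pct=97,
                protein_pct=12.2, skinned_broken_pct=2, moisture_pct=14.5)
print(two_row_failures(lot))  # ['moisture <= 14% for safe storage']
```

A lot that fails only the moisture check, as in the example, would not necessarily be rejected: as discussed below, it could still be air-dried before storage or Malted quickly.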
Can I grow Barley for Malting?

(Adapted from an Article in Farmers Weekly – 20 January 2017)

Research programmes since 1991 have identified Barley cultivars that ensure an economical, optimal yield and grain conforming to SAB Maltings' quality specifications. This is an overview of the research. The article assumes an understanding of how to grow wheat, as the soil preparation is similar. Note, however, that an exceptionally fine and even seedbed is required; if you do not provide this, the crop will develop unevenly, leading to uneven ripening and quality.

Cocktail Barley and Puma Barley are currently the recommended cultivars for commercial production under irrigation. The seed is treated with an insecticide and fungicide, the latter to prevent powdery mildew during the 10-week developmental stage of the seedlings, as well as covered and loose smut.

Choosing a Cultivar

The most important factors that determine cultivar choice are:

- Growth period: the average number of days from emergence to maturity. To obtain this result, the cultivar must be adapted to the climatic conditions of the area. The growth period for Puma is medium fast in the right region, and medium for Cocktail.
- Straw strength: the ability of a cultivar to remain standing (no lodging) under extreme conditions, which is largely determined by straw length and thickness. Lodging (bending) often results in considerable yield and grain quality loss. It is usually a problem where critical yield potential conditions have been exceeded, but poor irrigation practices and excessive nitrogen fertilization or seeding density can also play a role. Puma and Cocktail have a medium straw strength.
- Peduncle strength: the strength of the straw between the flag leaf and the head/ear. It determines the cultivar's susceptibility to wind damage. Puma and Cocktail have a medium peduncle strength.
- Plumpness: determines the grade of the grain. A soil water deficit and heat stress during the grain-filling period can cause considerable losses. Puma has a medium plumpness value and Cocktail a medium-low value.

The planting equipment used for wheat is suitable for Barley. Do not plant Barley seed too deep, however, as this can affect seedling emergence. Optimal planting dates differ between the irrigation areas in the Free State, and may vary in certain micro-climates in those areas. Information for other provinces in South Africa was not available.

Planting density can range from 65 kg/ha to 100 kg/ha depending on the state of the seedbed, planting date, irrigation method and planter used. The average recommended planting density is 80 kg/ha, given a 100% germination capacity and a 1,000 Kernel weight of approximately 40 g. Aim to establish 130 to 140 plants/m² at harvesting. Between 65 kg/ha and 80 kg/ha seed ought to be sufficient under centre pivot conditions with optimal seedbed preparation. Planting density can be increased if flood irrigation is available. The 1,000 Kernel weight and the germination capacity of the seed can vary from year to year, so adjust seeding density accordingly (a worked calculation follows below).

For more information about SAB Maltings requirements, phone the South African Barley Breeding Institute at 028 212 2943, or visit www.sabbi.org. You can also download additional information in the form of these two PDF Documents. PLEASE NOTE: The information in the latter document relating to Barley only starts on page 144, and is Free State Province specific, but most of the information is generally applicable, except the ideal sowing dates.
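To make that seeding-density adjustment concrete, here is a small sketch of the arithmetic. The field-establishment factor is an assumption we introduce to reconcile the quoted 80 kg/ha (at 100% germination and a 40 g thousand-kernel weight) with the target stand of 130 to 140 plants/m²; the article itself does not state one.

```python
# A rough sketch of the seeding-rate arithmetic described above. The
# field_establishment_pct parameter is our own assumption: the quoted
# figures imply that roughly two thirds of viable seeds survive to a
# counted plant at harvesting.
def seeding_rate_kg_per_ha(target_plants_m2: float,
                           thousand_kernel_weight_g: float,
                           germination_pct: float,
                           field_establishment_pct: float) -> float:
    # Seeds that must be sown per square metre to reach the target stand.
    seeds_per_m2 = target_plants_m2 / (
        (germination_pct / 100.0) * (field_establishment_pct / 100.0)
    )
    # seeds/m2 x 10,000 m2/ha x (TKW grams / 1,000 kernels) / 1,000 g/kg
    return seeds_per_m2 * 10_000 * thousand_kernel_weight_g / 1_000_000

# Reproduces the article's 80 kg/ha figure at 135 plants/m2 if about
# 67.5% of viable seeds establish:
print(seeding_rate_kg_per_ha(135, 40, 100, 67.5))  # 80.0
```

Dropping the germination figure to, say, 95% pushes the required rate up to about 84 kg/ha; this is the kind of year-to-year adjustment the article has in mind.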
What factors determine the quality of Barley for Malting?

First and foremost, the Maltster looks at the Variety and Purity of the Barley received. The next criteria are:

- Germination (generally 95% or more)
- Plumpness (normally expressed as the percentage of Kernels retained on a 2.78 mm perforation)
- Thinness (normally expressed as the percentage of Kernels retained on a 2.38 mm perforation)
- Brightness (the brighter the better)
- Staining (the less stained the better)
- Mycotoxin Content
- Heat or Frost Damage to the Embryo (assessed by Pearling – the removal of the exterior Husk)
- Protein Content
- Moisture Content
- Percentage of Skinned Kernels
- Dockage (the dust and chaff content – normally a percentage of weight)
- Bushel Weight
- Presence of Foreign Kernels
- Insect Damage
- Presence of other Foreign Matter

The specifications for these criteria change, depending on the variety, crop year, Barley availability and concerns about a specific crop (or supplier). In addition, new analyses are continually being developed to further assist the Maltster in determining Barley Quality. Rapid Viscosity and Falling Number Analyses can be run to assess the Alpha-Amylase content, indicating whether sprout damage may have occurred. Tetrazolium staining of longitudinally split grains can be used to give a Visual Measurement of Embryo Viability – viable tissue takes on a Red Stain, whereas non-viable tissue remains colorless.

As with all Malt specifications, the Barley specifications are closely tied to the Distiller's (or Brewer's) needs. The single most important predictor of final Malt quality is the initial Barley quality. It is nearly impossible to make uniform, well-modified Malt from poor-quality Barley. But, having said that, should a Maltster be aware of the subtle variations in the Barley, he (or she) can compensate by making slight adjustments to the Malting process. Knowledge of Barley quality traits in each Barley delivery is just as important for the Maltster as it is for the buyer of any produce or raw material.

What factors influence the quality of Barley?

Many factors influence Barley quality. The most important are the Varietal, Climatic conditions, Soil conditions and Storage practices. A cool, moist growing season will favor a plumper crop with lower protein content. A hot, dry growing season will produce thinner, higher-protein Barley, which will produce Malt with lower levels of extracts.

The timing of planting can also have a significant effect. Planting early in the growing season may allow that crop to mature prior to the hot, dry portion of the summer and may avoid late-season frost damage. In any Barley growing area, the quality of the crop may vary significantly, depending on when it was planted. Irrigation is increasingly being used by Barley growers, helping to produce more uniform, higher-quality crops.

Crop rotation can also affect Barley quality. Barley grown in a field that was heavily fertilized for a different crop during the previous growing season may develop unacceptably high levels of protein. Barley grown in fields that produced corn in the previous growing season may be subject to disease caused by Micro-Organisms that can overwinter on corn residue.

Harvesting and storage conditions can also materially affect Barley quality. If there is excessive rain during the harvesting period, Barley may be stained and sprout-damaged. Improper storage of grain can also reduce Barley Quality. If Barley is harvested with a moisture content greater than 14%, storage stability will be a concern.
The Germinative capacity may decrease more rapidly in high-moisture Grain than in drier grain, and high-moisture Grain is more susceptible to insect infestation and heat damage from Insect activity. Proper air-drying of high-moisture Barley and effective Insect Control during storage are some of the critical success factors in maintaining Barley quality.

Why must Barley for Malting have the capacity to Germinate?

Barley must Germinate in order to produce Malt. Barley kernels that do not Germinate usually have either dead or injured embryos. Barley with at least 95% live embryos should be used for Malting. During Germination the Kernel content is modified through the action of enzymes that are released and activated in the Barley. These enzymes are also utilized later in the conversion of Starch and Proteins into more soluble fractions.

Why should the Barley for Malting have a Moisture Content of less than 14%?

Malting Barley must be able to withstand storage for many months. Barley with a high moisture content is apt to heat in storage and lose Germinative capacity as a result. For this reason, it is desirable that Barley intended for Malting purposes have a moisture content of not more than 14%. Barley with a higher moisture content must be either air-dried before storage, Malted quickly, or rejected out of hand.

What is the effect of Kernel size in Malting?

To the Maltster, Kernel size is an important factor to consider in determining processing parameters for the Malting Process, primarily because the water uptake rate of Barley Kernels is inversely proportional to their size: the smaller the Kernel, the faster it absorbs water during Steeping, while the larger the Kernel, the slower it absorbs water during Steeping.

For the Distiller, size impacts on other factors. Larger Kernel size is generally associated with more extract (for Kernels of the same Variety), and Kernels of uniform size can be milled more efficiently than Kernels of variable size – the latter directly impacts on the mill settings in the Distillery. Sieves are normally used to sort Kernels by size. Distillique's Specifications List for Sieves should clear up some of the confusion in terms of the different metrics used to describe Sieves.

What color and odor should Malting Barley have?

Mature Barley should be uniformly light to dark golden yellow. However, color is not the most important factor in determining Barley quality. It is merely another criterion for determining suitability for Malting. Malting Barley should have the odor of clean grain, and be free of objectionable odors, such as those produced by heated, musty or moldy grain.

Is stained or moldy Barley unsuitable for Malting?

Stained or moldy Barley may not store or Germinate properly, and these unwanted characteristics may carry over in the taste and smell of the final product. The key factor responsible for staining is the weather conditions during the growing season. Moist, wet growing conditions, while favorable for yield and Kernel plumpness, promote staining and the growth of mold in Barley. The incidence of staining and mold may be different in each crop year and agricultural area. Stained and moldy Barley must be carefully examined, but if its Mycotoxin content, moisture and germination potential are acceptable, it can be used to produce good Malt.

What is undesirable about Barley not fully ripened in the field?

Immature Barley Kernels are greenish white and are usually long and thin.
They differ greatly in composition from fully matured, plump Kernels and may cause difficulties in Malting.

How does Barley protein content influence Malt quality?

The Protein content of Malting Barley is directly related to the total protein, soluble protein and free amino nitrogen content of the Malt, and inversely related to the extract content of the Malt. In general, the higher the Protein Level of a Variety, the higher the overall enzyme package it will produce during Malting. This can result in more highly modified Malt. The friability of Malt is also inversely related to the protein content of the Barley.

The Barley protein level is factored into the Maltster's processing decisions. Water uptake during Steeping occurs more slowly in high-protein Barley. The Germination and Kilning parameters may be adjusted to compensate for the higher enzyme levels and higher color potential associated with high-protein Barley. The latter impacts solely on Malted Barley for Brewing and not for Distilling. High-protein Barley does not produce bad Malt – it merely produces Malt with an analysis and flavor signature different from that of lower-protein Malt. The Maltster's Malting Practices can compensate for these differences but cannot eliminate them.

What is the importance of the Barley Husk?

The Barley Husk serves to protect the Kernel during Malting. The Husk is also essential in the formation of a Filter Bed in the Mash Tun during Lautering. Husks contribute very little to the overall Malt Analysis, but they do affect the flavor profile of the fermentation, and in the case of Beer, they can also affect Beer stability.

How soon should Barley be Malted, and how is it stored?

Barley harvesting occurs over a period of one to two months. This supply of Barley must then last for at least 12 months. Freshly harvested Barley will usually not Germinate at its optimum, so it is common to store Barley for 1 to 3 months prior to full-scale Malting. During this initial post-harvest aging period, Maltsters will conduct Laboratory, Pilot-scale and limited Full-scale trials to generate as much information as possible on what to expect from the new crop.

Storage bins or Silos are generally constructed of either concrete or steel. Steel bins have a bigger footprint and usually more conveyance than the more compact slip-formed concrete silos. Steel bins are also usually much less expensive to construct than concrete silos. Both storage systems work well for both Malt and Barley. The bins or silos are filled from the top by way of spouts. Grain is conveyed mechanically to a spout and drops into the bin under the force of gravity. Most bins have conical bottoms to facilitate easy cleaning. The bin or silo discharges from this conical bottom into a conveyor that leads to an elevator leg. This system can be used to turn or transfer grain for aeration and conditioning. Screw conveyors and drag conveyors are commonly used to transfer Barley. Belts and drag conveyors are preferred for Malt, because of its high friability.

Why should each batch of Malting Barley be from the same Variety?

Varieties differ in both Physical and Chemical composition, as well as in Biological activity. One of the most important tools of the Maltster is detailed knowledge and understanding of each batch of Barley that he or she needs to Malt. This knowledge allows the Maltster to adjust the Malting Process Parameters to ensure that each step in the Process will help to create the best possible Malt.
This brings us to the end of the first of our three articles on Barley, Malting and Malt. The journey continues in the next articles – the Barley Malting Process, and the Attributes of Malted Barley.
FASCISM: SOME COMMON MISCONCEPTIONS

by Noel Ignatin

Urgent Tasks No. 4 / Fascism in the U.S.?

A specter is haunting the U.S. left: the specter of fascism. Where is the measure taken by the party in power that is not branded as fascist? Welfare cutbacks, legislation to abolish compulsory union membership, the passage of a bill curtailing the legal right of dissidents to organize, efforts to ferret out and suppress those responsible for the bombing of public buildings in the center of large cities, the establishment of a professional army, moves to coordinate autonomous local police departments – all these measures and others which represent the ordinary functioning of government in a society dominated by bourgeois social relations are described as "fascist," or at the very least as steps toward fascism, by many left-wing organizations.

It is a curious fact that the willingness on the part of many leftists to throw around the "fascist" label is not shared by some of the groups in other countries where there is a lot more justification than here for use of the term. For example, the Movement of the Revolutionary Left (MIR) in Chile has stated:

"Properly speaking, what has been installed in Chile is not a fascist state, but rather a military or gorilla dictatorship with fascistic aspects. . . . It is not a fascist regime in the exact sense of the word for a variety of reasons. Its base of support does not come from a permanently mobilized mass movement. It does not have . . . the support of a crucial social bloc. ... It does not have a fascist party through which the dominant bourgeois sector articulates and centralizes its leadership of the process. The political police do not serve as the most powerful branch of the repressive apparatus. The Chilean military dictatorship . . . is far from having the strength, vitality or potential of the fascist states of past decades."

This clear statement, from one of the groups most widely and highly esteemed by the U.S. left, has had no deterrent effect in this country. There can be no serious objection if all that is involved is the use of a word – "fascism" – which is not meant to be taken scientifically but is simply intended to call forth a strong reaction from those hearing it. The fear is that more is involved. The indiscriminate use of a term which is meant to apply to a specific form of rule that arises in definite circumstances can and does obscure the reality of modern society and the forms of social motion which appear within it, including the emergence of a revolutionary social bloc.

Current left thinking on fascism is shaped by lines that were worked out in the Third International (Comintern) following the death of Lenin, and especially in the early and middle nineteen thirties. The influence of that period has been transmitted to the present generation by means of three books: Fascism and Social Revolution by R.
Palme Dutt, first published in June 1934, reprinted in several editions through the next two years, long out of print and now reprinted by Vanguard Press, the publishing house of the Communist Labor Party; Lectures on Fascism by Palmiro Togliatti, first delivered in Moscow in 1935 and now gathered and published by International, the Communist Party publishing house; and The United Front, consisting of the main report and closing remarks by Georgi Dimitrov to the Seventh Congress of the Comintern in August 1935, together with various speeches and articles written by him over the next two years, first published in 1938 and since reprinted by both the CP and the CLP.

Of the three, Dimitrov's has had by far the greatest impact. It has never really been out of print, was a major influence on the thinking of the Black Panther Party at the time of the United Front Against Fascism Conference in 1969, and has been read by the largest number of people. It is also the least valuable of the three books. Like most reports to Party and Comintern congresses during that period, it is lacking in any explanation of the considerations that led to the adoption of the current line and is limited to setting forth the official policy in a way that ensures its diligent implementation by Party members – who are likely to do better when not encumbered by the realization that the official policy was selected from several conceivable alternatives.[*]

Both the Dutt and the Togliatti books were written during that brief moment in 1934-35 when the Comintern line was in transit from "ultra-left" to right opportunist. Consequently, in accordance with the well-known principle that even a stopped clock is right twice a day, they come nearest of all the official Comintern pronouncements to an appreciation of the true origins and nature of fascism. Thus, they manage to avoid the sectarian exaggerations of the "third period"[**] without falling into the rightist deviations of the "popular front" period, during which the independent interests of the proletariat were totally liquidated within the alliance of all "democratic forces." The Dutt and Togliatti[***] books are not without serious flaws, however, and we shall mention a few in the course of this essay.

But the first point that cries out for recognition is the irony contained in their current popularity. Whatever else Comintern policy in relation to fascism was, it was not a success. From 1921 up to the eve of World War II, to the rhythm of accelerating drum beats, the working class of one country after another witnessed its trade unions, established parties and cooperative societies fall before the advance of the fascists and their allies. The communists were not spared the general fate of the class; as Claudin puts it:

"During the gloomy spring of 1939, after Franco's entry into Madrid and Hitler's into Prague, the only substantial section of the Comintern that remained on its feet in Europe was the French party. Apart from this, only the small Communist parties of Scandinavia, Britain, Belgium, Holland and Switzerland, whose political impact was almost nil, remained legal. All the other European sections had been reduced to clandestine existence after suffering heavy defeats. Soon after this the French party was to undergo the same fate: and the Second World War would begin. . . .
Thus, the Comintern had failed in the main aim it set itself at the outset of its existence – to wrest the working class from reformism and organize it politically and trade-union-wise on revolutionary principles."

It is undeniably the case that the fortunes of the Communist parties picked up with the outbreak of the War. But by that time, the Dutt, Togliatti and Dimitrov books were gathering dust on the back shelves; and one bit of evidence to show how useless they were as a guide to the future can be seen in the fact that in those areas of Europe where fascism held sway and where the Soviet Army did not pass, the outcome of the War was neither of the alternatives envisioned in the title of Dutt's work.

The Dutt, Togliatti and Dimitrov books represent, in a certain sense, an official blueprint of failure. Yet, a generation later, they are rediscovered and, what is more, enjoy a certain vogue. It is as if a doctor were to gain increased popularity owing to the fact that every one of his patients is known to have died directly following his treatment, or at the very least wound up as a quadriplegic!

All three books answer the question What is fascism? by citing the famous definition put forward by the Thirteenth Plenum of the Executive Committee of the Comintern (1933): "Fascism is the open terrorist dictatorship of the most reactionary, most chauvinistic and most imperialist elements of finance capital." Since this is undoubtedly the most familiar definition, and can often be quoted verbatim by leftists who could not, if asked, furnish the name under which Adolf Schicklgruber achieved world renown, it seems a good idea to check any conclusions reached against that definition. Therefore, we shall return to it later on.

This essay will attempt to consider, separately as much as possible, four topics relating to fascism. The first is: under what conditions does it arise? All students agree that fascism makes its appearance at a time of crisis, a period in which the traditional methods of resolving social conflicts are no longer acceptable to any of the parties involved. The problem in analysis comes when the question is posed: at what stage of the crisis does fascism become a real possibility?

Dutt writes that fascism appears at that stage when the breakdown of the old capitalist institutions and the advance of the working-class movement has reached a point at which the working class should advance to the seizure of power, but when the working class is held in by reformist leadership. According to this view, fascism is "a species of preventive counter-revolution." This was the standard Comintern line. Thus, Dimitrov sees the drive toward fascism as a "striving to forestall the growth of the forces of revolution. . . ." Both Dutt and Dimitrov regard fascism as a defensive response on the part of the bourgeoisie; even when they speak of the fascist "offensive" it is clear that they view it as a counter-attack against the growing wave of the revolutionary offensive.

This is not so obvious as it seems. In his book, Fascism and Dictatorship, Nicos Poulantzas writes:

"The beginning of the rise of fascism presupposes a significant series of working-class defeats. These defeats immediately precede fascism, and open the way to it. . . . The meaning of this 'defeat' should be clarified. It was not 'the defeat' inflicted in a single day, but a series of defeats in a process marked by various steps and turns."
The period of "relative stabilization" which followed the post-World War I revolutionary crisis in Europe is described by Poulantzas as a "significant weakening of the working class in the relation of forces" which, however, left intact most of the working class' economic gains made during the earlier period when it had the offensive. According to him, fascism was, in part, an attempt by the bourgeoisie to eliminate these gains which no longer corresponded to the real relation of class forces. To Poulantzas, then, Germany in the years 1929-33 is going through not an upsurge in the revolutionary process, but the last dying gasp of the crisis which the working class had failed to utilize properly in 1923. Trotsky's position combines elements of both. Writing in 1930, he agrees with the Comintern that the present situation represents "not . . . the conclusion of a revolutionary crisis, but just . . . its approach." At the same time, he points out that "The German Communist Party did not come on the scene yesterday . . . " and that its record of disasters from 1923 to the present is a factor that weakens the ability of the working class to resist fascism. What difference does it make to the analysis if fascism is seen as rising up as a possibility concomitantly with communism on the eve of the revolutionary wave, or if it is regarded as something like a jackal, stalking and finally bringing down the wounded proletarian lion? The difference is (I admit that this may be stretching too far) in the former case, fascism can be treated purely as the tool of the bourgeoisie, a tool which it wields more or less handily to beat back the workers' movement; in the latter case, fascism must be seen as a social phenomenon to some extent independent of the bourgeoisie, a phenomenon which arises out of the crisis of modern society and develops through the inter-action of a number of distinct causes over-determined, as it were. This brings me to the second topic I wish to take up: what is the relation of fascism to the bourgeoisie? The answer of the Comintern is clear and unmistakable: "Fascism is . . . a weapon of finance-capital . . ." (Dutt); "Fascism is the power of finance capital itself." (Dimitrov); ". . . it is the expression of the most reactionary sectors of the bourgeoisie." (Togliatti). The Comintern writers go to great pains to expose the direct links that finance capital established with the fascists prior to the latter's coming to power; they produce volumes of evidence to show the flow of money from the big bourgeoisie to the treasuries of the fascist organizations. All of this research is entirely irrelevant. The only points in a class analysis of fascism are to what extent do the fascists serve the interests of capital (or any of its sectors) and to what extent is that service merely a by-product of the circumstances under which the fascist regime happens to emerge in a particular time and place. "Totalitarian movements (here the writer is speaking of a phenomenon not exactly equivalent to fascism, but that does not matter for the present purposes) are mass organizations of atomized, isolated individuals." At the beginning of the period there is a revolutionary crisis (Italy 1920, Germany 1918-23) during which the working class shows itself unable to stand at the head of the efforts of the nation to reconstruct itself. At the critical moment it acts indecisively, and thus loses its moral authority over the middle sectors, who had rallied to it when it seemed to offer revolutionary solutions. 
The failure of the proletariat throws the masses, who have been torn from their moorings, into despair. The fascists arrive on the scene and proceed to organize that despair into a powerful force. "The success of totalitarian movements . . . meant the end of two illusions. . . . The first was that the people in its majority had taken an active part in government. . . . The second . . . was that these politically indifferent masses did not matter. . . ." The fascists combine the most violent denunciations of the existing order with a ferocious opposition to the Marxist organizations, accusing the latter of having proven their unfitness to head the nation, as they are guided by narrow self-interest and sectarian principles. Thus they are able to weld the homeless and the rootless among the populace, the people who have lost their sense of identification with any of the contending forces, into a solid force. At first the fascists limit themselves to attacks on the workers' organizations. They break up meetings, burn down headquarters, commit violence against outstanding workers' representatives. At this stage they are tolerated and even encouraged by the bourgeoisie, which sees them as a force to use against the left. As the social crisis deepens, the appeal of the fascists grows. While loudly proclaiming their revolutionary aims, they are in fact protected by the existing state, which lets their members off while jailing the workers who resist them. At a certain point the fascists become bolder in their aims, are no longer satisfied to act as a goon squad for the employers, but begin to have ambitions to rule. They expand their activity, and may even enter into genuine popular struggles, as for example the Berlin transport strike of 1932, which they led jointly with the Communists. The bourgeoisie is confronted with a choice: on the one hand, sectors among the class (particularly heavy industry) want to utilize the fascists to settle accounts with the working class and also to shift the weight of authority among the ruling circles themselves; on the other hand, the fascists are an unknown quantity, a mass movement and, as such, not entirely predictable. The big capitalists ask for, and receive, guarantees from the fascists: the anti-capitalist propaganda is subtly shifted in favor of a campaign against "non-productive" capital; a fascist party chief who seems a bit too serious about the radical program is demoted. The bourgeoisie's mind is set at rest and the contributions flow freely again. All this does not take place without a great deal of agonizing and doubt among the bourgeoisie. However, the process is now getting out of control. The fascists have built a mighty mass movement out of the dregs of society, and, never quite out of mind, there stands the untamed proletariat, still capable of throwing up Soviets and workers' councils should the opportunity present itself. The matter is decided: the fascists carry out their "revolution" and march into power, carrying with them the hopes of the despairing masses and the best wishes of the bourgeoisie. Trotsky makes the shrewd observation that: "The strength of finance capital does not reside in its ability to establish a government of any kind and at any time, according to its wish; it does not possess this faculty. Its strength resides in the fact that every non-proletarian government is forced to serve finance capital." The fascists come into power and now begins an exceedingly complex series of maneuvers and readjustments.
Their aims are directed first toward smashing the workers' organizations. At the same time, they are forced to rein in their own "left wing": those plebeian forces who take at face value the promises of revolution against the "vested interests." There follow several years of twists and turns, wherein the fascist party is purged of those elements that brought it to power (the famous "Night of the Long Knives" in Germany in 1934). At the same time, the fascists flood the state apparatus, displacing the remnants of the old bourgeois parties, and also place their representatives on the boards of directors of the big corporations. While this leads to an expansion of the prerogatives of the fascists relative to the old bourgeoisie, it also brings the former under some semblance of control, and the fascist regime begins to assume the appearance of an ordinary regime of right-wing dictatorship. This is the classical pattern, and so far it does not contradict the notion of fascism as a tool of the bourgeoisie. If matters ended there, the Comintern interpretation would be relatively satisfactory. But matters do not end there. The fascists, while they have been forced by the relation of forces to bow to the wishes of the traditional bourgeoisie, have not lost their character as a "revolutionary" party. They are waiting for the proper opportunity to put their program into practice. The outbreak of war gives them that opportunity. As is the case in every country, war expands the autonomous power of the state. It makes possible the establishment of all sorts of supervisory boards and the like, which once again tilt the balance of forces back toward the fascist party. For Hitler, the outbreak of war was a golden opportunity to implement the Nazi program of the master race, beginning with the physical extermination of the mentally ill and advancing to the "final solution" of the Jewish question. Some of these measures are of no consequence one way or the other to the bourgeoisie. But some of them are definitely counter to its interests. For example, the diversion of trains for the transportation of Jews, at a time when German supply lines were dangerously strained, was not in the rational interests of the bourgeoisie. The execution of Polish and Jewish skilled workers, which was carried out on ideological grounds, did not serve the interests of the Krupps and Farbens, who hoped to use those workers for production. Perhaps the most dramatic illustration of the contradiction between the fascist program and the rational needs of the bourgeoisie was Hitler's plan, in the event of Germany's defeat, to reduce the country to rubble, "to slam the door behind us, so that we shall not be forgotten for centuries." These are not the actions of a class which is motivated by the drive for profits; they are the actions of a party with a vision. It is true that the Nazis were unable to carry out their entire program; toward the end of the War, even such a top-level personality as Himmler began dismantling the death camps (without informing Hitler) as a step toward reestablishing a more normal situation and making possible negotiations with the West. But if the ideological fascists were unable to realize their entire program, so were the ordinary bourgeois unable to tame them entirely: it should not be forgotten that the famous attempt of the generals to assassinate Hitler, which represented the "sane" wishes of the bourgeoisie, failed and led to wider purges of the state and a tighter Nazi grip on policy.
These events cannot be explained by means of the Comintern formula for fascism as the dictatorship of the bourgeoisie. It is necessary to recognize the relative autonomy of the fascist movement in relation to all classes, as an important feature that distinguishes it from other right-wing governments. The observation by the contemporary Hungarian writer, Mihaly Vajda, is more accurate than the traditional Comintern view in describing the relations of fascism and the capitalist class. Vajda writes that on the one hand fascism can only be accounted for if it is treated as a phenomenon of capitalist society, but that on the other hand it cannot be regarded as a movement which is actually launched by the ruling class, and that moreover it openly contradicts the interests of the ruling class in certain cases. The third point I wish to consider is the "chauvinism" of the fascists. Chauvinism is generally regarded as the extreme nationalism of an oppressor country. A careful study shows that fascism, in its German variety at least, went far beyond anything that had previously been recognized as nationalism. The aim of the Nazis was not the establishment of German supremacy, although they occasionally referred, for mass consumption, to that goal. The aim of the fascists was the establishment of the master race, which they insisted was just beginning to make its appearance, and which would be drawn from the "Aryan" elements of all the peoples of northern Europe. They repeated often that, for them, the conquest of the German state was simply a stage on the path to the reconstitution of Europe: that fascism was a movement, not a state. As Hannah Arendt points out, they treated Germany itself as a conquered nation, the first of all the nations of Europe to receive the benefits of their racial purification policies. It is no exaggeration at all to observe that fascism, far from being motivated by nationalist considerations, in fact tended toward internationalism (not of the proletarian type, to be sure). Likewise with the label "imperialistic" that the Comintern used as part of its definition of fascism. The First World War was an imperialist war. As has been noted by a variety of observers, including W.E.B. DuBois and Lenin, it was a war for colonies, a war to conquer territories (or defend already-conquered ones) to which the conquering power would profitably export capital. The aim of fascism (particularly the German variant) in the Second World War was not the export of capital but instead the annexation of entire territories with their population and natural resources; in other words, the centralization of capital, the very opposite of export. Hitler's rule over Europe did not lead to the expansion of capital in the occupied areas, as would have been the case if capital were being exported to them, but to its reduction, as entire industries were dismantled and carted back to Germany and those that remained were reorganized to serve the needs, not of profit, but of the war. If this was imperialism, it was a new stage and deserved to be recognized as such, something which the Comintern definition does not do. Lastly, with regard to the term "reactionary." That is a fairly fluid term, and it may seem unduly harsh to challenge a term so devoid of specific content. Nevertheless, it is part of the Comintern definition of fascism and should not be allowed to pass without scrutiny.
If it means anything, the term "reactionary" applies to those who would go back, who would revert to more primitive social and technological conditions. It is precisely the unique character of fascism that it combined the crudest, the most oppressive, the most ahistorical conceptions of the human personality with the most modern methods of mass production and social engineering. The restructuring of the army, the mobilization of all the resources of Germany and the conquered territories, the adoption of the techniques of the Blitzkrieg, the coordination of military efforts with the pro-Nazi movements in every country: these things shattered the traditional ideas of how things were done. They were supported by that sector of the bourgeoisie which was the most advanced, and were resisted by that sector which was the most reactionary: the traditionalists, the old officer corps, the Prussian nobility. In his report to the Seventh World Congress, Dimitrov announced that, "The accession to power of fascism is not an ordinary succession of one bourgeois government by another, but a substitution of one state form of class domination of the bourgeoisie (bourgeois democracy) by another form (open terrorist dictatorship)."[*] The fourth topic I wish to take up is: what is the character of this "open terrorist dictatorship?"[*] There can be no denying the terrorist character of the fascist regime: terror on a scale previously unknown. But it is not merely the scale of terror that distinguishes fascism from other forms of dictatorship (autocracy, military rule, etc.), even when we allow that the expansion of terror has given it a "qualitatively" new aspect. Previous regimes aimed at the suppression of conscious opponents. Fascism, after the first few years of breaking up the opposition parties, moves toward the establishment of the totalitarian state. The characteristic of the totalitarian state is not merely suppression of the opposition, but total domination of the lives of the subjects. This is brought about in part through the use of terror. Even this terror has a special character: it is no longer directed at individuals and organizations that have placed themselves in opposition to the regime, but is directed at large groups of the population that have given no particular reason to doubt their loyalty: Jews, Poles, Gypsies, the mentally ill, those with congenital defects, etc. The concentration camps were filled with people who were absolutely "innocent" in every sense except that they had the misfortune to fall into one of the targeted groups. The second feature of the totalitarian state is that it not merely suppresses the defense organizations of the proletariat; after having smashed up the proletarian organizations and having reduced the population to a grouping of atomized individuals with no ties of group interests, it then proceeds to reorganize these fragmented beings into mass organizations that reach into every sphere of life: the workplace, the school, the community. It is not enough that opposition should be suppressed; the masses must be brought to cooperate with the new regime, to participate actively in its mass rallies, sport societies, re-education sessions. No form of autonomous activity can be permitted; art, music, sport and even chess are of value only to the extent they are "weapons." It is well known that the slogan that motivated the Communist Party in Germany right up to and beyond the coming to power of the Nazis was "After Hitler, Our Turn!"
They consistently underestimated the possibility of a fascist victory (a mistake for which they later criticized themselves) but also, even after the victory, underestimated the seriousness of the defeat this entailed. As late as 1935, in his remarks at the Seventh World Congress, Dimitrov was still whistling past the graveyard about how "the Communist Party even in conditions of illegality continues to make progress, becomes steeled and tempered. . . ." Of all the major figures in the left-wing movement of the time, only Trotsky, to my knowledge, had any appreciation of what the victory of fascism would mean to the working class. In words which all those who snarl when they hear the name "Trotsky" should be forced to read, he wrote, in 1931, before the victory of the Nazis: "The coming to power of the National Socialists would mean first of all the extermination of the flower of the German proletariat, the destruction of its organizations, the eradication of its belief in itself and in its future. Considering the far greater maturity and acuteness of the social contradictions in Germany, the hellish work of Italian fascism would probably appear as a pale and almost humane experiment in comparison with the work of the German National Socialists. Retreat, you say, you who were yesterday the prophets of the 'third period.' Leaders and institutions can retreat. Individual persons can hide. But the working class will have no place to retreat to in the face of fascism, and no place to hide. If one were to admit the monstrous and improbable, that the party will actually evade the struggle and thus deliver the proletariat to the mercy of its mortal enemy, this would signify only one thing: the gruesome battles would unfold not before the seizure of power by the fascists but after it, that is, under conditions ten times more favorable for fascism than those of today. The struggle against a fascist regime by a proletariat betrayed by its own leadership, taken by surprise, disoriented, despairing, would be transformed into a series of frightful, bloody, and futile convulsions. Ten proletarian insurrections, ten defeats, one on top of the other, could not debilitate and enfeeble the German working class as much as a retreat before fascism would weaken it at the very moment when the decision is still impending on the question of who is to become master in the German household." To what extent did the fascist regime, even in its most completely realized form, Nazism, succeed in subordinating all strata of society to its total domination? There is abundant evidence dealing with this question in relation to the big bourgeoisie, and there the answer seems to be: not very much. As Guerin put it, "The fascist regime . . . never domesticated the bourgeoisie." It must be remembered, as an explanation of the fascist failure in this regard, that the German bourgeoisie, even though it was undergoing a crisis, was by no means a weak social formation. It is not inconceivable that, in other circumstances, where the bourgeoisie is mortally wounded, the fascist mob could succeed in bringing it under its domination or even eliminating it totally as a class distinct from the heads of the state and the fascist movement. Suppose, for a moment, a situation where the bourgeoisie is exhausted, divided, no longer able to command the respect of the population, but where the working class is not sufficiently conscious and organized to rule as a class.
Could a mob inflamed by radical slogans without class content come to power and proceed to expropriate the bourgeoisie while retaining the essential feature of bourgeois social relations, namely the domination of the living laborer by previously accumulated, congealed, dead labor? Perhaps "fascist" would not be the best term to apply to such a regime, but would it not exhibit many of the features of the fascist state? How would such a regime stay in power? Most likely, it would combine violent denunciations of the old system of private property, resting on the masses' bitter memories of private exploitation, with constant appeals for vigilance lest the old way be restored. It would strengthen the state apparatus, and scornfully dismiss appeals for free speech and press as opening the door for the class enemy to return. Lastly, it would mobilize the population by means of a constant and deafening clamor of propaganda, officially approved mass organizations in every sphere of life, public rallies and demonstrations, supervised collective study and character re-molding, perhaps through some device like the Catholic confessional or ritual group discussions of individual errors. (I beg to remind the reader that all this is pure speculation, since no such regime ever has existed or could exist anywhere in the world.) Of course, for us, the more important question is the success of fascism in liquidating the working class. (Recall the words of Mussolini: the working class, when it is not organized, is not a class but a mob.) The evidence here is sparse. It is obvious that Italian fascism never brought about the total atomization of the proletariat. The situation regarding Germany is not so clear. Several things indicate, however, that the fascist success was not as great as has been alleged. In the first place, there is the large number of German workers who found themselves in the camps. Based on what I said earlier, that the Nazi regime attacked the "innocent" as well as the "guilty," this cannot be offered as conclusive evidence. Second, the rapidity with which the German people set up autonomous institutions to regulate the distribution of Allied relief food in the West immediately following the War provides some evidence that the germs of proletarian aspirations had not entirely been stamped out. It may very well be that the very speed of the occupation, especially in the east, where the Soviets moved immediately to establish their control over the police, functioned to prevent the emergence of more visible proof that the German proletariat had, indeed, survived the scourge of Nazism. To return to the official Comintern definition: I think I have demonstrated that every element in the definition is either mistaken, inadequate or subject to serious questioning. It should be laid to rest.
* * *
*The Dimitrov book, and the Seventh Congress generally, are associated with the notion of the "Popular Front," which was originally set out as a new "tactical orientation" but which very quickly became the keystone of CP strategy. This is not the place for a consideration of the methods of combatting fascism, which will be dealt with in a planned future article on revolutionary alliances. I cannot resist pointing out, however, that the Dimitrov book was published only one year before the Nazi-Soviet pact, when the line changed from the united front against fascism to the united front with fascism.
That odd timing has not seemed to hurt the book's popularity.
**It was the so-called third period (1928-34) that contributed the immortal concept "social fascism" as the summary of the true nature of social democracy. The theoretical basis for this idiocy was most clearly articulated by Stalin when he declared that "Social-Democracy is objectively the moderate wing of fascism. . . . They are not antipodes, they are twins." (Works, vol. 6, page 294) This was regarded as somehow more "revolutionary" than the reasonable observation that fascism takes advantage of the reformist illusions fostered by the social democrats. Stalin's formula was endlessly repeated and elaborated, for example by Comintern chief Manuilsky, who declared, "All too obvious mistakes are being made among us: it is said that bourgeois democracy and fascism, social democracy and Hitler's party, are antagonistic." (Report to Eleventh Plenum, 1931) Actually, the line went beyond equating social democracy and fascism: the German CP was insisting up to 1932 that "our political line . . . is to deal the main blow to the SPD (Social-Democrats)." One fruit of this was the formation of a de facto bloc with the Nazis, as in the "Red Referendum" of 1931. (See Poulantzas, fn. p. 160)
***The Togliatti book is of interest for reasons that have nothing to do with the subject under consideration. In The God That Failed Ignazio Silone recounts how he and Togliatti were the only delegates to a 1927 meeting of the Executive Committee of the Comintern who had the temerity to resist Stalin's request that a certain document written by Trotsky be condemned without having been read by any of those present. This sort of "bourgeois individualism" led to Silone's expulsion from the Italian CP in 1931. In these Lectures Togliatti, who was more pliable, quotes something written by "ex-comrade" Silone. Those familiar with the Comintern personnel policy, especially toward communists in exile from fascist countries, will appreciate the significance of Togliatti's departure from the norm.
There is no doubt that Dutt, for instance, was aware of the importance of "missed opportunity" in preparing the way for the advance of fascism. Thus, on page 126 he writes: "First, the revolutionary wave in Italy was broken . . . not by Fascism, but by its own inner weakness. . . . Second, Fascism only came to the front after the proletarian advance was already broken from within . . . harassing and slaughtering an army already in retreat." He never integrated this awareness into a general theory.
Togliatti recounts how the Fascist club responds to a complaint from a woman about her husband beating her by summoning the man to headquarters, warning him and ordering him to put a stop to such treatment. (Togliatti, op. cit., p. 143)
It can be pointed out that internationalism does not have to assume a proletarian character. The Catholic Church is also internationalist. So was the Comintern when it called for the proletarians of all countries to identify their class interests with the state interests of the USSR.
*Perhaps some might observe a difference between this and Manuilsky's remarks of a few years earlier: "The fact that the bourgeoisie will be obliged to repress the workers' movement by fascist methods does not mean that the hierarchy will not govern as before (that is, with the participation or support of the social democracy).
Fascism is not a new governmental method distinct from the system of the dictatorship of the bourgeoisie. Anyone who thinks this is a liberal." (Quoted by Poulantzas, op. cit., page 149)
*Gus Hall, in his Introduction to Togliatti's Lectures, comments that "Fascism . . . especially tries to cover up the fact that it is 'the open dictatorship of the most reactionary section of monopoly capital.'" (page xi) Is he unconscious of the humor involved in "covering up" what is "open"?
1. The MIR and the Tasks of the Resistance, Resistance Courier, Special Edition, Number 1, pages 53-54.
2. F. Claudin: The Communist Movement, Monthly Review Press, 1975, pages 242-243.
3. R. Palme Dutt: Fascism and Social Revolution, International, 1935, page 108.
4. Ibid., page 113 (emphasis in original).
5. G. Dimitrov: The United Front, International, 1938, page 9.
6. N. Poulantzas: Fascism and Dictatorship, New Left Books, London 1974, page 139.
7. L. Trotsky: The Struggle Against Fascism in Germany, Pathfinder, NY 1971, pages 59-62.
8. H. Arendt: The Origins of Totalitarianism, Meridian, NY 1958, page 323. I cannot recommend this book too enthusiastically, especially the third section, "On Totalitarianism."
9. Ibid., page 312.
10. Trotsky, op. cit., page 440.
11. M. Vajda: Fascism as a Mass Movement, Allison & Busby, London 1976, page 8.
12. Dimitrov, op. cit., page 12.
13. Ibid., page 26.
14. Trotsky, op. cit., page 125.
15. D. Guerin: Fascism and Big Business, Pathfinder, NY 1973, page 9.
Chapter 4 — The Precession of the Equinoxes “Actually, the Egyptians do describe the Precession, but in a language usually written off as mythological or religious.” — GIORGIO DE SANTILLANA, Professor of the History of Science, MIT “There are years that ask questions and years that answer.” — ZORA NEALE HURSTON As Earth revolves around the sun, it wobbles slowly on its axis. This wobble moves the axis with cyclical regularity over time through what most premodern cultures referred to as a four-stage cycle. They called these stages by various names — the four corners, the four directions, the four times, the four winds, the four royal stars — and related them to the four ministers, the four angels, the four mothers, the four sons, the four horsemen, the four kings, the four Guardians of Heaven, and so on. This movement of the direction of Earth’s axis advances by one degree approximately every seventy-two years as the planet moves around the sun, and through the twelve periods of time mapped by the zodiac. The movement may be visualized as a great “tree” or “tower” or “mill” above Earth, slowly “spinning”, “churning” or “grinding.” This slow spinning of the sky is called the precession of the equinoxes, and is the central mechanism behind the material in the previous chapters. In this chapter, we’ll examine the workings of this forgotten mechanism, and we’ll look at what it means for us and for the planet we live on. To understand the precession of the equinoxes, we must keep in mind that everything in the universe is in constant motion. We may have the illusion that we’re sitting still, reading the words on this page, that Earth is solid and stable beneath us, and that the stars are securely embroidered on the tapestry of the sky. The truth is, however, that we’re hurtling through the vastness of space on a spinning, wobbling, toppling ball, and every object we see in the sky is likewise whirling at unimaginable speeds. You are at this moment careening through space at speeds greater than that of a fired bullet: approximately 66,000 miles per hour, with no law enforcement officer to write a speeding ticket. For the purposes of this chapter, we’re going to imagine Earth as a gigantic top, spinning counterclockwise at a thousand miles per hour. Now, if a top is perfectly balanced and spun, it will appear motionless: it is vertical on its axis and, until the motion slows, you might imagine it is just a ball on a stick, planted in the ground. Earth, however, isn’t a perfectly spun top. For one thing, it is tilted on its axis at an angle that varies from 22.1 degrees to 24.5 degrees, as we can see in this diagram. The angle of Earth’s spin provides its residents in the north and south with seasons. When the northern part of Earth is angled away from the sun, it is, naturally, cooler. Thus people in North America and Europe have winter for half the year, while those in Argentina and South Africa experience summer. When the angle tips toward the sun, the seasons are reversed. Along the equator, the seasons are much less pronounced. The variation in the angle of Earth’s axis isn’t the only discrepancy in our colossal spinning top, however. There’s another, much more subtle shift, which only becomes evident if extremely precise calculations of time and position are made. As we’ve seen, Earth takes a year to move through the full circle of segments that we’ve labeled the zodiac. It moves through these in a counterclockwise motion.
At the same time, though, the slow wobble of Earth’s axis means that the sky doesn’t return to precisely the same alignment at the turn of the new year. There’s a tiny shift, so minuscule that it requires keen observation using finely calibrated instruments over a period of decades to detect it. In fact, the shift that’s visible along the horizon is about the width of an outstretched pinkie finger (roughly one degree at arm’s length) approximately every seventy-two years. Nevertheless, although the shift is infinitesimal, over centuries and millennia it adds up. If you sat in precisely the same place for a millennium, staring at the horizon, the stars that rose directly in front of you in year one would rise a foot or so to the right of you (in a clockwise motion, as opposed to the counterclockwise spin of the yearly advancement) in year 1,000. And if you sat there long enough, they would eventually come full circle. For example, if the stars that rose in front of us on day one formed the constellation Leo the lion, then as the centuries passed, Leo would stalk off to the right, while Taurus the bull stampeded into focus, followed by Pisces the fish, and then the water bearer: Aquarius. And if you sat there long enough, Leo, with a roar, would return from your left and once again settle in front of your eyes (and in front of the Sphinx’s gaze). How long would you have to sit there to see Leo depart and return? Approximately 26,000 years — the number fluctuates somewhat over time. Our ancient ancestors calculated 25,920 years as the length of this forgotten cycle of time. Each sign of the zodiac takes 2,160 years to pass. This period of time spent in each sign was called by our premodern ancestors an “age” or “aeon”; for example, the Age of Leo (i.e., the 2,160 years during which the sun on the vernal equinox rose against the stellar background of the constellation of Leo). Note that the orientation of the Sphinx, the Giza pyramids and the Nile River on the ground that we looked at in the previous chapter is a nearly precise reflection or “map” of the constellations of Leo and Orion (specifically Orion’s Belt). As above, so below. The brilliant ancient star watchers with their complex understanding of the earth and sky, as described in the previous chapter, noted four moments in each year when the cycles arrived at a turning point, thus providing the recurrent number four that underlies so many of the ancient stories and religions, as we saw at the beginning of the chapter. These moments (or “corners”), marked by the four Royal Stars, were the two solstices and the two equinoxes. Solstices are the days in the year that are the longest and shortest. In the Northern Hemisphere, they fall on December 21 (the winter solstice, and the shortest day of the year) and June 21 (the summer solstice, and the longest day of the year). The equinoxes are the other two corners of the year: the days when night and day are of equal length. In the Northern Hemisphere, the spring equinox occurs on March 21 and the autumn equinox on September 22. In the Southern Hemisphere, of course, the seasons attached to these dates are reversed. At the equinox, the sun will rise in a particular section of the sky. However, as the ancient astronomers noted, this section isn’t static: it shifts, as we’ve seen, by a fraction every year.
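Since these round numbers recur throughout the chapter, it is worth pinning the arithmetic down once. The following sketch is mine, not the ancients’; it simply works out the consequences of the traditional figure of seventy-two years per degree cited above:

```python
# Traditional precessional arithmetic, using the chapter's round numbers.
YEARS_PER_DEGREE = 72        # one degree of equinoctial drift
DEGREES_IN_CIRCLE = 360
ZODIAC_SIGNS = 12

great_year = YEARS_PER_DEGREE * DEGREES_IN_CIRCLE  # 25,920 years
age_length = great_year // ZODIAC_SIGNS            # 2,160 years per zodiacal age

# The yearly drift, expressed in arcseconds (3,600 arcseconds per degree):
annual_drift = 3600 / YEARS_PER_DEGREE             # 50 arcseconds per year

print(f"Great Year: {great_year:,} years")
print(f"One zodiacal age: {age_length:,} years")
print(f"Annual drift: {annual_drift} arcseconds")
```

Run as written, this prints 25,920 years for the full cycle, 2,160 years per age, and a yearly drift of 50 arcseconds, which is the “fraction every year” just mentioned.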
Because our ancestors had divided the sky into twelve 2,160-year “houses” or “mansions,” each governed by an astrological sign, they said that the sun was “housed” or “carried” or had a relationship with each celestial character in turn: Leo, Pisces, Aquarius, and so on. Giorgio de Santillana says, “The sun’s position among the constellations at the vernal [spring] equinox was the pointer that indicated the ‘hours’ of the precessional cycle — very long hours indeed, the equinoctial sun occupying each zodiacal constellation for almost 2200 years” (de Santillana and von Dechend 1992, 59). Where are we in this great cycle? What time is it according to the Clock of Ages? Right now, at the spring, or vernal, equinox, the sun rises between Pisces and Aquarius. We are coming to the end of one of the zodiac mansions. Hancock says, “We live today in the astrological no-man’s land at the end of the ‘Age of Pisces,’ on the threshold of the ‘New Age’ of Aquarius. Traditionally, these times of transition between one age and the next have been regarded as ill-omened” (Hancock 1996, 240). And we are also moving into the end of time as the precessional cycle itself comes to a close again. Let’s step back for a moment and look again at the larger context. We tend to imagine the sun as a relatively tiny cotton ball perched in the sky, but looks are deceiving. Imagine a huge fishbowl, filled with a million marbles. Pluck out one of those marbles and hold it next to the fishbowl. The marble is our Earth. The bowl is the sun. The sun is an unimaginably colossal, volatile, flaming ball, which periodically sends out plumes of fire so vast they could wrap around our Earth a dozen times. Its electromagnetic waves and gravitational field extend 15 trillion miles into space. Its gravitational field exerts a pull that’s magnified when the planets also come into play. Scientists have long observed strange phenomena when the planets are in line. For example, when Jupiter, Saturn, and Mars are “in conjunction” (to use the ancient astrological term), shortwave radio frequencies become garbled. The New York Times mentions a “strange and unexpected correlation between the positions of Jupiter, Saturn and Mars … and violent electrical disturbances in Earth’s upper atmosphere. This would seem to indicate that the planets and the sun share in a cosmic-electrical balance mechanism that extends a billion miles from the center of our solar system” (Kaempffert 1951, 144).
This periodic tension, adjustment, and realignment is the most pronounced at the beginning and ending of each zodiac period, during the transitional “in-between” time when the electromagnetic and gravita¬tional influences of the declining zodiac house overlap with those of the rising house. This in-between period is noted for terrestrial systems chaos. Approximately every 2,160 years (thirty incremental stages of seventy-two years each, and three stages of 720 years), Earth experiences a pronounced jolting, chaotic period as its magnetic axis readjusts itself to new astrophysical effects, in relationship to the four directions and the sun, as it proceeds through the houses of the zodiac. Pressure builds, peaks, and then Earth’s axis realigns. The most significant jolts takes place at the half¬way point of the precessional cycle and at its completion; in other words, at approximately 13,000 and, even more disruptively, at approximately 26,000 years in the precessional cycle. There are also lesser convergent jolting events approximately every 720 years +/- 50 years. These abrupt, earthshaking system adjustments are accompanied by all of the difficult events listed in Chapter 2: globally catastrophic climate instability, volcanic and seismic activity, ecosystem failure, species die-off, and social chaos. In a previous chapter, we saw that approximately 13,000 years ago, Earth experienced a period of profound cataclysmic upheaval. These jolts, occurring throughout the precessional cycle, are the central context of nearly all monumental ‘myths’ (including religious) sent forward in time by our ancient ancestors. Because our ancient ancestors took great care to track cataclysmic events, we have records of the precessional cycle and its effects — the atmospheric and geological changes — as well as their effects on civilization and living beings, reaching back into time. This dynamic cyclical pattern of approximately 26,000 years, now coming to an end once again, was known as the Great Year, Great Crossing, Great Return, Great Circle, Annus Magnus, Platonic Year, Solar Year, Eternal Return, Supreme Year, Wheel of Time, and other names. It was also described more subtly and coded into stories and riddles, symbols, geophysical constructions, and the mathematics and placement of architecture of nearly all advanced ancient and premodern cultures, extending back far into the depths of human civilization for tens of thousands of years. The earliest overt mention of the precession of the equinoxes in ancient literature comes from Hipparchus, the Greek astronomer, at around 150 BCE. There has been a general belief that Hipparchus discovered that “the celestial longitudes were different and that this difference was of a magnitude exceeding that attributable to errors of observation. He therefore proposed precession to account for the size of the difference” (Encyclopedia Britannica 1991, 5:937, 8). Hipparchus calculated the amount of precession to be 45 or 46 degrees of arc, which is quite close to the actual amount of 50.274 degrees modern astronomers have calculated. However, although Hipparchus’s achievement should not be trivialized, we now have evidence that other cultures around the world knew of the precession much earlier. Giving Hipparchus credit for “discovering” the precessional cycle is like giving Christopher Columbus credit for discovering America. 
The Chinese appear to have recognized and understood precession by at least 4500 BCE, and likely much earlier, and the Egyptians were aware of it from their earliest dynasties — from at least 3100 BCE, although many researchers and scholars now think the Egyptians inherited their knowledge of astronomy from a vastly older civilization. The Çatalhöyük complex in present-day Turkey symbolically expressed knowledge of precession as early as 6500 BCE, and so did the Göbekli Tepe complex as early as 9600 BCE. Mayan culture was aware of precession from at least 3114 BCE, and there’s strong evidence that they inherited their huge and precisely accurate body of astronomical knowledge intact from the much older culture known as the Olmec, which, as we saw, may have derived from Ancient Egypt. Babylonian and Assyrian, North American, Northern European, Oceanic, and Southeast Asian ancient cultures likewise had knowledge of this phenomenon from the earliest times, which they expressed symbolically, allegorically, and architecturally. It’s now thought by some paleo-astronomers that the exquisite Lascaux cave paintings in southwestern France that have been dated to approximately 18,000 BCE are records of the zodiac constellations, fixed stars, and solstice points. All of the constellations, with the exception of Aquarius and segments of Pisces, are represented by corresponding animals. It’s believed that the cave itself served as a gnomon (calendric device), with a ray of the sun penetrating the cave on the summer solstice and lighting up the painting of the Red Bull in the Hall of Bulls (it was the constellation Taurus, the bull, that dominated the summer solstice sky). An analysis of 130 caves in the immediate area has shown a common orientation to the sunrise and sunset at summer and winter solstice and spring and autumn equinox (Jègues-Wolkiewiez 2000). It’s highly unlikely that whoever possessed and implemented this range of technical knowledge to track and record the precise movements of the sun would be unaware of the precessional dynamics and effects. A more likely scenario is that they, 18,000 years ago, were very familiar with the precessional cycle by way of even more ancient oral records that had been handed down to them. Many of the ancient texts contain codified numbers, which often don’t seem to further the storyline. There’s nothing supernatural about these numbers; they are the mathematical measurements of time and space. Certain numbers in these records, as we’ve seen, are associated with the turning of Earth and the precession of the equinoxes. These are, according to Hancock, the time necessary to execute a one-degree shift in the position of the sunrise at the equinox; the time it takes for the sun to move through two houses of the zodiac; the time it takes for the sun to pass through one zodiacal segment; and a precessional “year” or “aeon.” The numbers associated with the above items are as follows: 12, 30, 36, 54, 72, 108, 360, 432, 540, 2,160, and 25,920. Whenever we find these numbers or their multiples in ancient texts, or represented in architecture, geophysical constructions, craft, ritual, or symbol, we should be on the alert for precessional symbolism. Let’s take a look at a few examples.
The Tholos of Epidaurus
The Tholos of Epidaurus in Greece is a good example of architecture that represents the approximately 26,000-year precessional cycle. The Egyptologist Jane B. Sellers has shown that the Osiris myth contains many of these precessional numbers.
According to the myth, the year in the time of Osiris was 5 days shorter: it consisted of just 360 days, because the god of wisdom had won the extra 5 days from the goddess of the night sky in a betting game. The 360-day year consisted of 12 months, each 30 days long. Furthermore, the “evil” brother Seth chose 72 conspirators to help him kill Osiris. If we work with these numbers, simple multiplication gives us the following: 72 x 30 = 2,160, which is, we remember, the number of years in each astrological Age: the time the sun spends in a zodiacal house. Also, 360 x 72 = 25,920, which is the approximate period of the “Solar Year,” or the full cycle of the precession of the equinoxes. Intriguingly, the numbers codified within the Osiris story, when extrapolated to the precession, are much more accurate than those of Hipparchus. This suggests not only that the ancient astronomers were aware of precession, but that their instruments were more precise than those of later astronomers — and indeed, were not surpassed until Galileo in the seventeenth century. Recent research at the Borobudur ritual complex in Indonesia, believed to have been constructed in the ninth century CE, has revealed that the main stupa in the world’s largest Buddhist temple serves as a calendric gnomon that utilizes the shadow of the sun’s rays. A team of astronomy professors and students, along with a researcher from the National Aeronautics and Space Institute, published their findings in the proceedings of the Seventh International Conference on Oriental Astronomy in Tokyo, Japan, in September 2010. At the Arupadhatu level, the main stupa is surrounded by seventy-two stupas in a circular overlay that makes up the track at levels seven, eight, and nine, indicating knowledge of precession. We’ll see further along that this isn’t the only indicator of precession that’s encoded into the construction.
Manichaean Text — Third Century
Another good example of ancient precessional symbolism is found in Persian Manichaean writings. Consider this passage: “Now for every sky he made twelve Gates with their Porches high and wide, every one of the Gates opposite its pair, and over every one of the Porches wrestlers in front of it. Then in those Porches in every one of its gates he made six Lintels, and in every one of the Lintels thirty Corners, and twelve Stones in every Corner. Then he erected the Lintels and Corners and Stones with their tops in the height of the heavens: and he connected the air at the bottom of Earth with the skies.” Andrew Collins in From the Ashes of Angels: The Forbidden Legacy of a Fallen Race describes the passage’s meaning: Multiply the 12 Gates with the six Lintels to each Porch and you get 72 — the number of years it takes for Earth to move 1 degree of a precessional cycle. Multiply this number with the 30 corners of each Lintel and you get 2,160 — the number of years in one complete precessional age. Multiply this number with the 12 stones in every Corner and you arrive at 25,920 — the number of years in one complete precessional cycle.
Mayan “Long Count” Calendar
The Mayan “Long Count” calendar uses vast increments of time that allowed the ancient Mesoamericans to reach far back into the past. In the calendar, one Katun is equal to 7,200 days. One Tun is 360 days. Two Tuns are 720 days. Five Baktuns are 720,000 days. And six Tuns are 2,160 days. Note the recurring numbers 72, 360, and 2,160. This calendar indicates that an immense time period of close to 5,000 years came to an end on December 21, 2012.
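The “close to 5,000 years” can be checked directly from the day counts just listed. (A minimal sketch; the thirteen-Baktun length of the era that closed in 2012 is the conventional figure, and is not stated explicitly in the passage above.)

```python
# Long Count units, in days, as given in the text.
TUN = 360                 # one Tun = 360 days
KATUN = 20 * TUN          # 7,200 days
BAKTUN = 20 * KATUN       # 144,000 days, so five Baktuns = 720,000

# The conventional Great Cycle of thirteen Baktuns:
great_cycle_days = 13 * BAKTUN                   # 1,872,000 days
great_cycle_years = great_cycle_days / 365.2425  # mean Gregorian year

print(f"Thirteen Baktuns: {great_cycle_days:,} days "
      f"= about {great_cycle_years:,.0f} years")  # ~5,125 years
```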
Inscriptions centered around the calendar suggest that the shift between one period and another will engender upheaval and a new “kingship.” This coronation of a new king has traditionally been read as referring to a real Mayan ruler. However, it’s likely that the shift in paradigm and rise of a new presiding figure symbolically refer to the transition of our planet into a new zodiacal mansion. Exocarnation towers (now commonly referred to as dakhmas or Towers of Silence) were used in many Indo-Iranian cultures to dispose of the dead, dating back into prehistory. Bodies were placed on the roof of the tower, in slotted sections, exposed to the sun and the elements, and the bones were left to dry and turn to dust. The bodies were fed upon by vultures, which were the ultimate symbol of death and transformation, and a connection between Earth and the celestial realm. The vulture, as a symbol — similar to the Grim Reaper — was associated with the precessional cycle and its effects on life. Its circular flight pattern as it hovers above a food source was a living symbol of the precessional circle at the top of the “tree”, shown in the precessional tower image above (fig. 19), and representative of the loss of life associated with its rhythmical pattern of degeneration and regeneration. The vulture, as symbol, is found in Bronze Age Cretan ruins, in Northern European and Iranian mythology as the griffin, in the Americas as the crow, in Tibet (where exocarnation was a common practice until recently), and around the world as the raven. The vulture is also seen in very early Buddhist mythology: most notably Vulture Peak, another name for Grdhrakuta Mountain, an ancient exocarnation site and unique habitat of vultures. There are still thousands of vultures living in the crevices of the hill rocks. Vultures, as symbolic representations, are seen in ruins as old as the Göbekli Tepe complex, and at Çatalhöyük, where large murals have been found depicting vultures on top of towers feasting on the dead. In the following illustration of the roof of an ancient exocarnation tower located in Iran, if we count the number of segments going around the circle, we find 36 in the outer ring, and in each of the inner rings, segmented for bodies, we find 70 slots, with the path leading to the center (left side, in white) taking up the equivalent of two more slot spaces. (The numbers 70, 71, and 72 were all used by various ancient cultures to represent the same celestial mechanism — the modern measurement is 71.6, but this has fluctuated over time). We also see the four directions represented, as well as the nine realms of heaven, the five seasons of existence, and the cosmic turtle that are all found in ancient cosmologies around the world.
Chinese Secret Society
The Hung League is an ancient order similar to that of the Freemasons and Knights Templar in the West, and is considered to retain codes of an earlier body of thought. Like the Freemasons and the medieval alchemists, the members of the Hung League were forced to conceal and codify their wisdom, to avoid the ire of the ruling authorities. (The Chinese authorities have for millennia endeavored to suppress any organization that might threaten the ruling hegemony.) In the Hung League initiation ceremony, inductees must answer the mysterious question: “Do you know how many plants there were?” with “In one pot were 36 and in the other 72 plants, together 108 … the red bamboo from Canton is rare in the world. In the groves are 36 and 72.
Who in the world knows the meaning of this? When we have set to work we’ll know the secret.”
One Hundred and Eight
Another precessional number that recurs consistently in ancient records is 108. This may seem a rather ungainly figure, until we realize that it is 36 (360 divided by 10) plus 72. Confirmation of this composition is contained in the ritual initiation ceremony for members of the Chinese Hung League described above. The number 108 occurs around the world as well, in the number of stanzas in the Rig Veda (10,800), in the 10,800 bricks in the Indian fire altar, in the Rosicrucian cycles of 108 years, and in the number of beads in Buddhist and Christian rosaries. The five gated avenues at Angkor Wat are each bordered by 108 statues (for a total of 540). Similarly, one of the themes of Homer’s Odyssey is the existence of the Proci, 108 suitors for the hand of Penelope, the missing Odysseus’s wife. And the Book of Enoch has 108 chapters.
Four Hundred Thirty-two
Furthermore, 4 x 108 = 432, a number also found in the records, symbols, and architecture of ancient cultures in Asia, Europe, the Middle East, and South America, and which was specifically related to celestial cycles. Graham Hancock and Robert Bauval in The Message of the Sphinx point out that: “Equally ‘impossible’ — at any rate for a people like the ancient Egyptians who are supposed to have known nothing about the true shape and size of our planet — is the relationship, in a scale of 1:43,200, that exists between the dimensions of the Pyramid and the dimensions of Earth.” Similarly, at Borobudur, the ritual complex in Indonesia that we looked at above, there are 432 Buddha statues at the Rupadhatu level of the construction. Another example, according to the Satapatha Brahmana, is that there are 10,800 stanzas in the Rig Veda, each with 40 syllables, for a total of 432,000 syllables. The number comes up as well in the third century BCE: the Chaldean astronomer Berossos, recording the ancient history of Babylon, wrote that the period between its first “king,” Aloros, and the Babylonian flood was 432,000 years. The number even shows up in a book central to a modern religion. In the book of Genesis in the Bible’s Old Testament, if we count the number of years between the creation of “Adam” and the “Noah” flood, we get 1,656 years. Julius Oppert, a distinguished Assyriologist, presented a paper to the Royal Society, the British fellowship of distinguished scientists and engineers, on the “Dates in Genesis,” showing that 1,656 years = 86,400 weeks of 7 days; 86,400 divided by 2 = 43,200. When Jesuit monks first began proselytizing in China in the late 1500s, they recorded that the Imperial Library contained in its collections an extraordinary set of records, consisting of 4,320 volumes said to be a history of all knowledge. Charles Berlitz, an American linguist, wrote that this record includes a description of the effects brought about when “mankind rebelled against the high gods and the system of the universe fell into disorder” and “the planets altered their courses. The sky sank lower towards the north. The sun, moon and stars changed their motions. The Earth fell to pieces and the waters in its bosom rushed upwards with violence and overflowed the earth” (Berlitz 1989, 126). And, in the dramatic Day of Ragnarök (the Doomsday of the Gods), found in the Grímnismál, one of the poems of the Norse Poetic Edda, the narrator says: Five hundred and forty doors there are to Wal-hall I ween.
Eight hundred of the Chosen shall go out of each door at one time, when they go forth to fight the Beast.

The number "Five hundred and forty" (also found at Angkor Wat) multiplied by "eight hundred" equals 432,000. Tying all these examples together, the cycle of the precession of the equinox, recorded by the ancients as lasting 25,920 years, is also connected to 432 because 432 x 60 = 25,920. Each of the 12 ages of the zodiac extends for a period of 2,160 years (for a total of 25,920 years), which connects 432 to 216, as does the fact that 216 x 2 = 432, 216 ÷ 2 = 108, and 108 x 4 = 432. For a fascinating and in-depth examination of the encoded numerical and symbolic references to the precessional cycle in ancient texts, I refer readers to Hamlet's Mill: An Essay Investigating the Origins of Human Knowledge and Its Transmission Through Myth by Giorgio de Santillana (a professor of the history of science at MIT) and Hertha von Dechend (a scientist at Johann Wolfgang Goethe-Universität).

And now, at last, the grand fugue begins to emerge, with almost eerie, spine-tingling precision. The Egyptian Giza complex, Stonehenge, the Mayan Way of the Dead, Angkor Wat in Cambodia, the Borobudur complex in Indonesia, the exocarnation towers in Iran and India, Greek architecture: all these must be understood as references to a celestial movement so minute that it shifts in yearly increments about the width of a needle held at arm's length, a movement that, nevertheless, periodically wreaks havoc on our planet. The Borobudur complex is a colossal representation of the precession of the equinoxes. The Giza Pyramid complex and the Sphinx, and the Mayan Way of the Dead, are magnificent calendric machines tracking the same phenomenon. Gilgamesh, Krishna, the biblical Adam, Samson, Prometheus, Hercules, Thor, Hamlet, Siddhartha, Jack, Rapunzel: all tell the story of this grand cycle of time that was known in nearly every ancient culture.

The ancient oral traditions, written records, and folk tales from around the world preserve the celestial mechanism and data in story form, passing on information that is critical to the sanity, wellbeing, and survival of the human community. These stories are history, advanced mathematics, and precise science preserved in a multifaceted technical language that ensured this information would survive and be remembered for the greatest amount of time by the greatest number of people. And it worked. We should be grateful for their genius and dedication, employed for our benefit. Our ancient ancestors succeeded in passing this information, accumulated over tens of thousands of years, all the way to the end zone of the current grand cycle, although the message is now nearly completely mythified, mystified, and dismissed.

According to this preserved message, and confirmed by ancient calendars, we are now in the tail end of the final zodiac segment of the circle, bringing the precessional cycle, and a world era, to a close. The serpent of time (Ouroboros, Quetzalcoatl, Wadjet, Aidophedo, Jormungandr) that lurks at the base of the tree of stars is beginning to eat its tail, as we move deeper into the Sixth Extinction. And even though our ancient and premodern ancestors invested enormous thought, time, and energy over many thousands of years to prepare us, we are asleep. We've forgotten what time it is, and we ignore the wisdom of the ancients at our peril.
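For readers who want to check the arithmetic behind these recurring figures, here is a minimal sketch in Python (my own illustrative verification; the values are the conventional rounded figures cited above, not modern ephemeris data):

```python
# Arithmetic check of the recurring "precessional" numbers cited above.
# All values are the conventional rounded figures used in the text.

GREAT_YEAR = 25_920            # traditional length of one precessional cycle, years
ZODIAC_AGE = GREAT_YEAR // 12  # one zodiacal age

assert ZODIAC_AGE == 2_160
assert GREAT_YEAR == 432 * 60
assert 108 == 36 + 72            # the composition preserved in the Hung League ritual
assert 432 == 4 * 108 == 2 * 216
assert 216 // 2 == 108
assert 540 * 800 == 432_000      # Grimnismal: doors of Valhalla x warriors per door
assert 10_800 * 40 == 432_000    # Rig Veda: stanzas x syllables per stanza
assert 86_400 // 2 == 43_200     # Oppert's reading of the Genesis chronology

# One degree of precessional shift takes GREAT_YEAR / 360 = 72 years,
# matching the 70-slots-plus-path count on the tower roof.
assert GREAT_YEAR / 360 == 72.0
print("all identities hold")
```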
When it comes to incorporating sea level rise projections into community planning initiatives, the path for each community, and for each project, is not crystal clear, as there are many factors to consider. Our companion Application Guide for the 2022 Sea Level Rise Technical Report, organized into four sections, helps the reader wade through the considerations and arrive at what's best for their community. The guide is a companion for the 2022 Sea Level Rise Technical Report, which represents the most up-to-date sea level rise information available.

This technical report provides a synthesis of the most recent science related to sea level rise, and serves as a key technical input for the Fifth National Climate Assessment that is underway. The report does not provide guidance or design specifications for a specific project, but is intended to help inform federal agencies, tribes, state and local governments, and stakeholders in coastal communities about current and future sea level rise. The executive summary provides key messages, and the body of the report provides detailed, technical information about the data, modeling, and analysis behind the report's findings. As with the previous 2017 technical report, global mean sea level rise scenarios are regionalized for the U.S. coastline. Data from the report are being incorporated into current and planned agency tools and services, and are immediately available in NOAA's Sea Level Rise Viewer and NASA's Task Force Projection Tool. Readers interested in accessing information for a particular community or region can look to these tools. Frequently asked questions and answers are available below.

Tens of millions of people in the U.S. and hundreds of millions globally live in areas that are at risk of coastal flooding. Sea level rise does not act alone — rising sea levels, along with sinking lands, will combine with other coastal flood factors like storm surge, wave effects, river flows, and heavy rains to significantly increase the exposure of coastal communities, ecosystems, and economies. Sea level rise threatens infrastructure necessary for local jobs, regional industries, and public safety, such as roads, subways, drinking water supplies, power plants, oil and gas wells, and sewage treatment systems. Long-term sea level rise will affect the extent, frequency, and duration of coastal flooding events. High-tide flooding events that occur only a few times a year now may occur once a month, or even once a week, in the coming decades. These same water level changes may also increase coastal erosion and groundwater levels. Elevated groundwater levels can lead to increased rainfall runoff and compromised underground infrastructure, such as public utilities, septic systems, and structural foundations. Higher water levels also mean that water from deadly and destructive storm surges, wave impacts, and heavy rains is less able to drain away from homes and businesses.

The two major causes of global mean sea level rise are the expansion of ocean water as it warms (thermal expansion) and the added water from land-based ice (e.g., mountain glaciers and ice sheets) as it melts.
Both of these processes are driven by increased global temperatures that are associated with greenhouse gas emissions. At a local level, any vertical land motion that may be occurring — from either natural or anthropogenic factors — can cause changes in 'relative sea level.' There is strong evidence in the geologic record that global carbon dioxide levels, temperature levels, and sea levels have changed together through time. Human activities, especially emissions of greenhouse gasses, are the dominant cause of increasing global temperatures since the industrial revolution. When concentrations of greenhouse gasses in the atmosphere go up, temperatures also go up. As Earth warms, the ocean warms too, absorbing more than 90% of the increased atmospheric heat. When water warms, it expands — a process known as thermal expansion. This thermal expansion means that the volume of ocean water increases, which causes sea levels to rise. Rising air and ocean temperatures also cause glaciers and ice sheets to melt, which in turn increases the amount of water in the ocean. Both thermal expansion and melting land-based ice cause sea level to rise. Because the ocean is so large, it retains heat for a long time. This means that even if global emissions and temperatures are reduced, sea levels will continue to rise for the coming decades and centuries.

Global sea level rise is caused by expanding ocean water and melting of land-based ice (see Questions 2 and 3). Regionally, other factors are at play, which means that there are differences in the amount and speed of sea level rise that will be experienced in each region of the United States. Regional or relative sea level rise has three drivers: changes in the ocean's characteristics (sterodynamic change), changes in the land's height (vertical land motion), and changes in land ice and the solid Earth (gravitational, rotational, and deformational changes).

The first driver, sterodynamic sea level change, refers to changes in the ocean's movement (circulation and currents) and its climate (temperature and saltiness). Trade winds and currents can push water higher, or lower, in different regions. Freshwater added from melting ice sheets and glaciers can shift ocean circulation patterns in different regions by changing the saltiness, temperature, and density of the water.

Another reason for differences in regional sea level is vertical land motion. Across the U.S., land is sinking or rising at different rates and times, and this affects how high sea level rises in a region. Vertical land motion can be a result of geologic processes (e.g., the movement of tectonic plates); human activity, such as removing groundwater or fossil fuels from underground, which can cause the land to sink; or naturally occurring sediment compaction and settling over time (e.g., subsidence in the Mississippi River delta).

Changes in land ice and the solid Earth – also called gravitational, rotational, and deformational changes – can also affect regional sea level. When ice sheets and glaciers melt or lose mass, this adds freshwater to the oceans and changes the gravity, deformation, and rotation of the Earth, which contributes to higher sea level rise at locations farther away from the melting ice source than at locations close by. These patterns of sea level rise are known as "fingerprints" and are the reason that ice mass loss from distant Antarctica will impact the U.S. coastline more than ice mass loss from nearby Greenland.
Global mean sea level, or the average height of the ocean surface, has risen 6-8 inches (15-20 centimeters) since 1920. In the continental U.S., relative sea level has risen about 10-12 inches (25-30 centimeters) over the same period. Observational data from tide gauges and satellites also show that sea level rise, both globally and along the continental U.S., is accelerating, with more than a third of that rise having occurred in the past two and a half decades (see NOAA and NASA portals for altimeter-based global rates and NOAA for local tide gauge rates).

Sea level rise scenarios represent possible future sea level changes in response to increasing greenhouse gas emissions and ocean and atmospheric warming. These scenarios allow people to consider future impacts and responses and ask "what if?" questions about the future to support planning and decision-making. Sea level rise scenarios are used to communicate how much sea level rise could occur, under what circumstances, and by when. They also show how sea level rise might occur globally and locally. Sea level rise scenarios are generally based upon climate model outputs. These climate models allow scientists to simulate different responses, such as how the ocean might continue to warm, where ice melts and major ice sheets dynamically respond, and where and how the additional water disperses around the world's ocean and affects circulation patterns. These responses differ under models that use different boundary conditions associated with various amounts of greenhouse gas emissions and ocean and atmospheric warming projections. Thus sea level rise scenarios help us plan in the face of uncertainty by providing a range of possible futures that represent a) potential future human-driven greenhouse gas emissions, and b) how Earth's physical processes will respond to increased temperatures.

Shared socioeconomic pathways describe how society, economics, and demographics may change globally over the next century. Depending on which pathway our global community actually follows, the amount of warming — and hence the amount of sea level rise — could be very different. These shared socioeconomic pathways are used in the Intergovernmental Panel on Climate Change's Sixth Assessment Report, released in 2021. Representative concentration pathways are another frame of reference for evaluating future climate changes and were used in the Intergovernmental Panel on Climate Change's Fifth Assessment Report. Representative concentration pathways represent different amounts of net radiative forcing by the end of the 21st century — in other words, the extra heat trapped in the atmosphere due to future amounts of greenhouse gasses — but the pathway of emissions or social conditions to get there is not specified. In the 2017 sea level rise technical report, scenarios were related to representative concentration pathways. The 2022 report and data employ the underlying methods and output from the Sixth Assessment Report and their dependency on shared socioeconomic pathways, but focus more on how these scenarios relate directly to different amounts of end-of-century surface warming associated with the pathways (see Question 3).
There are two types of uncertainty that are important to consider when thinking about future sea level changes: 1) uncertainty in representing or modeling the physical processes that cause sea level change, known as process uncertainty, and 2) uncertainty in how human behavior will drive future emissions and ensuing warming, known as emissions uncertainty. The suite of projections in this report captures both process uncertainty and emissions uncertainty. Process uncertainty is associated with how well we currently understand why sea level has changed in the past and how it will change in the future at specific times and locations. To capture process uncertainty in sea level rise projections, there is a range of uncertainty around each individual scenario (i.e., the low/17th-percentile, median/50th-percentile, and high/83rd-percentile values for each particular scenario). The farther forward in time we move, the greater the uncertainty around each projection. Emissions uncertainty is captured in the range between the five global mean sea level rise scenarios: Low (1 foot; 0.3 meters), Intermediate Low (1.6 feet; 0.5 meters), Intermediate (3.3 feet; 1.0 meter), Intermediate High (4.9 feet; 1.5 meters), and High (6.6 feet; 2.0 meters). In other words, the range between the five sea level scenarios is closely connected to emissions uncertainty, while the range around a given scenario is associated with process uncertainty.

In addition to process and emissions uncertainty, there is still scientific discussion and investigation underway on the potential for rapid ice sheet melt and collapse, sometimes referred to as low-confidence processes. Currently there is no scientific consensus on whether rapid melt will occur and, if it does, what that process will look like. Given that it is possible, those processes are included in international and federal assessments. The possibility of rapid ice sheet melt is a significant driver in reaching the highest scenarios in the 2022 technical report. We know that throughout time, global sea levels have changed by tens to hundreds of meters over approximately 100,000-year cycles, and that in arriving at stages of sea level equilibrium (the dips and humps of global sea levels through time), there is evidence of periods of rapid rise on the order of meters over 100-year periods. Due to the "committed sea level rise" from continued warming of the ocean, over the course of the next several centuries the higher scenarios might be approached as "when, not if" if warming and emissions continue to rise unchecked. Or, put another way, there is uncertainty in how quickly the "dynamics" associated with ice sheets may affect future sea level rise, but not regarding whether or not global sea levels in the next several centuries will rise by at least a few meters.

The report has three main components. The 2022 technical report includes five possible scenarios of global sea level rise by 2100: Low (1 foot; 0.3 meters), Intermediate Low (1.6 feet; 0.5 meters), Intermediate (3.3 feet; 1.0 meter), Intermediate High (4.9 feet; 1.5 meters), and High (6.6 feet; 2.0 meters). These same scenarios were in the 2017 technical report, but the Extreme (8.2 feet; 2.5 meters) scenario included in 2017 has been removed (see Question 14). The 2100 projections for each global scenario stayed the same, since science suggests this range of futures remains possible.
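To make the scenario structure concrete, here is a minimal sketch (my illustration, not NOAA code) of the five scenarios keyed by their defining 2100 target values; the spread between scenarios reflects emissions uncertainty, while the 17th/50th/83rd-percentile band around each scenario reflects process uncertainty (those percentile values are location-specific, so they are not hard-coded here):

```python
# The five 2022 global mean sea level scenarios, keyed by their 2100
# target values in meters (illustrative data structure, not NOAA code).

M_TO_FT = 3.28084

scenarios_2100_m = {
    "Low": 0.3,
    "Intermediate Low": 0.5,
    "Intermediate": 1.0,
    "Intermediate High": 1.5,
    "High": 2.0,
}

for name, meters in scenarios_2100_m.items():
    print(f"{name:<18} {meters:>3.1f} m ({meters * M_TO_FT:.1f} ft)")

# Range BETWEEN scenarios corresponds to emissions uncertainty:
lo, hi = min(scenarios_2100_m.values()), max(scenarios_2100_m.values())
print(f"emissions-uncertainty spread in 2100: {lo}-{hi} m")
```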
However, the timing for different rates of rise for the different scenarios was updated based on new modeling and more realistic assumptions of Greenland and Antarctic ice sheet behavior, drawing upon the Intergovernmental Panel on Climate Change's Sixth Assessment Report. A result is that there is less acceleration in the higher scenarios until about 2050 and greater acceleration toward the end of this century. This has two primary implications. First, despite maintaining the same target values and having the same range between scenarios in 2100, the range covered by the scenarios is smaller in the near term than in the 2017 report. Second, the likely (17th-83rd percentile) ranges of projections for each scenario before and after the 2100 time point used to define the scenarios are wider than in the 2017 report.

A goal of the 2017 and 2022 technical reports is to examine the full range of plausible amounts of future global sea level rise, not just those rise amounts considered "likely." Quantifying the "unlikely but possible" sea level rise response can be critical to help bound certain risk planning exercises. For global mean sea level, these outcomes hinge on potential physical changes in the major ice sheets, where possible rapid collapses would contribute to very large amounts of future sea level rise. These "unlikely but possible" outcomes must be acknowledged and accounted for in some types of planning (e.g., for major infrastructure investments that have a very long operational life cycle and/or are critical community lifelines). For the 2017 report, these considerations led to the development of a set of global mean sea level scenarios that span the plausible range of sea level rise and are defined by a target value of rise in 2100. In the 2022 report, the same framework is adopted, and the following 2100 target values of sea level rise are used to differentiate the five scenarios: Low (1 foot; 0.3 meters), Intermediate-Low (1.6 feet; 0.5 meters), Intermediate (3.3 feet; 1 meter), Intermediate-High (4.9 feet; 1.5 meters) and High (6.6 feet; 2 meters).

To create the global mean sea level scenarios and associated regional relative sea level values, the 2022 technical report used the last report, from 2017, as a starting point. The scenarios from that report are updated with the most recent science and adopted analytical methods from the Intergovernmental Panel on Climate Change's (IPCC) Sixth Assessment Report, completed in 2021. The IPCC regularly convenes top researchers from around the world to synthesize the best available climate science. Both the technical report and the IPCC Sixth Assessment Report draw upon the latest sea level rise science. Because both reports depend upon the same underlying science, they are similar and consistent, but the global mean sea level scenarios lead to a different framing and structure. To generate the scenarios used in this report, the ensemble — or set — of projections in the Sixth Assessment Report that are tied to specific shared socioeconomic pathways (see Question 7) is filtered to identify subsets of pathways that are consistent with the scenario target values in 2100 (i.e., 0.3 meters, 0.5 meters, 1 meter, 1.5 meters and 2 meters). As in the Sixth Assessment Report, these scenarios are regionalized and then provided at individual tide gauge locations and for 1-degree grids along the global and U.S. coastlines. The median, 17th and 83rd percentile values are provided for each scenario at each tide gauge and grid location.
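The filtering step described above can be sketched as follows (a hedged illustration, not the report's actual code; the toy ensemble values and the 0.1 m closeness tolerance are assumptions made for the example):

```python
# Sketch of scenario construction: keep ensemble members whose 2100
# global mean sea level lands near a scenario's target value.
# Ensemble data and tolerance are illustrative assumptions.

TARGETS_M = [0.3, 0.5, 1.0, 1.5, 2.0]
TOLERANCE_M = 0.1  # assumed closeness criterion, for illustration only

def members_near_target(ensemble, target_m, tol_m=TOLERANCE_M):
    """ensemble: list of (member_id, gmsl_2100_m) pairs."""
    return [mid for mid, gmsl in ensemble if abs(gmsl - target_m) <= tol_m]

toy_ensemble = [
    ("ssp126_a", 0.32), ("ssp245_b", 0.55), ("ssp245_c", 0.71),
    ("ssp585_d", 0.98), ("ssp585_e", 1.47), ("ssp585_f", 2.04),
]

for target in TARGETS_M:
    print(f"{target} m -> {members_near_target(toy_ensemble, target)}")
```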
Sea level along the contiguous U.S. coastline is expected to rise (considering alignment of both the observation-based trajectories and the scenarios in 2050), on average, 10-12 inches (0.25-0.30 meters) in the next 30 years (2020-2050). This will vary locally because of regional factors (see Question 4).

With each passing year, improved observations and modeling help us get a clearer picture of how and when sea level is changing both globally and regionally (see Questions 3 and 4). The scenarios in the 2022 technical report are lower in the near-term decades than they were in the 2017 technical report because there is improved understanding of Antarctic and Greenland ice sheet dynamics (see Question 10). This improved understanding comes from additional observations, research, modeling, and expert elicitation efforts that indicate sea level rise will be slower in the next few decades than previously projected. The 2022 technical report removes the Extreme (2.5 meter) scenario because the probability of this scenario is now thought to be too low to merit inclusion.

Since 2017, scientists have worked hard to study and develop better modeling of both the Greenland and Antarctic ice sheets, and of how these two ice sheets generate different sea level changes across the globe, because ice mass loss results in changes in gravitational, rotational, and deformational effects. The combination of these effects is referred to as "fingerprinting," and the Greenland and Antarctic ice sheets have very different "fingerprints" (see Question 4). Locations far away from a sheet see greater amounts of sea level rise when ice mass is lost from that sheet, so it really matters which sheet is melting, how much, and when. The 2022 report uses the latest science and multiple methods to characterize ice sheet processes in both Antarctica and Greenland, and the result is a more realistic projection of global mean sea levels over time. Due to improved characterization of Greenland's potential increased contributions for the higher scenarios (Intermediate to High), this change results in projections of less rise along many U.S. East and Gulf locations through 2100. However, it is important to keep in mind that sea levels will continue to rise after 2100; the new projections just indicate we have more time to prepare.

Sea level scenarios will continue to be refined as scientists increasingly observe and learn more about the details of dynamic earth system processes (e.g., Antarctic ice sheet response to temperature increases). Additional data will help to reduce uncertainty. U.S. federal agencies monitor and assess key sea level rise source contributions globally and along U.S. coastlines, and this work can provide early indications of change in the trajectory of sea level rise, which can inform shifts in adaptation planning. The 2022 technical report further refines and narrows the possible range of scenarios from the 2017 report. Assessment reports like this are the best resource for staying up-to-date on the latest changes to the sea level rise scenarios and why those changes have occurred. These reports are anticipated about every five years.

Observation-based extrapolations, or sea level rise "trajectories," are estimates of relative sea level rise out to 2050. These are built by analyzing regional sets of tide gauge data.
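The next answer details how these trajectories are built; as a preview, the core idea (fit a rate-plus-acceleration, i.e. quadratic, trend to 1970-2020 gauge data, then extend it to 2050) can be sketched as follows. This is my illustration with synthetic data, not the report's code, and it omits the filtering of natural variability that the report applies:

```python
# Minimal sketch of an observation-based extrapolation: fit a quadratic
# trend (rate + acceleration) to a 1970-2020 tide-gauge record, then
# extrapolate to 2050. Synthetic data; parameters chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2021)
t = (years - 1970).astype(float)
# synthetic gauge record: 3 mm/yr rate, 0.08 mm/yr^2 acceleration, noise
msl_mm = 3.0 * t + 0.5 * 0.08 * t**2 + rng.normal(0.0, 15.0, t.size)

# least-squares fit: msl = c0 + rate*t + 0.5*accel*t^2
A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
c0, rate, accel = np.linalg.lstsq(A, msl_mm, rcond=None)[0]

def fitted(year):
    dt = year - 1970.0
    return c0 + rate * dt + 0.5 * accel * dt**2

print(f"fitted rate {rate:.2f} mm/yr, acceleration {accel:.3f} mm/yr^2")
print(f"extrapolated rise 2020-2050: {fitted(2050) - fitted(2020):.0f} mm")
# ~250 mm with these synthetic parameters, i.e. about 0.25 m, which is in
# line with the 10-12 inch (0.25-0.30 m) average quoted above.
```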
To create them, the rate and acceleration of sea level rise from 1970 to 2020 is calculated from sea level rise observations from regional sets of tide gauges. Filtering is done to remove some effects of natural variability that can bias trend characterizations, such as El Niño/La Niña cycles. For the global sea level rise extrapolation, satellite-based water elevation data (altimetry) were also included. Included in the 2022 technical report for the first time, observation-based extrapolations are provided for global sea level and eight coastal regions (the Northeast, Southeast, Eastern Gulf, Western Gulf, Southwest, Northwest, Hawaiian Islands, and the Caribbean). Separate extrapolations are also provided for the southern and northern coasts of Alaska and the Pacific islands, but these are caveated with greater uncertainty due to variations in land elevation and underlying regional sea level rise processes. These observation-based extrapolations are very similar to the model-based projections through 2050, and therefore serve as a further line of evidence for the confidence in the near-term trajectory of sea level rise. Or, put another way, with the continued global heating that is expected, there is strong reason to suspect that the current acceleration in sea level rise will continue, and this response is similar in both the observed trends and the modeled scenarios.

The technical report uses the term extreme water levels to refer to water levels experienced during a wide range of flooding events, from common events that happen ten times a year to rare events such as a flood with a 1% annual chance of occurring. The extreme water levels are used to assess current and future flood exposure within the coastal floodplain, considering future sea level rise, using NOAA's height-severity categories of minor, moderate, and major high tide flooding. In addition to long-term sea level rise, many different physical processes can affect coastal water levels on much shorter time scales, such as winds and storm surge, tides, and waves. Extreme water levels in this report are specifically those measured by NOAA tide gauges in mostly protected areas, and therefore reflect still water levels without direct wave influences. The extreme water levels generally relate to when coastal water levels exceed specific elevation thresholds related to local flooding hazards. These elevation thresholds are often reflective of flooding events ranging from bathtub-like "nuisance" flooding that is tidally driven to destructive storm-surge flooding whose impact footprint is specific to that storm.

The extreme water levels as defined in this report have probabilities, or likelihoods, of occurring in a given year that are based on statistical analysis of regional sets of historical tide gauge measurements, called a regional frequency analysis. Results based on regional frequency analysis typically suggest that higher water levels are more probable than results based upon a single tide gauge data record. This is because regional sets of observations better capture, spatially, the overall probability of a high-water event occurring from a passing storm. The perspective from a single gauge can be rather limited due to storm track variability (a storm missed the tide gauge) or short data records (important storms occurred prior to the gauge installation).
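One small piece of bookkeeping that helps when reading these probabilities: an event's expected frequency and its average recurrence interval are reciprocals. A tiny helper to make that explicit (my illustration, not from the report):

```python
# Convert between expected events per year and average recurrence interval.
# For rare events, the annual exceedance probability is approximately the
# expected number of events per year (e.g., the "1% annual chance" flood).

def recurrence_interval_years(events_per_year: float) -> float:
    """Average time between exceedances, in years."""
    return 1.0 / events_per_year

print(recurrence_interval_years(10.0))  # 0.1   -> a ~10-times-a-year event
print(recurrence_interval_years(0.01))  # 100.0 -> the "100-year" flood
```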
The regional frequency analysis extreme water level probabilities are different from those produced and used by FEMA, since FEMA-based estimates often include synthetic storm simulations (i.e., they consider storms that could happen under today's climate but might not yet have happened) and high water marks not necessarily directly measured by a local tide gauge. The different methods produce different probabilities for low-frequency flooding (e.g., a 100-year flood), but for more frequent events, such as high water levels occurring every few years, the two sets of probabilities are quite similar.

This report considers extreme water levels that span from rare events (1% annual chance of occurring) to more common events (10 times/year). Specifically, the extreme water levels are used to assess current and future flood exposure within the coastal floodplain using NOAA's height-severity categories of minor, moderate, and major high tide flooding. NOAA high tide flooding thresholds broadly define water levels where U.S. infrastructure becomes impacted. High tide flooding heights are calibrated to impact levels used in weather forecasting to trigger emergency responses and are considered the best tangible way to communicate the impacts of extreme water levels, today and in the future, to the public. Minor high tide flooding, about 2 feet (0.6 meters) above average high tide, is disruptive to communities where it occurs (e.g., stormwater backups and road closures); moderate flooding, about 3 feet (0.9 meters) above average high tide, tends to cause more damage (e.g., to homes or businesses); and major flooding, about 4 feet (1.2 meters) above average high tide, is often quite destructive, requiring post-event repairs or rebuilding and sometimes evacuations.

The report explores how the annual frequencies of high tide flooding are expected to change by 2050, considering the local sea level rise scenario that closely aligns with the rise associated with the regional observation-based extrapolations. The concept of a flood regime shift is used to describe how the annual flood frequency associated with a particular coastal flood type (i.e., NOAA minor, moderate, and major high tide flooding) changes to that of another because of sea level rise. For example, by 2050, NOAA moderate high tide flooding is expected to occur, on average along the U.S. coastline, more frequently than NOAA minor high tide flooding events occur today. Or, put another way, after the roughly 1 foot (0.3 meters) of sea level rise that is expected to occur on average along the U.S. coastline, tides and storm surges that today cause minor and moderate high tide flooding will cause moderate and major high tide flooding.

Federal agencies have been updating tools with the new sea level rise information, and will continue to do so. A companion Application Guide is available to help users apply and integrate the report into local planning and adaptation decisions. The guide, penned by professionals with expertise in applying sea level rise to local-level planning, helps readers wade through the considerations and arrive at what's best for their community. We also have several additional resources available for understanding and applying the updated sea level rise projections.
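The regime-shift arithmetic above is simple enough to sketch directly (my illustration, not NOAA's implementation; the thresholds are the report's approximate heights above average high tide, and the 0.3 m offset is the roughly 1 foot of rise expected on average by 2050):

```python
# Classify a still-water height above average high tide into NOAA's
# high tide flooding severity categories, then show the regime shift
# produced by ~0.3 m of sea level rise.

def flood_category(height_above_avg_high_tide_m: float) -> str:
    if height_above_avg_high_tide_m >= 1.2:   # ~4 ft: destructive
        return "major"
    if height_above_avg_high_tide_m >= 0.9:   # ~3 ft: damaging
        return "moderate"
    if height_above_avg_high_tide_m >= 0.6:   # ~2 ft: disruptive
        return "minor"
    return "below flood thresholds"

event_today_m = 0.7      # a tide/surge event causing minor flooding today
slr_by_2050_m = 0.3      # average expected rise along the U.S. coastline

print(flood_category(event_today_m))                  # minor
print(flood_category(event_today_m + slr_by_2050_m))  # moderate
```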
Mental health literacy has been defined as 'knowledge and beliefs about mental disorders, which aid in their recognition, management, and prevention' (p396)1. Depression literacy is a specific type of mental health literacy and is defined as the ability to recognize depression and make informed decisions about treatments for depression2. Jorm et al proposed that, as with health literacy, people with high mental health literacy would hold knowledge and beliefs about mental disorders and their treatment similar to those of mental health and medical professionals (eg use similar names to describe symptoms and see medical and mental health professionals as appropriate treatment providers)3. The prevalence and impact of mental health and depression literacy have been studied extensively in Australia3-10 and also in Canada, India, and among African-American clergy2,6,7,11-14. The study of mental health literacy began in 1995 when Jorm et al3 explored the mental health literacy of a representative sample of Australians, using vignettes depicting a person experiencing the symptoms of a major depressive disorder and schizophrenia. That study revealed that only 39% of Australians could correctly label the depression vignette, and that doctors, counselors, and close friends and family were most frequently cited as sources of help for the symptoms described in the vignette. Partly due to these findings, the Australian government launched a national depression initiative aimed at improving mental health literacy8,9,15. A follow-up study of mental health literacy revealed that the ability to correctly label a vignette describing depressive symptoms had increased from 39% to 67% when the depression initiative was in its third year7. Studies have also shown that rates of depression literacy vary among populations2,6,7,11, and there is evidence that men have lower mental health literacy than women2,16.

There is a paucity of research on the depression literacy of rural Americans. This gap in research is particularly problematic given that people living in rural areas have lower rates of specialty mental health service utilization, despite experiencing a prevalence of mental disorders equivalent to that of more urban areas17-19. The research on mental health literacy in rural populations has primarily been conducted in Australia, where researchers have not found a rural-urban difference in the ability to identify depression; however, it was found that those living outside major cities regarded counselors or psychologists as less helpful than those who lived in major metropolitan cities4. Research has demonstrated that, due to a shortage of mental health specialists in rural areas, primary care and informal networks such as the religious community have become the 'de facto' mental health system20,21. Additionally, rural populations have been found to have lower perceived need for and utilization of specialty mental health providers (eg counselors) than urban populations4,21,22. Due to these unique factors in rural help-seeking, this study also examined how depression literacy is related to perceived need for and utilization of a doctor, counselor, or religious leader.

Perceived need is the belief that treatment is needed23. Because perceived need is a personal, subjective judgment, it does not necessarily correspond with the evaluation and diagnosis of a health professional24. A person does not decide to seek treatment until they perceive the need to do so (unless they are persuaded by someone else).
Research on the rural population supports the idea that perceived need is important in the decision to seek mental health treatment. For example, outreach activities targeted to rural populations are not successful unless they can convince individuals that treatment is needed21,25. In addition to perceived need, this study explores the relationship of depression literacy with utilization of a medical provider, mental health specialist, and religious leader for emotional problems. Knowledge about symptoms and illness processes can potentially impact when and whether people seek help. In the health literacy literature, low health literacy has been associated with delayed help-seeking for prostate cancer, while improved health literacy has been associated with increased utilization of mammography services26-28. Similarly, lack of mental health literacy has been associated with delays in seeking treatment for mental health issues16. The choice of a medical provider, specialty mental health provider, and religious leader or alternative healer as variables in this study stemmed from previous research showing that rural populations often rely on medical providers and alternative healers and utilize specialty mental health services less than their urban counterparts4,20-22. Exploring these relationships may increase understanding of how knowledge and labeling of symptoms influence utilization decisions for rural populations.

The primary purpose of this study was to determine rates of depression literacy and variables related to depression literacy in a rural, mid-southern US sample. In addition, a person's ability to label depressive symptoms accurately was assessed as related to perceptions of need and utilization of treatment from a: (i) primary care provider; (ii) counselor or therapist; and (iii) preacher or pastor. Because depression literacy has been hypothesized in the literature to increase concordance of professional and public opinions about need for mental health treatment3, it was hypothesized that:
- Those with higher depression literacy will have a higher perceived need for a doctor or counselor than for a religious leader.
- Those with high depression literacy will be more likely to have sought help from a doctor or counselor than from a religious leader.

The present study incorporated a definition of 'rural' posed by Cromartie and Bucholtz29, along with consideration of population size, adjacency to urban areas, and economic influence, to distinguish rural participants. Specifically, 'rural' was defined as living in a town of less than 5500 people that is situated outside the major commuting and economic patterns of the metropolitan area (ie living at least 30 min outside major cities with populations ≥30 000). Participants were recruited at local rural grocery stores to increase the likelihood that their economic activity was centered in the rural community. Further information about the definition of the sample is available elsewhere30.

Recruitment and sample
Participants were recruited inside and outside local grocery stores in two towns with populations less than 5500. These towns were located 45 min and 2 hours from major metropolitan areas. Research assistants sat at a table containing posters, flyers, and information about the study. People passing by who showed an interest in the study were asked if they would like to help with a survey from the University of Arkansas.
The research protocol was explained, and people who were interested signed up for the study by providing their name, telephone number, and the best time to be called. Each person completing the survey was mailed a US$15 gift certificate to the local grocery store. Only people who showed interest in the study were asked to participate. The recruitment materials and study methods were approved by the University of Arkansas Institutional Review Board (IRB), and steps were taken to ensure the confidentiality of participants (eg participants were not required to provide their family name and were given the choice to call to sign up rather than leave their contact information with the recruiters). This recruitment strategy was used to gain access to a rural community sample that conducts its business within the local community, and is similar to the Rost et al approach to recruitment25. Unlike in urban areas, most rural areas have only one grocery store, and according to the Center for Rural Affairs (a non-profit Nebraska-based organization), commerce through local grocery stores consists mostly of people who live and work within these communities. Those who work outside the community, or commute, are more likely to shop in the areas where they work31.

A total of 268 participants were recruited (34.8% male, 64.8% female), of whom 39% (n = 105) were successfully contacted by telephone and agreed to participate (35% of the women and 46% of the men who were initially recruited: χ2 = 2.9, df = 1, p = 0.09). Due to difficulty recruiting men for the study, and in order to have a representative sample, increased effort was made to have the recruited men complete the survey. Most of the participants lived in the counties that contained the grocery store they visited (87.3%); the remainder indicated they lived in neighboring counties. Some of those recruited were not contacted due to completion of data collection (n = 40). Three attempts were made to reach each participant by telephone, and messages were left when possible. No further contact attempts were made for those who were not reached after 3 attempts (n = 89), for those who were recruited but declined to participate (n = 3), or for participants whose telephone number was incorrect or disconnected (n = 31). Of the 105 people who were contacted and completed the interview (44% male), six were excluded from the sample because their reported zip (postal) code was for a city with a population of more than 5500. Therefore, the total number of participants included in the final sample was 99, of which 43% were male (Fig 1).

Figure 1: Consort diagram.

Participants who were contacted gave verbal informed consent over the telephone before beginning the survey. The survey took approximately 15 min to complete.

Sociodemographic information: A variety of sociodemographic variables were assessed during the interview, including age, sex, ethnicity, and educational attainment (Table 2).

Depression literacy: Depression literacy was assessed with a vignette created by Rost et al25 that describes a person experiencing symptoms consistent with a major depressive episode (Fig 2). The vignette was modified to a direct-address style but otherwise retained all essential elements of the story. The vignette was created to be used in both rural and urban samples and therefore was not modified to reflect a rural lifestyle.
After the vignette was read to participants, they were asked whether they thought the situation described in the vignette was a problem for them, and those who answered 'yes' were asked what they would call the problem. Participants were considered to have high depression literacy if they:
- Reported that the situation described in the vignette was a problem.
- Accurately labeled the problem 'depression' (or used some form of the word).

Participants were considered to have low depression literacy if they:
- Did not consider the situation a problem.
- Provided any other label for the vignette (eg stress).

Figure 2: Vignette used to determine depression literacy (adapted from Rost et al24).

Coding of the high and low depression literacy variables was conducted by the first author (TD) and independently by a contributor to the manuscript; any discrepancies were discussed and resolved. All answers that included 'depression', 'depressed', or 'depressive' were coded as high depression literacy. All answers coded 'not a problem', and any other label that did not include a form of the word 'depression', were coded as low depression literacy. If a participant provided more than one label, the response was coded as high depression literacy if any of those labels met the criteria; otherwise the response was coded as low depression literacy.
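As a concrete restatement of this coding rule, a minimal sketch in Python (the function name and example labels are illustrative, not part of the study materials):

```python
# Sketch of the two-part coding rule described above: a response is coded
# as high depression literacy only if the vignette was judged a problem
# AND at least one offered label contains a form of the word "depression".

def depression_literacy(considered_a_problem: bool, labels: list[str]) -> str:
    if considered_a_problem and any("depress" in lbl.lower() for lbl in labels):
        return "high"
    return "low"

print(depression_literacy(True, ["depression"]))           # high
print(depression_literacy(True, ["stress", "depressed"]))  # high (any label counts)
print(depression_literacy(True, ["stress"]))               # low
print(depression_literacy(False, ["depression"]))          # low (not seen as a problem)
```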
Depression symptoms: Self-reported symptoms of mental illness in the past year were assessed using a modified version of the Brief Symptom Inventory-18 (BSI-18)32. The BSI-18 has been demonstrated to have good internal consistency, with Cronbach's α = 0.89 and item-to-item correlations of 0.37-0.7033. For the current study, only 17 of the 18 questions were used because the question related to suicidal ideation was excluded (due to IRB concerns about non-clinician students asking participants about suicidal ideation over the telephone). To expand the time period for measured symptomology, the BSI-18 instructions were also modified by asking people to endorse whether they had experienced symptoms in the past year, rather than the past week. For the purposes of this study, the Depression Index score was computed according to the BSI-18 instructions, adjusting for the omission of one item32. Despite the modifications made to the measure, the scoring allows for the omission of one item per scale, and the internal reliability of the Depression Index score was good (α = 0.84)30.

Perceived need for services: To measure perceived need for services, participants were asked to imagine they were experiencing the symptoms in the vignette. A question then assessed whether the participants believed they should seek help from: (i) a doctor; (ii) a counselor or therapist; and (iii) a religious leader. Each response was recorded on a 4-point scale from 'definitely yes' to 'definitely no'. Higher scores indicated greater perceived need for that service30.

Service utilization: Lifetime use of medical professionals, specialty mental health providers, and religious or alternative healers for emotional, mental health, or substance abuse problems was measured using a series of questions from the National Survey of American Life34. Mental health service utilization was defined as seeking the help of a psychiatrist, psychologist, social worker, or counselor for an emotional or substance abuse problem at some point in the past. Medical service utilization was defined as seeking the help of a primary care physician or specialty physician (eg obstetrician, cardiologist) for an emotional or substance abuse problem at some point in the past. Utilization of a religious leader was defined as seeking the help of a religious leader or alternative healer for an emotional or substance abuse problem at some point in the past30.

The relationship between high depression literacy and various demographic, psychiatric, utilization, and perceived need variables was explored by conducting bivariate analyses (Χ2, Fisher's exact tests, and t-tests). Multivariable logistic regression analyses were conducted to better understand which variables related to depression literacy, and to test hypotheses 1 and 2 by examining how demographic variables and depression literacy related to perceived need for and utilization of services for emotional problems. The regression analyses predicting depression literacy and perceived need were conducted with the entire sample (n = 99), while the analysis predicting service utilization was limited to participants who endorsed lifetime help-seeking for an emotional problem (n = 45). Statistical significance was set at an α level of 0.05; however, due to the exploratory nature of this study, only predictors with odds ratios above 1.5 or below 0.5 were considered relevant to the findings.

The final sample (N = 99) was 43% male, and 92% self-identified as White. Due to the low number of racial and ethnic minorities in this sample, race was not included in the regression analyses. The average age was 45.4 years (SD = 15.3); 57% reported an annual household income of less than US$25,000 and 18% had less than a high school education. Detailed demographic information is provided in Table 1 and elsewhere30.

Table 1: Sample demographic information (N = 99)

First, the prevalence of high depression literacy was determined (ie the percentage of participants who both stated that the vignette would be a problem for them and accurately labeled the vignette was calculated), and the bivariate and multivariable relationships were explored among depression literacy, sex, age, income, and education. Results revealed that only 53% of the total sample had high depression literacy. A large sex difference in depression literacy emerged, with 68% of women accurately identifying the symptoms as depression compared with only 35% of men (Χ2 = 12.15, p = 0.01). Education, age, and income (calculated as less than US$25,000 in annual income) did not have significant bivariate relationships with depression literacy. Additionally, depression symptoms on the BSI-18 (high depression literacy M = 4.74, SD = 6.66, n = 51; low depression literacy M = 3.02, SD = 5.34, n = 47) did not have a significant bivariate relationship with depression literacy (Table 2). To better understand the relationship among these variables, a logistic regression analysis was conducted with age, sex, education (< high school) and income (< US$25,000/year) as the independent variables and depression literacy as the dichotomous dependent variable. Results revealed that sex (OR = 4.28, 95% CI = 1.68-11.60, p = 0.003) continued to have a significant relationship with depression literacy.
Additionally, education had a significant relationship with depression literacy (OR = 0.25, 95% CI = 0.07-0.84, p = 0.03; Table 2), with those with less than a high school education being more likely to have low depression literacy.

Table 2: Bivariate and multivariable analyses by depression literacy status

Next, the bivariate associations were explored between depression literacy and perceived need for and utilization of services for emotional problems (Table 3). There was no statistically significant relationship between depression literacy and perceived need for a doctor or a counselor. There was a statistically significant relationship between depression literacy and perceived need for a religious leader, with those with high depression literacy being more likely to perceive a need for a religious leader than those with low depression literacy (Χ2 = 4.10, p = 0.04; Table 3). Results for utilization of services for emotional problems revealed that there was no statistically significant relationship between depression literacy and utilization of a doctor or a mental health specialist. However, there was a statistically significant relationship between depression literacy and the utilization of a religious leader for an emotional problem (Χ2 = 4.68, p = 0.03). Those with high depression literacy were more likely to have sought help from a religious leader than those with low depression literacy (Table 3).

Table 3: Differences in perceived need and past utilization of mental health services for those with high and low depression literacy

To test the first hypothesis, that higher depression literacy would be significantly associated with perceived need for a doctor and counselor but not a religious leader, three logistic regression models were conducted predicting perceived need for a doctor, a counselor, and a religious leader, with common predictors of perceived need (age, sex, income, and education) and depression literacy as predictor variables (Table 4). Contrary to the first hypothesis, results of the first and second regression equations revealed that none of the predictor variables were significant predictors of perceived need for a doctor or a counselor. After controlling for demographic and symptom variables, depression literacy was not a significant predictor of perceived need for a religious leader (Table 4).

Table 4: Logistic regression models predicting perceived need for a doctor, counselor, and religious leader

To test the second hypothesis, that higher depression literacy would predict utilization of a medical provider and specialty mental health provider but not a religious leader or alternative healer, three logistic regression models were conducted predicting past utilization of a medical provider, a specialty mental health provider, and a religious leader or alternative healer, with common predictors of utilization (age, sex, income, depression symptoms, and education) and depression literacy as predictor variables (Table 5). Results of the first regression equation revealed that participants with more education were more likely to utilize a medical professional for an emotional problem (OR = 21.98, CI = 2.01-674.45, p = 0.03). Contrary to the second hypothesis, depression literacy was not significantly associated with utilization of a medical provider.
The results of the second regression equation revealed that none of the independent variables was a significant predictor of utilization of a specialty mental health provider for emotional problems (Table 5). Results of the third regression equation revealed that depression literacy (OR = 10.08, 95% CI = 1.74-92.79, p = 0.02) was a significant predictor of utilization of a religious leader or alternative healer for emotional problems (Table 5).

Table 5: Logistic regression models predicting utilization of a medical professional, mental health professional, and a religious leader or alternative healer among participants who endorsed lifetime help-seeking for emotional problems

The present study revealed that, while the vast majority of participants described the situation in the vignette as a problem, only 53% of respondents were able to accurately label the symptoms described in the vignette as 'depression' or use some form of the word to describe it. This percentage is higher than the 39% and 48% reported in rural and urban Australian samples in 1995 and 20011,10. However, the percentage found in this sample is lower than the 76% reported for a Canadian sample in 20062, and the 68% in an Australian sample and 81% in a rural Australian sample in 20046,7. This suggests that depression literacy has improved with time in some areas of the world, but that some rural Americans may have lower depression literacy than rural residents in other parts of the world. Consistent with previous research, men in this study had lower rates of depression literacy than women2, and this effect remained after controlling for the age, education, income, and depression symptoms of the sample. Also consistent with previous literature6 was the finding that participants who personally experienced symptoms of depression in the past year had higher depression literacy than those who did not experience depression. This is consistent with previous findings that a greater familiarity with depression, either from personal experience or knowing others who have suffered from depression, is associated with higher depression literacy5. Higher education levels were also associated with higher depression literacy in the present sample.

In contrast to the stated hypotheses of this study, higher depression literacy was not significantly associated with perceived need for and utilization of a medical professional or mental health professional. In fact, both perceived need for and past utilization of a religious leader for emotional problems had strong bivariate associations with depression literacy, and depression literacy continued to have a significant relationship with utilization of a religious leader in multivariable analyses controlling for age, sex, education, income, and depressive symptoms. This association has not been reported in previous research. These interesting findings may be related to several cultural and environmental factors in the rural area studied. First, it may be that cultural beliefs about depression include a spiritual component, prompting the perceived need for a religious leader, while other explanations for the symptoms (eg attributing them to stress or fatigue) do not. It may also be that religious leaders in rural areas of the USA see counseling as an important part of their role and provide psycho-education and counseling for those in the community, prompting people in these communities to perceive a need to talk to religious leaders when they encounter depression specifically.
This idea is supported by a study that found rural African-American clergy have high depression literacy35. In contrast, research on services provided by rural and urban churches found that predominately African-American churches provide more mental health and social services than predominately White churches36; this, however, does not preclude a rural, predominately White sample from seeking mental health services from a religious leader. Because of the preliminary nature of these findings, further research should be conducted on the specific relationship between depression literacy and the use of religious leaders for emotional problems in rural areas. Specifically, the relationship between being able to recognize symptoms of depression and the choice of help-seeking providers should be explored in a larger sample. Also, qualitative studies of knowledge about and attitudes toward mental health and religion should be conducted to better understand this path of help-seeking. If rural Americans' beliefs about depression involve spirituality and religion in such a way that they are more likely to seek a religious leader when they recognize depression, it will be important to: (i) include information about religion and spirituality in treatments for depression in this population; and (ii) provide education and assistance to rural religious leaders to improve the likelihood that they will link patients to appropriate services or provide services that will lead to improvement.

This study has some important limitations. First, mental health service use was measured as past lifetime utilization, and the causal direction between mental health literacy and service use cannot be determined from the analysis. It is possible that a previous encounter with a helping professional, such as a religious leader who provided psycho-education about depression, may have increased depression literacy rates in participants, rather than depression literacy predicting willingness to seek help for emotional problems. Prospective studies can help address this limitation. Second, although all of the participants in the study resided in towns with populations less than 5500, the sample was from a mid-southern US state with a predominately White population and may not be representative of the many rural or frontier areas in the USA; therefore, generalizations from this study to other rural populations should be made with caution. Also, recruiting outside grocery stores may limit the generalizability of the sample due to selection bias (those who shop at the local grocery store and sign up for studies may be different from those who do not)31. However, as discussed, local grocery stores are considered social centers in rural areas, and the study was attempting to recruit the type of people who do their grocery shopping in a local grocery store, not commuters who lack such ties to the local community. Third, the survey did not employ a comparison group of urban American residents. Without this comparison group, it is impossible to determine to what extent the relations observed among the variables are unique to rural residents. Future studies should expand the study population to identify mental health literacy in other American populations.
Finally, although the methods of this study are consistent with other measures of depression literacy, and studies have shown that this type of measurement can be used to track changes in depression literacy [1,4,5,10], the use of a hypothetical situation to simulate preferences and choices for treatment may not mirror the actual perceptions and choices people make when they are experiencing similar symptoms.

This study contributes to the current literature by investigating mental health literacy in a community sample of rural Americans. The results confirm previous findings that men have lower depression literacy than women. As such, campaigns to educate the public about mental illness, such as those launched in Australia, may do well to target men in particular [8,9,15]. The results of this study also suggest that rural Americans have lower depression literacy than populations in other developed countries. Finally, the connection between depression literacy and utilization of religious leaders should be examined further, because these findings could have implications for addressing disparities in service utilization in rural populations through the religious community.

The authors acknowledge Tara McGahan MA, Arthur Andrews MA, Dara Dossett, and Megan Alexander for their help in recruitment, data collection, and data entry for this project. The authors also acknowledge Tia Carraway MS for her assistance in statistical analyses. This project was completed as part of the first author's dissertation study and was funded by an internal grant from the Marie Howell's Endowment to the University of Arkansas Department of Psychology.

References

1. Jorm AF, Korten AE, Jacomb PA, Christensen H, Rodgers B, Pollitt P. "Mental health literacy": a survey of the public's ability to recognise mental disorders and their beliefs about the effectiveness of treatment. Medical Journal of Australia 1997; 166(4): 182-186.
2. Wang J, Adair C, Fick G, Lai D, Evans B, Perry BW et al. Depression literacy in Alberta: findings from a general population sample. Canadian Journal of Psychiatry 2007; 52(7): 442-449.
3. Jorm AF. Mental health literacy: public knowledge and beliefs about mental disorders. British Journal of Psychiatry 2000; 177: 396-401.
4. Griffiths KM, Christensen H, Jorm AF. Mental health literacy as a function of remoteness of residence: an Australian national study. BMC Public Health 2009; 9: 92.
5. Eckert KA, Kutek SM, Dunn KI, Air TM, Goldney RD. Changes in depression-related mental health literacy in young men from rural and urban South Australia. Australian Journal of Rural Health 2010; 18(4): 153-158.
6. Bartlett H, Travers C, Cartwright C, Smith N. Mental health literacy in rural Queensland: results of a community survey. Australian and New Zealand Journal of Psychiatry 2006; 40(9): 783-789.
7. Jorm AF, Christensen H, Griffiths KM. The public's ability to recognize mental disorders and their beliefs about treatment: changes in Australia over 8 years. Australian and New Zealand Journal of Psychiatry 2006; 40(1): 36-41.
8. Jorm AF, Christensen H, Griffiths KM. Changes in depression awareness and attitudes in Australia: the impact of Beyondblue: the national depression initiative. Australian and New Zealand Journal of Psychiatry 2006; 40(1): 42-46.
9. Jorm AF, Christensen H, Griffiths KM. The impact of Beyondblue: the national depression initiative on the Australian public's recognition of depression and beliefs about treatments. Australian and New Zealand Journal of Psychiatry 2005; 39(4): 248-254.
10. Wright A, Harris MG, Wiggers JH, Jorm AF, Cotton SM, Harrigan SM et al. Recognition of depression and psychosis by young Australians and their beliefs about treatment. Medical Journal of Australia 2005; 183(1): 18-23.
11. Tieu Y, Konnert C, Wang J. Depression literacy among older Chinese immigrants in Canada: a comparison with a population-based survey. International Psychogeriatrics 2010; 22(8): 1318-1326.
12. Kermode M, Bowen K, Arole S, Joag K, Jorm AF. Community beliefs about treatments and outcomes of mental disorders: a mental health literacy survey in a rural area of Maharashtra, India. Public Health 2009; 123(7): 476-483.
13. Kermode M, Bowen K, Arole S, Pathare S, Jorm AF. Attitudes to people with mental disorders: a mental health literacy survey in a rural area of Maharashtra, India. Social Psychiatry and Psychiatric Epidemiology 2009; 44(12): 1087-1096.
14. Stansbury K, Schumacher M. An exploration of mental health literacy among African American clergy. Journal of Gerontological Social Work 2008; 51(1-2): 126-142.
15. Highet NJ, Luscombe GM, Davenport TA, Burns JM, Hickie IB. Positive relationships between public awareness activity and recognition of the impacts of depression in Australia. Australian and New Zealand Journal of Psychiatry 2006; 40(1): 55-58.
16. Thompson A, Hunt C, Issakidis C. Why wait? Reasons for delay and prompts to seek help for mental health problems in an Australian clinical sample. Social Psychiatry and Psychiatric Epidemiology 2004; 39(10): 810-817.
17. Gamm LD, Hutchison LL, Dabney BJ, Dorsey AM. Rural healthy people 2010: a companion document to healthy people 2010. College Station, TX: The Texas A&M University System Health Science Center, School of Rural Public Health, Southwest Rural Health Research Center, 2003.
18. Hauenstein EJ, Petterson S, Rovnyak V, Merwin E, Heise B, Wagner D. Rurality and mental health treatment. Administration and Policy in Mental Health 2006; 34(3): 255-267.
19. Kessler RC, Chiu WT, Demler O, Merikangas KR, Walters EE. Prevalence, severity, and comorbidity of 12-month DSM-IV disorders in the National Comorbidity Survey Replication. Archives of General Psychiatry 2005; 62(6): 617-627.
20. Weaver AJ. Has there been a failure to prepare and support parish-based clergy in their role as frontline community mental health workers: a review. Journal of Pastoral Care 1995; 49(2): 129-147.
21. Fox J, Merwin E, Blank M. De facto mental health services in the rural south. Journal of Health Care for the Poor and Underserved 1995; 6(4): 434-468.
22. Gamm LG, Stone S, Pittman S. Mental health and mental disorders - a rural challenge: a literature review. In: Rural healthy people 2010: a companion document to healthy people 2010. College Station, TX: Texas A&M University System Health Science Center, School of Rural Public Health, Southwest Rural Health Research Center, 2003.
23. Maddux JE, Brawley L, Boykin A. Self-efficacy and health behavior: prevention, promotion, and detection. In: JE Maddux (Ed.). Self-efficacy, adaptation, and adjustment: theory, research, and application. New York: Plenum, 1995.
24. Rost K, Smith GR, Taylor JL. Rural-urban differences in stigma and the use of care for depressive disorders. Journal of Rural Health 1993; 9(1): 57-62.
25. Mojtabai R, Olfson M, Mechanic D. Perceived need and help-seeking in adults with mood, anxiety, or substance use disorders. Archives of General Psychiatry 2002; 59(1): 77-84.
26. Bennett CL, Ferreira MR, Davis TC, Kaplan J, Weinberger M, Kuzel T et al. Relation between literacy, race, and stage of presentation among low-income patients with prostate cancer. Journal of Clinical Oncology 1998; 16(9): 3101-3104.
27. Davis TC, Berkel HJ, Arnold CL, Nandy I, Jackson RH, Murphy PW. Intervention to increase mammography utilization in a public hospital. Journal of General Internal Medicine 1998; 13(4): 230-233.
28. Berkman ND, DeWalt DA, Pignone MP, Sheridan SL, Lohr KN, Lux L et al. Literacy and health outcomes. Evidence report/technology assessment no. 87; AHRQ publication no. 04-E007-2. Rockville, MD: Agency for Healthcare Research and Quality, 2004.
29. Cromartie J, Bucholtz S. Defining the "rural" in rural America. Amber Waves 2008; 6(3): 28-34.
30. Deen TD, Bridges AJ, McGahan TC, Andrews AA. Cognitive appraisal of specialty mental health services and their relation to mental health service utilization in the rural population. Journal of Rural Health 2011; (in press).
31. Bailey JM. Rural grocery stores: importance and challenges. Lyons, NE: Center for Rural Affairs Rural Research and Analysis Program, October 2010.
32. Derogatis LR. BSI-18: administration, scoring and procedures manual. Minneapolis, MN: National Computer Systems, 2000.
33. Zabora J, Brintzenhofe-Szoc K, Jacobsen P, Curbow B, Piantadosi S, Hooker C et al. A new psychosocial screening instrument for use with cancer patients. Psychosomatics 2001; 42(3): 241-246.
34. Wang PS, Lane M, Olfson M, Pincus HA, Wells KB, Kessler RC. Twelve-month use of mental health services in the United States: results from the National Comorbidity Survey Replication. Archives of General Psychiatry 2005; 62(6): 629-640.
35. Stansbury KL, Brown-Hughes T, Harley DA. Rural African American clergy: are they literate on late-life depression? Aging and Mental Health 2009; 13(1): 9-16.
36. Blank MB, Mahmood M, Fox JC, Guterbock T. Alternative mental health services: the role of the black church in the south. American Journal of Public Health 2002; 92(10): 1668-1672.
The story of Muziris starts as early as 3000 BC, when Babylonians, Assyrians and Egyptians came to the Malabar Coast in search of spices. Later these Middle Eastern groups were joined by Arabs and Phoenicians, and gradually Muziris, in Kodungallur, entered the cartography of world trade. From then onwards, Muziris has held the key to a good chunk of Kerala's ancient history along this ancient trade route.

Muziris was a port city, among the earliest of its kind in the world. A spice city to ancient chroniclers, Muziris was also known as Murachipattanam. In the Ramayana, Murachipattanam is the place through which the sleuths of Sugreeva (one of the monkey kings) scurried while looking for the abducted Sita. When Kerala established itself as a major center for spice, it was the ancient port of Muziris that emerged as its hub. Sangam literature describes Roman ships coming to Muziris laden with gold to be exchanged for pepper. According to the first-century accounts of Pliny the Elder and the author of the Periplus of the Erythraean Sea, Muziris could be reached in 14 days from the Red Sea ports on the Egyptian coast, depending purely on the monsoon winds.

However, tragedy struck in 1341, when the profile of the water bodies in the Periyar River basin on the Malabar Coast underwent a major transformation, and Muziris dropped off the map due to flood and earthquake. The remnants of the port and its erstwhile glory still remain as reminders of an eventful past. They are being conserved and preserved for future generations through one of India's largest conservation projects, the Muziris Heritage Project. Supplementing the Muziris heritage sites are 21 museums and other landmarks that aim to educate people about 2000 years of Kerala history.

The Muziris Heritage Project is one of the biggest conservation projects in India, where the state and the central governments have come together to conserve a rich culture that is 3000 years old or more. This region forms part of the heritage tourism circuit between North Paravur in Ernakulam and Kodungallur in Thrissur. Shrines, forts, palaces, seminaries, cemeteries, boatyards and markets spread from the municipality of North Paravur to the municipality of Kodungallur will be preserved accordingly. Various performing arts that represent the non-physical aspect of the Muziris region are also in the process of being conserved. In the initial phase of the project, four of the 27 museums have been opened to the public: the Paliam Nalukettu, the Paliam Dutch Palace, the Chendamangalam Jewish synagogue and the Paravur Jewish synagogue. Two archaeological sites, Pattanam and Kottappuram, where archaeological excavations and explorations are being undertaken, will also be in focus.

Many artifacts of interest have been unearthed at various sites in the North Paravur-Kodungallur region of Kerala through excavations carried out as part of the Muziris Heritage Project. Utensils, clothes, coins, agricultural tools and inscriptions on plates or papyrus, along with folklore, tell us about the lives of the people of that time. Muziris has the distinction of having yielded a complete human skeleton for the first time in India, from the Kottappuram fort area. Some of the items excavated here include Chinese coins, Chinese inscriptions, pieces of decorated porcelain, West Arabian pottery pieces, iron nails, bullets, stone beads, 17th-century Dutch coins and tiles. These will eventually go into the museums to be set up.
Pattanam is about one and a half kilometres from Kodungallur on the North Paravur route in Kerala, and the Kerala Council for Historical Research (KCHR) has undertaken a massive research project here. One of the first multi-disciplinary excavations undertaken by the Government of Kerala, its main objective was to identify an early historic urban settlement and the ancient Indo-Roman port of Muziris on the Malabar Coast. The structures unearthed point to exactly that, and they suggest that the site was first occupied by the indigenous Iron Age people. A large number of objects of Roman origin have been unearthed from the site, serving as evidence of the extensive maritime contacts of this region during the early historic period.

The excavation carried out in the Kottappuram area has unearthed a Portuguese fort and numerous remnants of other past cultures. The Kottappuram fort, also known as Kodungallur fort, was built in 1523. Chinese wares, red slipped ware, other pottery artifacts and iron objects are also included in the list of Kottappuram findings. Other parts of the Muziris Heritage Project that await excavation include Cheraman Parambu, Kottayil Kovilakam and Pallippuram fort, among others.

Tracing the history of Muziris is not an easy task. Ancient literature provides some vital clues in this regard. Early Tamil literature, known as Sangam literature, and the Greco-Roman accounts are clear in linking this port town with the early Cheras. Present-day Chendamangalam in the Muziris heritage region, whose original name was Jayanthamangalam, after the Pandyan king Jayanthan, supports the view that Pandyan sway extended up to the Periyar in the 7th century AD. The fact that 10 out of 13 important Vaishnavite temples of Malanadu were situated south of the river Periyar in the 9th century indicates Pandyan influence in the region during the time of Jatila Parantaka (765-815), who claimed to be a Parama Vaishnava. Part of Malabar, south of Kerala, was under the sway of the Pandyans of Madura. In the first century AD, Pliny recorded that Neacinda in the Pamba valley was in the domain of the Pandyans.

Musiri was subjected to attacks from the pirates of Nitrias, and the attack of the Nitrians must have been in reprisal for the conquest of Musiri by the Chera king. The Tamil work Agom 2 says that Utian Cheral was the first Chera king whose territory is said to have extended up to the Western sea. In the Tamil work Agom 149, there is a statement that a Pandya king invaded Kodungallur of the Cheras with a large elephant force. This means that the Pandyan kingdom extended up to the river Periyar at that time. The much-quoted Akan Anooru poem 149 mentions that the well-built crafts of the Yavanas, or Yona, came on the Periyar. The PuRan Anooru 57 poem mentions that a Pandya Vantan besieged the port of Muciri (Muziris), and PuRan Anooru 343 graphically describes the backwater scenario around Muciri. Patirrupattu 55.4 mentions the 'Bantar', where the ornaments that came through the sea were stored.

Many a literary reference can be found on Chendamangalam. In the 'Kokilasandesa' of Uddanda Sastrigal, the place is referred to as 'Jayantamangalam', which may have been the Sanskrit version of Chendamangalam. In the 'Kokilasandesa', a love message is sent by the hero from Kanchi through a koel to the heroine, who resides at Chendamangalam. The place has been noted for its opulence and its temple of Vishnu.
Another reference to the Vishnu temple can be seen in the Vishnu Vilasa Mahakavya by Ramapanivada, who wrote it under the patronage of the Paliathachan Ramakubera. Muziris was the 'first emporium of India' for the Romans, where the ships of the Yavanas arrived in large numbers and took back pepper and other products in exchange for gold. Evidence from a papyrus in the Vienna museum speaks of a trade agreement between a trader from Muziris and a trader from Alexandria. All these references indicate that a substantial amount of trade flourished between India and the Greco-Roman world, passing through Muziris.

The Paliam family owned a good collection of manuscripts in Malayalam and Sanskrit. This later became part of the Kerala University Manuscript Library when the family partition took place. A rare and important Sanskrit drama from this collection, the 'Bhagavadajjukiyam', used to be performed in the Chendamangalam Siva temple.

The most important port cities in the early centuries of the Christian era, as seen from ancient records, were Naura of the Periplus, or Naravu of Sangham poetry, which may be modern Valapattanam. Further south was Thondi of Sangham poetry, or Tyndis of the western geographers, which must be the Kadalundi or Beypore of today. Then comes the most important port city of Malabar: Muziris of the Westerners, Musiri of Sangham poetry and Kodungallur of today.

In the first century, the country consisted of three political divisions. The author of the Periplus and Pliny (1st century AD) recorded that at that time Thondi and Muziris were under the rule of the Keralaputras, who were none other than the Cheras of Karur. From the statement of Pliny, it would appear that the Cheras, who were foreigners, had taken possession of the west coast only recently. This view is supported by the statement that Musiri was at that time subjected to attacks from the pirates of Nitrias. The Nitrias of Pliny and Nitran of Ptolemy is the modern port of Mangalapuram, at the mouth of the river Netravadi. That was the principal port of Thondi and Musiri until the Cheras took possession of them in the first century. This view is further supported by Sangham poetry.

The contests for ocean supremacy continued for centuries. The Cheras established outposts at Kodungallur and Thondi and made that part of the Malayalam country a province of the Chera country, called Kudanadu. Kudanadu means western province. Later, Kodungallur became the capital of Kudanadu, from where the Chera princes ruled until the end of the 10th century.

The Perumals (rulers) were all raised to that position after serving the Chola Empire as commanders, military governors or petty kings. The installation of the first Perumal took place in 887 AD at Thirunavai. He was Thanu Ravi, and his reign cannot be earlier than the last quarter of the 9th century. Thanu Ravi and many of his successors were Cheras. Bhaskara Ravi, who issued the Jewish copper plate from Musiri, was not a resident of that place; it is clearly stated in the document that he was only camping there when he issued it. While the Mushika and Venadu kings were subservient to Karnataka and the Pandyan rulers respectively, the central region of Malainadu, viz. Kerala, continued as a province of the Chera kingdom. Hence Malanadu got the name Kerala, and the institution of the Perumal came to be known as the Kerala Perumal.
As the Eastern Cheras were vassals of Karnataka, they were known as Kongu Cheras, and that part of the Chera country came to be known as Kongunadu. This region, which was the hinterland of Musiri and Thondi, was the cause of the importance and glory of the two port cities. With the loss of that territory, the primacy of Musiri was lost and Thondi faded into oblivion.

Once, the palace of the Perumals, the Chera rulers of Kodungallur, stood at Cheraman Parambu. Until it was included in the prestigious Muziris Heritage Project, Cheraman Parambu was a deserted place. The word Perumal means chief. The Perumals, the Chera rulers of Kodungallur, are also known as the Kulasekharas of Mahodayapuram. The title of Perumal was not hereditary, and each Perumal had a different capital. Kerala was the kingdom between the Mushika country and the Kupaka country, alias Venadu. The Kulasekharas of Mahodayapuram were the protectors of the Brahmin settlements of Malainadu, but they never enjoyed any supremacy over Venadu or the Mushika kingdom. We know of only three Kulasekharas. The first is Ravi Kulasekhara, the patron of Sankara Narayana, the author of the Laghu Bhaskareeya Vyakhya. Rama Kulasekhara, the patron of the Yamaka poet Vasudeva Bhattathiri, is another. The third is Kulasekhara Varma, the dramatist.

Kannadikas were known as Kongus in the Tamil country. Although the Cheras of Kodungallur continued their relations with the Kongu Cheras, Kerala became the weakest of the three kingdoms of Malainadu. As the Kongus were Jains, the Brahmins from Kongunadu migrated to Kerala and set up settlements up to the Pamba valley. The Cheras of Kodungallur, who patronized the Brahmin immigration, thus extended their influence up to the Pamba valley through the Brahmin settlements.

The history of Paliam should be read along with the history of Kerala, and the Muziris Heritage Project promotes the importance of the history of Paliam. During the period following the break-up of the Kulasekhara Empire in 1102 AD, Kerala lost its political unity. A number of independent Swarupams (states) rose in different parts of the country. The Perumpadappu Swarupam had its seat at Chitrakutam in the Perumpadappu village in Vanneri in Malappuram till the end of the 13th century, but its chief had a palace of his own at Mahodayapuram in Thrissur. When the Zamorin of Calicut invaded Valluvanad in the latter half of the 13th century, the Perumpadappu Swarupam abandoned the Vanneri palace and migrated to Mahodayapuram on a permanent basis. It continued to have its capital at Mahodayapuram till about 1405, when the capital was transferred to Cochin. The relationship between Paliam and Kochi existed from this earlier period: along with the Perumpadappu chief, Paliath Achan also started living in Thiruvanchikulam.

Today, the members of the Paliam family live at Chendamangalam and in many other parts of India and abroad as well. The rich and historic tradition of the family keeps them close together even today. The fact that the Paliathachans held the position of prime minister in the erstwhile Cochin State in Kerala for more than 150 years attests to their historical standing, and for the same reason Paliam holds a significant position in the Muziris Heritage Project. Ms M. Radhadevi, retired professor of Maharajas College, Ernakulam, in Kerala and a member of the Paliam family, has contributed information on the family in the 'Paliam Info'.
She writes in detail about the three eminent 'Achans' - Komi Achan I, Komi Achan II and Govindan Achan - who were the three most remarkable figures in the history of Paliam. Komi Achan I is supposed to have gone to Colombo seeking Dutch help and signed a treaty with them, thus marking the beginning of a long Paliam-Dutch friendship. Komi Achan II was a daring adventurer and is believed to have mastered many languages and the use of weapons. Govindan Achan, well known as Govindan Valiachan, was the last to hold the office of prime minister. It was he who retrieved the lost picture of the Virgin Mary and permitted the islanders to install it at Vallarpadom, Kochi. Until recently, the practice of keeping alight the 'kedavilakku' donated by the Achan to the Vallarpadom church, with oil taken from Paliam, continued.

Roman trade at Muziris had special peculiarities. During the first centuries, Roman trade was carried on by extraordinarily big vessels. Such unusual size was required by the volume and weight of the pepper imported from the Malabar Coast. Muziris flourished on the ships coming from Roman Egypt; ships bound for Muziris sailed according to a fixed timetable, already traditional by AD 51. The famous 'Muziris papyrus' is a loan contract. In the first decades of Indo-Roman trade at Muziris, in order to define legal responsibility exactly in case of shipwreck, maritime loan contracts for Muziris must have explicitly specified that the borrower would leave India by a particular date. The value of the cargo of a very big vessel sailing back from Muziris could thus be enormous.

The Kodungallur Gurukulam was an inexhaustible mine of knowledge; it could very well be called the first university of Kerala. In any discussion of the importance of the Muziris Heritage Project, the greatness of this noble institution has a prominent place. The Kodungallur Gurukulam was a centre of excellence as far as scholarship was concerned. Grammar, sculpture, Vedanta, astronomy and the medical sciences were some of the subjects handled very efficiently here. Scholars like Vidwan Kunjirama Varma, Kochunni Thampuran and Kunjan Thampuran were among the teachers who gave training here. The students had the freedom to opt for the subjects of their choice. In those days, a poets' association known under the famous label 'Kodungallur Kalari' was in operation, with the Kovilakam as its centre of functioning.

The Kodungallur in the Muziris heritage region echoes with tales of its illustrious past. The Kodungallur royal family (Kovilakam) produced a new school of poetry. The most towering figure among the poets of the Kodungallur Kovilakam was Kunjikuttan Thampuran, one of the most talented writers in Malayalam literature. The Kovilakam always had gifted members, and they made their imprint on the literary history of Kerala. The Kodungallur Kovilakam was known as the 'Nalanda of Kerala'; it was an abode of preceptors and had a great tradition of imparting knowledge in different fields such as literature, science and art. Even foreign scholars were residents of the Kovilakam. The knowledge-sharing tradition of the Kovilakam from the period of Kodungallur Vidwan Ilaya Thampuran (1800-1851) is well known. His disciple, Kumbhakonam Krishna Sastrikal, later became a great grammarian. Valiya Thampuran and Ilayala Thampuran were exponents of astrology. Goda Varma Thampuran was another famous member of the Kovilakam. Vidwan Kunjirama Varma Thampuran (1850-1917) was a poet and grammarian.
Kochunni Thampuran was an exponent of astrology and architecture, and Kunjan Thampuran was an expert in dialectics. Cheriya Kochunni Thampuran was a poet, and Bhatta Sree Goda Varma Thampuran was an expert in legal science. All these members upheld the glory of the Kodungallur Kovilakam from time to time.

The study of Jewish settlements is an integral part of the Muziris Heritage Project. Jewish immigration to Kerala was the direct effect of the early commercial contacts with Israel. According to tradition, some 10,000 Jews came to the Kerala coast in 68 AD in order to escape religious persecution at home. They landed first at Muziris and founded a settlement. The Jews developed into a prosperous business community with the generous patronage of the native rulers. They enjoyed a high standing in society till the arrival of the Portuguese, who persecuted them and compelled them to leave Kodungallur temporarily for Kochi in 1565. The Jewish community became a force to be reckoned with in the social, economic and political life of Kerala. Apart from the fact that the services of the Jews were necessary for the economic development of the Chera kingdom, and particularly for the commercial prosperity of Muziris, their unstinted support and cooperation had become an imperative need, in the face of the Cholas, for the territorial integrity and independence of the Chera kingdom.

Apart from its historic, cultural and aesthetic importance, the Jewish synagogue at Chendamangalam also has great potential as a tourist destination. At Muziris, trade and religion grew together. Jews had settled in the Paravur and Kodungallur regions, and though they have all but faded away, both the market in Paravur and the two synagogues still exist. The lives of the Jews and the monuments that tell their history have an important place in the Muziris Heritage Project. Though there are no Jews living in this area now, the region was sacred to them. The Hebrew tombstone inscription of Sarah Bat Israel stands in front of the Chendamangalam Jewish synagogue. The stone inscription, from 1269 CE, was erected on a concrete column with an additional slab that says the Government of Kochi erected it in 1936. The epigraph in Hebrew says, 'here rests Sarah Bat Israel, who died and joined her creator on (day) (month) and (year)'. It is part of Jewish custom to erect a stone inscribed in Hebrew, with the Hebrew date, on a dead person's tomb. The Jewish cemetery, situated a little away from the synagogue on the hillock of Kottayil Kovilakam and near the Sree Krishna temple, the Muslim mosque and the Christian church, is ample evidence of the religious harmony of the region.
Fainting is a sudden, temporary loss of consciousness that usually results in a fall. Healthcare professionals often use the term "syncope" when referring to fainting, because it distinguishes fainting from other causes of temporary unconsciousness, such as seizures (fits) or concussion. In most cases, when a person faints, they'll regain consciousness within a minute or two. However, less common types of fainting can be medical emergencies. Dial 999 to request an ambulance if a person who has fainted doesn't regain consciousness within two minutes.

What causes a faint?

To function properly, the brain relies on oxygen that's carried in the blood. Fainting can occur when the blood flow to the brain is reduced. Your body usually corrects reduced blood flow to the brain quickly, but it can make you feel odd, sweaty and dizzy. If it lasts long enough, you may faint. There are various reasons for a reduction in blood flow to the brain, but it's usually related to a temporary malfunction in the autonomic nervous system. This is the part of your nervous system that regulates the body's automatic functions, including heartbeat and blood pressure. This type of fainting is called "neurally mediated syncope". Neurally mediated syncope can be triggered by emotional stress, pain or prolonged standing. It can also be caused by physical processes such as coughing, sneezing or laughing.

What to do if you or someone else faints

If you feel you're about to faint, lie down, preferably in a position where your head is low and your legs are raised. This will encourage blood flow to your brain. If it's not possible to lie down, sit with your head between your knees. If you think someone is about to faint, you should help them to lie down or sit with their head between their knees.

If a person faints and doesn't regain consciousness within one or two minutes, put them into the recovery position. To do this:
- place them on their side so they're supported by one leg and one arm
- open their airway by tilting their head back and lifting their chin
- monitor their breathing and pulse continuously

You should then dial 999, ask for an ambulance and stay with the person until medical help arrives.

Treatment for fainting

In most cases, a person will return to normal within a few minutes of fainting, and no further treatment will be needed. If a person experiences repeated episodes of fainting, it's important for a healthcare professional to investigate the cause. Treatment for fainting will depend on the type you're experiencing. In many cases of neurally mediated syncope, no further treatment is needed. If you've had a fainting episode, advice to deal with possible future fainting episodes includes:
- avoiding triggers – such as hot and crowded environments, or emotional stress
- spotting the warning signs – such as feeling lightheaded
- lying down to increase blood flow to the brain

Symptoms of fainting

When you faint, you'll feel weak and unsteady before passing out for a short period of time, usually only a few seconds. Fainting can occur when you're sitting, standing, or when you get up too quickly. You may not experience any warning symptoms before losing consciousness, and if you do it may only be for a few seconds. You may experience the following symptoms just before losing consciousness:
- a sudden, clammy sweat
- nausea (feeling sick)
- fast, deep breathing
- blurred vision or spots in front of your eyes
- ringing in your ears

This will usually be followed by a loss of strength and consciousness. When you collapse to the ground, your head and heart are on the same level. This means that your heart doesn't have to work as hard to push blood up to your brain.
You should return to consciousness after about 20 seconds. Dial 999 and ask for an ambulance if someone faints and doesn't regain consciousness within two minutes. After fainting, you may feel confused and weak for about 20-30 minutes. You may also feel tired and not be able to remember what you were doing just before you fainted.

Fainting or stroke?

Fainting can sometimes be mistaken for a serious medical condition, such as a stroke. A stroke is a medical emergency that occurs when blood supply to the brain is interrupted. You should dial 999 immediately and ask for an ambulance if you think that you or someone else is having a stroke. The main symptoms of stroke can be remembered with the word FAST, which stands for Face-Arms-Speech-Time:
- Face – the face may have fallen on one side, the person may not be able to smile, or their mouth or eye may have drooped.
- Arms – the person may not be able to raise both arms and keep them there, due to weakness or numbness.
- Speech – the person may have slurred speech.
- Time – it's time to dial 999 immediately if you see any of these signs or symptoms.

You should also dial 999 and ask for an ambulance if someone faints and doesn't regain consciousness after a minute or two.

Causes of fainting

Fainting (syncope) is caused by a temporary reduction in blood flow to the brain. Blood flow to the brain can be interrupted for a number of reasons. The different causes of fainting are explained below.

Autonomic nervous system malfunction

Fainting is most commonly caused by a temporary malfunction in the autonomic nervous system. This type of fainting is sometimes known as neurally mediated syncope. The autonomic nervous system is made up of the brain, nerves and spinal cord. It regulates automatic bodily functions, such as heart rate and blood pressure.

An external trigger – such as an unpleasant sight, heat or sudden pain – can temporarily cause the autonomic nervous system to stop working properly, resulting in a fall in blood pressure and fainting. It may also cause your heartbeat to slow down or pause for a few seconds, resulting in a temporary interruption to the brain's blood supply. This is called vasovagal syncope.

Coughing, sneezing or laughing can sometimes place a sudden strain on the autonomic nervous system, which can also cause you to faint. This is called situational syncope.

The autonomic nervous system can also sometimes respond abnormally to upright posture. Normally, when you sit or stand up, gravity pulls some of your blood down into your trunk (torso) and your hands and feet. In response, your blood vessels narrow and your heart rate increases slightly to maintain blood flow to the heart and brain, and prevent your blood pressure dropping. This results in a slight increase in blood pressure. However, occasionally, standing or sitting upright can interrupt the blood supply to the heart and brain. To compensate, the heart races and the body produces noradrenaline (the "fight or flight" hormone). This is known as postural tachycardia syndrome (PoTS), and it can result in symptoms such as dizziness, nausea, sweating, palpitations and fainting.

Low blood pressure

Fainting can also be caused by a fall in blood pressure when you stand up. This is called orthostatic hypotension, and it tends to affect older people, particularly those aged over 65. It's a common cause of falls in older people.
When you stand up after sitting or lying down, gravity pulls blood down into your legs, which reduces your blood pressure. The nervous system usually counteracts this by making your heart beat faster and narrowing your blood vessels. This stabilises your blood pressure. However, in cases of orthostatic hypotension, this doesn't happen, leading to the brain's blood supply being interrupted and causing you to faint.

Possible triggers of orthostatic hypotension include:
- dehydration – if you're dehydrated, the amount of fluid in your blood will be reduced and your blood pressure will decrease; this makes it harder for your nervous system to stabilise your blood pressure and increases your risk of fainting
- diabetes – uncontrolled diabetes makes you urinate frequently, which can lead to dehydration; excess blood sugar levels can also damage the nerves that help regulate blood pressure
- medication – any medication for high blood pressure, and any antidepressant, can cause orthostatic hypotension
- neurological conditions – conditions that affect the nervous system, such as Parkinson's disease, can trigger orthostatic hypotension in some people

Heart problems can also interrupt the brain's blood supply and cause fainting. This type of fainting is called cardiac syncope. The risk of developing cardiac syncope increases with age. You're also at increased risk if you have:
- narrowed or blocked blood vessels to the heart (coronary heart disease)
- chest pain (angina)
- had a heart attack in the past
- weakened heart chambers (ventricular dysfunction)
- structural problems with the muscles of the heart (cardiomyopathy)
- an abnormal electrocardiogram (a test used to check for abnormal heart rhythms)
- repeated episodes of fainting that come on suddenly without warning

See your GP as soon as possible if you think your fainting is related to a heart problem.

Reflex anoxic seizures (RAS)

Reflex anoxic seizures (RAS) are a type of fainting that occurs when the heart briefly pauses due to excessive activity of the vagus nerve. The vagus nerve is one of 12 nerves in your head. It runs down the side of your head, passes through your neck, and into your chest and abdomen. RAS tends to be more common in small children and often occurs when they're upset. The website of the Syncope Trust And Reflex anoxic Seizures (STARS) has more information about RAS.

In some cases of fainting, you'll need to see a healthcare professional after the fainting episode to investigate whether there's an underlying health condition. Your GP will be able to diagnose the cause and determine whether further investigation and treatment are needed.

When to see your GP

Most cases of fainting aren't a cause for concern and don't require treatment, but you should see your GP if you're at all concerned. You should also see your GP after fainting if you:
- have no previous history of fainting
- experience repeated episodes of fainting
- injure yourself during a faint
- have diabetes – a lifelong condition that causes your blood glucose level to become too high
- have a history of heart disease – where your heart's blood supply is blocked or interrupted
- experienced chest pains, an irregular heartbeat or a pounding heartbeat before you lost consciousness
- experienced a loss of bladder or bowel control
- took longer than a few minutes to regain consciousness

During an assessment after a fainting episode, your GP will ask about the circumstances surrounding your fainting episodes and your recent medical history.
They may also measure your blood pressure and listen to your heartbeat using a stethoscope. If your GP thinks your fainting episode may have been caused by a heart problem, they may suggest that you have an electrocardiogram (ECG). An ECG records your heart's rhythm and electrical activity. A number of small, sticky patches called electrodes are placed on your arms, legs and chest. Wires connect the electrodes to an ECG machine. Every time your heart beats, it produces tiny electrical signals. The ECG machine traces these signals on paper, recording any abnormalities in your heartbeat. An ECG is usually carried out at a hospital or GP surgery. The procedure is painless and takes about five minutes.

Carotid sinus test

If your GP thinks that your fainting episode was associated with carotid sinus syndrome, they may massage your carotid sinus to see whether it makes you feel faint or lightheaded. The carotid sinus is a collection of sensors in the carotid artery, which is the main artery in your neck that supplies blood to your brain. If the carotid sinus massage causes symptoms, it may indicate that you have carotid sinus syndrome (see causes of fainting for more information).

Blood tests may be carried out to rule out conditions such as diabetes or anaemia (a condition where the body doesn't produce enough oxygen-rich red blood cells). Your GP may measure your blood pressure while you're lying down, and again after you stand up. You may have orthostatic hypotension if your blood pressure falls after you stand up. If you have orthostatic hypotension, you may be asked further questions to help determine the cause. For example, it can sometimes occur as a side effect of taking some medications. If tests reveal an underlying cause of your fainting, such as a heart problem or orthostatic hypotension, your GP may recommend treatment for fainting.

Treatment for fainting (syncope) will depend on the type of fainting you experienced and whether there's an underlying cause. There are steps you should take if you think that you or someone around you is about to faint, and if someone has fainted.

If someone has fainted

If a person faints and doesn't regain consciousness within two minutes, put them into the recovery position. To do this:
- place them on their side so they're supported by one leg and one arm
- open their airway by tilting their head back and lifting their chin
- monitor their breathing and pulse continuously

After putting the person in the recovery position, dial 999, ask for an ambulance and stay with them until medical help arrives.

If you or someone else is about to faint

If you know or suspect that you're going to faint, lie down, preferably in a position where your head is low and your legs are raised. This will encourage blood flow to the brain. If it isn't possible to lie down, sit with your head between your knees. If you think that someone else is about to faint, you should help them to lie down or sit with their head between their knees.

Treating the underlying cause

When you visit the GP after a fainting episode, they'll investigate the type of fainting you experienced and whether there's an underlying cause. If an underlying cause is found, treating it should help to prevent further fainting episodes. For example, if you're diagnosed with type 2 diabetes, you may be advised to take regular exercise and eat a healthy, balanced diet to help control the condition. If you're diagnosed with a heart condition, you may need further tests and treatment.
For example, several different medicines can be used to treat heart disease (where your heart's blood supply is blocked by a build-up of fatty substances in the main blood vessels).

Treating fainting associated with the nervous system

Most fainting episodes are associated with a temporary malfunction of the autonomic nervous system, which regulates the body's automatic functions, such as heartbeat and blood pressure. This type of fainting is called neurally mediated syncope. Treatment for neurally mediated syncope involves avoiding any possible triggers. If you're not sure what caused your fainting episode, your GP may suggest keeping a diary of any symptoms you experience and making a note of what you were doing at the time you fainted, to help identify possible causes. There are also steps you can take to avoid losing consciousness if you think that you may be about to faint (see above).

Fainting associated with an external trigger

Fainting can occur when an external trigger, such as a stressful situation, causes a temporary malfunction in your autonomic nervous system. This is called vasovagal syncope. In most cases of vasovagal syncope, further treatment isn't required. However, you may find it useful to avoid potential triggers, such as stress or excitement, hot and stuffy environments, and long periods spent standing. If you know that injections or medical procedures, such as blood tests, make you feel faint, you should tell the doctor or nurse beforehand. They'll make sure you're lying down during the procedure.

Fainting associated with bodily functions

Fainting can occur when a bodily function or activity – such as coughing – places a sudden strain on the autonomic nervous system. This is called situational syncope. There's no specific treatment for situational syncope, but avoiding the triggers may help. For example, if coughing caused you to faint, you may be able to suppress your urge to cough and therefore avoid fainting.

Carotid sinus syndrome

Carotid sinus syndrome is where pressure on your carotid sinus causes you to faint. Your carotid sinus is a collection of sensors in the carotid artery, which is the main artery in your neck that supplies blood to your brain. You can avoid fainting by not putting any pressure on your carotid sinus – for example, by not wearing shirts with tight collars. In some people, carotid sinus syndrome can be treated by having a pacemaker fitted. A pacemaker is a small electrical device that's implanted in your chest to help keep your heart beating regularly.

Treating fainting associated with low blood pressure

Fainting can occur when your blood pressure drops as you stand up. This drop in blood pressure is called orthostatic hypotension. Avoiding anything that lowers your blood pressure should help prevent fainting. For example, avoid becoming dehydrated by increasing your fluid intake. Your GP may also advise you to eat small, frequent meals, rather than large ones, and to increase your salt intake. Taking certain medications can also decrease blood pressure. However, don't stop taking a prescribed medication unless your GP or another qualified healthcare professional in charge of your care advises you to do so.

Physical counterpressure manoeuvres

Physical counterpressure manoeuvres are movements designed to raise your blood pressure and prevent you losing consciousness. One study found that training in physical counterpressure manoeuvres can reduce fainting in some people.
Physical counterpressure manoeuvres include:
- crossing your legs
- clenching the muscles in your lower body
- squeezing your hands into a fist
- tensing your arm muscles

You need to be trained in how to carry out these movements correctly. You can then carry them out if you experience any symptoms that suggest you're about to faint, such as feeling lightheaded.

Several different medications have been tested for the treatment of fainting. However, the guidelines for diagnosing and treating fainting published by the European Society of Cardiology found that most medications had disappointing results.

If you've fainted, it could affect your ability to drive. Depending on what caused you to faint, and whether you have any underlying health conditions, you may need to inform the Driver and Vehicle Licensing Agency (DVLA). It's your legal obligation to inform the DVLA about a medical condition that could affect your driving ability. The GOV.UK website has more information about blackouts, fainting and driving.

Safety at work

If you've fainted, it may affect your safety at work or the safety of others. For example, continuing to operate machinery may be dangerous if it's likely that you'll faint again. The healthcare professionals who diagnose and treat your condition can tell you whether it's likely to affect your work. If it is, speak to your health and safety representative.
According to recent distracted driving statistics from the National Highway Traffic Safety Administration and the National Safety Council, distracted driving is to blame for more than a quarter of all car accidents. Something preventable is to blame for a quarter of all motor vehicle crashes. Furthermore, many driving safety experts feel the proportion is considerably higher, because it is impossible to verify whether a driver was distracted before an accident.

Is there any way to keep track of ridesharing accident facts, given that Uber, Lyft, and other rideshare drivers are independent contractors? One might expect the local Public Utilities Commission (PUC) to oversee the ridesharing business, but regrettably, most of the detailed information is kept hidden from the public. Here's all you need to know about rideshare statistics, particularly those for Uber and Lyft.

The University of Chicago Booth School of Business has published an in-depth study examining the rise in popularity of ridesharing applications and ride-hailing services, as well as the rise in the number of motor vehicle accidents and fatalities that have resulted. According to the study, rideshare services such as Uber and Lyft have resulted in a 3% rise in overall car accident fatalities. Researchers calculated this figure by looking at the popularity of rideshare usage between 2010 and 2016, assessing traffic volume, preferred modes of transportation, and the total number of car accidents in the eight financial quarters before and after both businesses' rollout dates. The University of Chicago's rideshare accident analysis incorporated data from the National Highway Traffic Safety Administration (NHTSA) and compared accident rates in key cities where ridesharing apps were initially introduced, relative to the total number of vehicle miles traveled.

Below are some rideshare accident statistics:
- The increasing number of rideshare vehicles on the road is responsible for about 1,000 daily car accidents.
- Between 2017 and 2018, Uber vehicles were involved in 97 fatal crashes. A total of 107 people died as a result of these collisions.
- Riders made up 21% of the crash victims, drivers another 21%, and third-party drivers or passengers the remaining 58%.

The rideshare accident report concludes that whether you're a rideshare driver, a passenger, or traveling in a third-party vehicle, you're more likely to be involved in a serious or fatal crash. While ridesharing applications are convenient and necessary modes of transportation for many people, their growing popularity has made our roadways more dangerous by increasing the number of fatalities caused by Uber and Lyft accidents. The rise in rideshare accidents appears to be continuing (and in some cases increasing) over time, and the accident rate has stayed consistent across weekdays, weeknights, weekend days, and weekend nights.

With 32,885 fatalities reported in 2010, the number of road deaths in the United States was at its lowest level since 1949. In 2016, the number of people killed on the roads jumped to nearly 37,400. Accidents at drop-off and pickup locations are also on the rise. Increased use of ridesharing apps is linked to more cars on the road, which leads to increased accidents, injuries, and fatalities involving all sorts of travelers, including drivers, passengers, bicyclists, and pedestrians. A toy sketch of the before-and-after comparison behind the 3% figure follows.
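To make the study design described above concrete, the sketch below compares a city's fatality rate per vehicle mile traveled in the eight quarters before and after a hypothetical rideshare rollout. All numbers are invented for illustration; this is not the University of Chicago team's data or code.

```python
# Toy before/after comparison of a fatality rate around a rideshare
# rollout. Figures are invented for illustration only.
import pandas as pd

quarters = pd.DataFrame({
    "post_rollout": [0] * 8 + [1] * 8,   # 8 quarters pre, 8 quarters post
    "fatalities": [310, 305, 298, 312, 301, 295, 308, 299,
                   315, 318, 309, 322, 317, 325, 319, 328],
    "vmt_millions": [980, 975, 990, 985, 992, 988, 995, 991,
                     1005, 1010, 1002, 1015, 1008, 1020, 1012, 1025],
})

# Normalize by vehicle miles traveled, as the study did.
quarters["rate"] = quarters["fatalities"] / quarters["vmt_millions"]
pre = quarters.loc[quarters["post_rollout"] == 0, "rate"].mean()
post = quarters.loc[quarters["post_rollout"] == 1, "rate"].mean()
print(f"Fatality rate change after rollout: {100 * (post / pre - 1):+.1f}%")
```

The actual study used far more elaborate controls (traffic volume, transport mode shares, city-level comparisons), but the core idea is this pre/post contrast in a mileage-normalized rate.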
The study also points out that the rise in rideshare accidents is attributable to Uber and Lyft drivers simply spending more time on the roads, rather than to more rides being hailed by app users. It's crucial to remember that rideshare drivers don't spend all of their time transporting passengers from one location to another. Approximately 40% to 60% of the time, rideshare drivers are either traveling to pick up passengers or looking for new ones (waiting for a rider to purchase a trip). Furthermore, rideshare customers in major cities are more likely to be involved in accidents, with 90% of Uber accidents occurring in urban areas.

Uber is now a multibillion-dollar company. It's a worldwide force that's grown at a breakneck pace since its start less than a decade ago. Thousands of Uber drivers now transport hundreds of thousands of clients each year. Uber drivers spend a lot more time on the road than regular drivers, and as a result they are more likely to be involved in collisions. Furthermore, Uber drivers might be involved in accidents for the same reasons as other drivers: carelessness, negligence, recklessness, and illegal behavior.

Here are a few noteworthy Uber statistics:
- At night, Uber drivers travel the most. According to the Uber Newsroom, the most popular time to request an Uber is Saturday from 11 p.m. to midnight. Due to poor illumination, drowsy driving, and drivers under the influence of drugs and alcohol, accidents are more likely to occur in the evenings in Los Angeles.
- Uber has a global user base of over eight million people. With so many people using the ride-hailing service, it's no wonder that Uber accidents occur regularly. Due to the packed highways in LA, there is a heightened risk of accidents such as rear-end collisions and pedestrian accidents.

Additionally, Uber is not regulated by the government in the same way that taxi firms are. The corporation can set its own hiring, training, and safety regulations for its drivers. This may result in unqualified and potentially dangerous drivers picking up passengers. The company's liability release may limit your legal options if you get into an Uber accident.

Rideshare collisions follow a different set of laws than traditional collisions. Uber drivers in California are self-employed contractors. Although this would ordinarily exempt the corporation from culpability in driver-caused accidents because drivers are not employees, some complex lawsuits involving this topic have resulted in the courts holding the company liable for certain incidents. It is wise to hire a good lawyer after an Uber or Lyft car accident.

Every day on LA roads, Uber accidents occur. Here are a few examples you may have heard about in the news. The rideshare driver in a drunk driving accident in Los Angeles was arrested and charged with manslaughter. Well, at the very least, that was the headline in the news. What you didn't notice was that the victim in this story was a 20-year-old female Uber customer who died of her injuries after being transported to a nearby hospital. Unfortunately, this isn't the first time a drunk Uber or Lyft driver has caused a tragic ridesharing accident in Los Angeles. When a drunk driver jumped a red light and slammed into their vehicle, an Uber driver was killed and her passenger critically injured. Following the tragic Uber accident, the driver was charged with murder, while the surviving victim and her family fought to rebuild their lives.
In an eerily similar situation, five passengers were hurt in a Los Angeles Uber accident after a car rammed into an SUV. The SUV, which was carrying Uber passengers, flipped over, throwing everyone inside around and injuring them. At the time of the accident, the car's driver was suspected of being under the influence of alcohol.

Alcohol isn't always the cause, though. In one example, a foreign exchange student was killed in an Uber accident after an SUV collided with the vehicle she was riding in. That SUV is said to have run a red light. Was the driver going too fast? Were they distracted by the Uber app? It doesn't really matter to the victim's family in the end.

Uber has been criticized numerous times for the way it handles accidents on California highways. According to new research, rideshare providers may be lying to the public about accidents in order to make Californians believe it is safe to use their service. You should be aware that the law in California governing rideshare services is continuously changing, as lawmakers try to bring the law up to date.

Here's something to keep in mind concerning Uber accidents: Uber is hoping that you will either not file a claim or will take a low-ball settlement. This could explain why the company's claims procedure is so slow and frustrating. Insurance inquiries and claims after an Uber accident can be confusing. Despite the company holding $1 million in liability insurance, its policy leaves an open question about who is responsible in the event of a collision. Uber's policy terms include a clause indicating that passengers use the ridesharing platform at their own risk. According to the terms and conditions, the corporation is not liable for the safety of its drivers or the quality of its services. This language is in direct conflict with the company's insurance policy, and several lawsuits have been filed against the company for failure to pay for driver and passenger damages as a result of this inconsistency.

After a simple car accident, passengers can often quickly register a claim with Uber's insurance provider and obtain a payment offer. This isn't always the case, though. The corporation may disclaim liability for your accident, leaving you to submit a claim with the driver's insurance or your own. If Uber's insurance coverage is insufficient to cover your losses, a third-party insurer may be able to cover the remaining costs. After an Uber accident, it's a good idea to talk to a personal injury attorney about your chances of pursuing a personal injury claim against the liable party. In many cases, a personal injury claim in San Francisco is worth more than a Lyft passenger's wrongful death insurance claim.

Safety issues affecting women and other more vulnerable members of our society are something for which the popular rideshare app has become known, and Uber's incident history includes more than just car accidents. According to Who's Driving You, there have been 222 reported Uber/Lyft-related attacks and harassment incidents, 60 alleged assaults by drivers, and ten alleged kidnappings. Furthermore, there have been 26 deaths related to negligent and criminal Uber and Lyft drivers.
Uber drivers in Los Angeles have been accused of things like:
- Dragging passengers out of the vehicle
- Taking the phones of travelers and rideshare users
- Threatening passengers with violence
- Pointing knives at passengers
- Sexually assaulting passengers
- Driving for ridesharing companies while intoxicated
- Pretending to be an Uber driver
- Crashing into a service station, causing deaths and injuries from the resulting fire and explosion
In incidents of sexual and bodily assault involving an Uber driver, criminal and civil charges may be filed. You can file a personal injury claim against the suspected offender at the same time that a criminal investigation is underway. Both sorts of lawsuits have the potential to provide maximum compensation and justice for your pain and suffering. A joint study conducted back in 2018 by the University of Chicago and Rice University concluded that Uber has increased the number of traffic deaths since it began carrying passengers under U.S. ridesharing laws. Uber accidents make LA streets less safe and put local drivers in danger. Here are a few of the most common causes of Uber accidents:
- Speeding – Speed is the leading cause of death on American roads and the leading contributor to the majority of vehicle accidents in Los Angeles. Rideshare drivers have a financial incentive to drive as quickly as possible: the more passengers they serve, the more money they make, whether they are professionals or novices driving the same route.
- Failure to Stop – Uber drivers occasionally break the rules of the road by rolling through stop signs and failing to come to a complete stop. This type of behavior raises the likelihood of a collision.
- Unfamiliar Roads – Uber attracts drivers from all over the world who aren't used to driving on LA city streets. These drivers may drive erratically, increasing the chances of a collision.
- Distracted Driving – To interact with the ridesharing app and accept rides, a rideshare driver must glance down at their phone. Uber distracted-driving accidents in Los Angeles may result.
- Driver Fatigue – An Uber driver who is sleepy or overworked is a road hazard. Sleep deprivation puts drivers at a higher risk of fatigue in the late evening and early morning hours.
- New Threat: Self-Driving Cars – Prepare for another Uber hazard on California's highways. Uber's self-driving cars have been involved in multiple high-profile accidents. With the guidance of an Uber accident attorney, you can fight for compensation if you are hurt by an autonomous vehicle.
- Driving Under the Influence – The CPUC levied fines against Uber totaling $7.6 million. The commission also settled with Lyft for $30,000 as a penalty for failing to report annual safety data, and fined Uber a further $750,000 for failing to comply with zero-tolerance policies on driving under the influence of alcohol or marijuana. Lyft reached a $500,000 settlement with the cities of San Francisco and Los Angeles for misleading the public over driver-screening issues, and Uber paid the state over $217 million for privacy breaches and safety lapses from 2014 to 2019.
California approved Assembly Bill 2293 in 2015. AB 2293 mandates that rideshare companies and drivers carry liability insurance: ridesharing businesses must provide a minimum of $1 million in liability coverage to protect drivers from the time they accept a passenger match until the passenger exits the car.
However, the legislation currently does not require ridesharing operators to carry collision or comprehensive insurance. Insurance plays an important part in Uber accident claims in Los Angeles. As a passenger, you can't always count on your own auto insurance to cover you; it's possible that you'll have to file a claim with Uber and the driver's Uber insurance company. California law mandates that rideshare companies carry $1 million in liability insurance in the event of an accident, and Uber's partner program provides its drivers coverage from major insurance companies such as Allstate, Liberty Mutual, Progressive, and others. Following a covered accident, a passenger may be eligible for compensation for their injuries and other damages. Whether or not your Uber accident is covered by an insurance company depends on which of three periods the Uber driver was in at the time of the accident:
- Period 1: The driver is not available or the app has been turned off. During this time, the driver's personal insurance policy, not Uber insurance, is in effect.
- Period 2: The driver is available or is awaiting a request for a ride. If a driver in this period does not have applicable personal auto insurance, Uber provides third-party liability coverage: $100,000 in bodily injury per accident, $50,000 in bodily injury per person, and $25,000 in property damage per accident.
- Period 3: The driver is on the way to pick up a passenger or is currently carrying one. Here Uber maintains $1 million in third-party liability coverage, plus contingent comprehensive and collision coverage up to the cash value of the automobile and uninsured/underinsured motorist bodily injury coverage.
Many victims of Uber accidents in Los Angeles come to Ehline Law Firm looking for a way to pay costly medical bills, recoup lost wages, and reclaim a feeling of normality in their lives. Some of the most common injuries we've encountered are:
- Traumatic brain injuries, which have long-term consequences
- Whiplash affecting the neck and spine
- Knee and foot injuries with deep wounds and scars
- Hand and wrist injuries from throwing your hands out to protect yourself
- Ligament tears and connective tissue injuries
- Broken or shattered bones
Do you have injuries similar to these? Contact our experts as soon as possible if you've been seriously injured in a Los Angeles Uber accident. It's now or never to get the money you need to rest, recover, and move on with your life. Essentially, the amount of your Uber accident payout is determined by the circumstances of your accident. The most important variables are who was at fault for the accident (if your Uber driver was to blame for the crash and your injuries, you may be able to negotiate a higher settlement) and the severity of your injuries: Uber accidents that result in more critical injuries and other losses for the passenger, such as lost wages, are likely to result in higher payouts. When a Lyft driver is involved in a car accident, the driver is virtually always blamed. Even though rideshare companies like Lyft consider their drivers to be independent contractors, they can still be held liable in court for the accidents their drivers cause. Both Lyft and Uber now offer an insurance policy that protects drivers and passengers in the case of a rideshare accident; it compensates people for medical expenses, injuries, and fatalities.
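To make the three coverage periods above concrete, here is a minimal Python sketch mapping a driver's app status to the coverage described; the figures come from this article, while the function and field names are hypothetical illustrations, not Uber's actual claims logic.

```python
def applicable_coverage(period: int) -> dict:
    """Map an Uber driver's app status ("period") to the insurance tier
    described above. Illustrative only -- not Uber's actual claims logic."""
    if period == 1:  # App off or driver unavailable
        return {"covered_by": "driver's personal auto policy"}
    if period == 2:  # App on, awaiting a ride request
        return {
            "covered_by": "Uber contingent third-party liability",
            "bodily_injury_per_person": 50_000,
            "bodily_injury_per_accident": 100_000,
            "property_damage_per_accident": 25_000,
        }
    if period == 3:  # En route to a pickup, or carrying a passenger
        return {
            "covered_by": "Uber commercial policy",
            "third_party_liability": 1_000_000,
            "extras": [
                "contingent comprehensive/collision (up to cash value of car)",
                "uninsured/underinsured motorist bodily injury",
            ],
        }
    raise ValueError("period must be 1, 2, or 3")

# An injured passenger is, by definition, in Period 3:
print(applicable_coverage(3)["third_party_liability"])  # 1000000
```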
As a passenger, you should notify the insurance company of the collision and seek compensation for your injuries. While Lyft has an insurance policy that protects drivers and passengers in the event of a collision with another vehicle, drivers must also carry their own car insurance; insurance firms have begun to offer ridesharing-specific auto policies. At Ehline Law Firm, we know the negligence laws and insurance requirements regulating Uber and Lyft accidents. Depending on the circumstances, you may be able to file a claim under Uber or Lyft's liability insurance or under the rideshare driver's personal motor insurance. If another driver caused or contributed to the accident, Uber and Lyft accident claims can become even more challenging to resolve. Ehline Law Firm has the knowledge and resources to handle rideshare accident claims, whatever your Uber or Lyft accident circumstances, and we have a track record of obtaining significant settlements and jury awards on behalf of our clients. A free consultation with an attorney experienced in Uber and Lyft accidents remains vital and extremely helpful in charting a course toward full and fair financial compensation. Being the driver of a car involved in an accident is difficult enough: you'll need to communicate with the other driver, exchange insurance information, and possibly file a personal injury claim. As a rideshare platform passenger, the process is even more complex. You might not know who is to blame for your injuries or what your legal options are under California law. If you've been injured in a rideshare accident as a passenger, you'll need to know how to file an insurance claim. First, call the police to report your Uber accident. Obtain your Uber driver's insurance details, but don't expect to be in good hands until you form an attorney-client relationship with a legal representative. Once you've sought medical attention for any injuries, report the collision to Uber: select "Trip Issues and Fare Adjustments" from the app's menu, press "I was in an accident," and describe what happened. An experienced Uber representative will contact you shortly to obtain information regarding the accident. At this stage, Uber will either offer you a settlement through its insurance policy or deny your claim. Did we miss anything? Contact a rideshare injury-accident lawyer if you need more help. Ehline Law is available 24/7 at (213) 596-9642, or you can use our free online contact form today to get the justice and compensation you deserve. Don't forget to ask about our no-recovery, no-fee guarantee. Michael is a managing partner at the nationwide Ehline Law Firm, Personal Injury Attorneys, APLC. He is an inactive Marine who became a lawyer through the California State Bar Law Office Study Program, later receiving his J.D. from UWLA School of Law. Michael has won some of the world's largest motorcycle accident settlements. Check out our most recent Lyft and Uber accident and assault news blog posts about the famous, infamous, and everyday people wounded by at-fault parties around the world.
Colonial Oil Products Pipeline
This article is part of the Global Fossil Infrastructure Tracker, a project of Global Energy Monitor.
Colonial Oil Products Pipeline, headquartered in Alpharetta, Georgia, is the largest U.S. refined-products pipeline system, carrying approximately 3 million barrels per day of gasoline, diesel, and jet fuel between the U.S. Gulf Coast and the New York Harbor area. The Colonial Pipeline company was founded in 1961, and construction of the pipeline began in 1962.:p.19–20 The pipeline is 5,500 miles (8,850 km) long. Colonial had seven spills in four years in the late 20th century, three of which (1996 to 1999) caused significant environmental damage to waterways in the Southeast. The Environmental Protection Agency alleged these were the result of gross negligence and in 2000 filed a complaint against Colonial for violations of the Clean Water Act. The parties reached a settlement in 2003 that included a $34 million civil penalty against Colonial. The settlement was governed by a consent decree under which Colonial would upgrade environmental protections at a cost of $30 million. The pipeline originates in Houston, Texas, and terminates at the Port of New York and New Jersey.
- Operator: Koch Industries (a.k.a. Koch Capital Investments Company LLC, 28.09% stake), South Korea's National Pension Service and Kohlberg Kravis Roberts (a.k.a. Keats Pipeline Investors LP, 23.44% stake), Caisse de dépôt et placement du Québec (16.55% stake via CDPQ Colonial Partners LP), Royal Dutch Shell (a.k.a. Shell Pipeline Company LP, 16.12% stake), and Industry Funds Management (a.k.a. IFM (US) Colonial Pipeline 2 LLC, 15.80% stake)
- Current capacity: 3,000,000 barrels per day
- Length: 5,500 miles
- Status: Operating
- Start year: 1964
Colonial consists of more than 5,500 mi (8,900 km) of pipeline, originating at Houston, Texas, on the Gulf Coast and terminating at the Port of New York and New Jersey. The pipeline travels through the coastal states of Georgia, South Carolina, North Carolina, Virginia, Maryland, Delaware, Pennsylvania, and New Jersey; branches from the main pipeline also reach Tennessee. It delivers a daily average of 100,000,000 US gallons of gasoline, home heating oil, aviation fuel, and other refined petroleum products to communities and businesses throughout the South and Eastern United States. The main lines are 40 inches and 36 inches in diameter, with one primarily devoted to gasoline and the other carrying distillate products such as jet fuel, diesel fuel, and home heating oil. The pipeline connects directly to major airports along the system. Fifteen associated tank farms store more than 1.2 × 10⁹ (1.2 billion) US gallons of fuel and provide a 45-day supply for local communities. Products move through the main lines at a rate of about 3 to 5 miles per hour; it generally takes from 14 to 24 days for a batch to get from Houston, Texas to New York Harbor, with an average of 18.5 days.
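As a rough consistency check on the flow-rate and transit-time figures above, here is a short sketch. The Houston-to-Linden mainline distance of about 1,700 miles is my assumption; the 5,500-mile system total includes branches, stublines, and parallel loops.

```python
MAINLINE_MILES = 1_700  # ASSUMPTION: approximate Houston-to-Linden run

def transit_days(speed_mph: float, miles: float = MAINLINE_MILES) -> float:
    """Days for a batch to traverse the mainline at a given flow speed."""
    return miles / speed_mph / 24

print(f"{transit_days(5.0):.1f} days at 5 mph")  # ~14.2 days
print(f"{transit_days(3.0):.1f} days at 3 mph")  # ~23.6 days
# Both figures bracket the article's quoted 14-24 day range (18.5-day average).

# Capacity check: 3,000,000 bbl/day * 42 gal/bbl = 126,000,000 gal/day,
# consistent with the ~100,000,000 gal/day average delivery cited above.
print(3_000_000 * 42)  # 126000000
```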
Recent financial support for Colonial's operations has involved debt financing via bond issues. In October 2019, Japanese bank MUFG arranged a US$325 million bond issue for the project. This was followed in May 2020 by a US$600 million bond issue involving MUFG, Mizuho, TD Securities and Wells Fargo.
History and timeline
Eight major oil companies began discussing a Gulf Coast-to-East Coast pipeline in 1956. On June 7, 1961, Sinclair Pipeline Co., Texaco Inc., Gulf Oil Co., American Oil Co., The Pure Oil Co., Phillips Petroleum Co., The Cities Service Co. and Continental Oil Co. filed incorporation papers in Delaware to establish Suwannee Pipe Line Company "for the purpose of building a 22-inch line from Houston to the Baltimore-Washington area capable of delivering 300,000 barrels of refined products a day.":p.15 Construction of Colonial Pipeline's original system started in 1961. In February 1962, the board of the Suwannee Pipe Line Company met to rename the company. It chose Colonial Pipeline Company to reflect the number of Colonial America states the proposed pipeline would traverse from Houston, Texas to New York Harbor. Mobil joined the other eight companies in 1962.:p.16 On March 6, 1962, Colonial Pipeline Company formally announced its plans. A press release stated that the nine companies "launched the largest single, privately financed construction project in the history of the United States." The initial investment by the nine companies approached $370 million. R.J. Andress was named president of the newly formed company.:p.16 Constructing the Colonial Pipeline required 600,000 tons of steel and the trenching of 16.7 million cubic yards of earth to bury the pipeline. The system initially included 27 pumping stations to move refined product between Houston, Texas and Linden, New Jersey.:p.16 A ceremonial ground-breaking near Atlanta, the pipeline's eventual headquarters, on June 20, 1962, was attended by U.S. Commerce Secretary Luther Hodges and company, city and state officials.:p.2 On July 2, 1962, Colonial Pipeline Company solicited bids from contractors to build 15 segments of the pipeline's mainline. Each segment averaged 100 miles and employed 200–300 workers, with work progressing at roughly one mile per day per segment.:p.19 The first lengths of pipe were delivered by rail, barge, and specially constructed road trailers built to handle 80-foot double joints.:p.19–20 Construction started on August 1, 1962, in Mississippi. In December 1962, Ben "Tex" Leuty was named president of Colonial Pipeline Company; he had earlier served as vice president and general manager overseeing construction of the pipeline.:p.20 Engineers faced many challenges in constructing the pipeline. Chief among these was designing and constructing valves capable of opening and closing 2-ton steel gates quickly enough to prevent the intermingling of different products. Electric motors took 3 minutes to close the massive gates, allowing 2,400 barrels of product to intermix (a flow of roughly 800 barrels per minute past the valve) and rendering that product unusable. Colonial engineers designed a hydraulic system that cut the intermixing (and loss) to 120 barrels per product changeover, implying a closing time of roughly nine seconds.:p.21 The first "linefill" of Colonial began the morning of September 16, 1963 in Houston. It was shut down that same day because of forecasts of a developing major storm.
Two days later Hurricane Cindy (1963) struck the Gulf Coast.:p.21 Product reached Greensboro, North Carolina for the first time in November 1963.:p.22 Over the next several months, product was delivered to markets farther north in the Southeast and Mid-Atlantic states. On April 27, 1964, the first batch of refined product was delivered to the Roanoke, Virginia area.:p.22 On June 2, 1964, Colonial made its first delivery to the Baltimore, Maryland–Washington, D.C. area.:p.21 On December 1, 1964, mainline construction of the Colonial Pipeline was completed, and the Linden Junction Tank Farm and Delivery Facility in New Jersey was activated.:p.23 The Colonial Pipeline system was fully operational on December 18, 1964.:p.24 The Colonial system averaged a throughput of 636,553 barrels of refined product a day in 1965, its first full year of operation.:p.25 Fred Steinberger was elected president of Colonial Pipeline Company on July 26, 1965, taking control in October.:p.25 By February 1966, Colonial was averaging a daily throughput of 776,883 barrels of refined product, surpassing the 600,000 barrels per day estimated when construction began just a few years before.:p.33 In May 1966, Colonial began phase one of an expansion project to add 18 intermediate booster stations to add horsepower to the system, increasing product flow through the mainline between Selma, North Carolina and Greensboro, North Carolina.:p.34 The Colonial Pipeline board of directors approved phases 2 and 3 of its early expansion projects to increase capacity on its mainline to 1 million barrels per day.:p.34–35 Phase two of the expansion was completed in November 1967, adding additional pump units and a new stubline from Mitchell, Virginia to Roanoke, Virginia.:p.35 "Looping", or adding a second line parallel to the first, began in 1971; this construction continued through 1980, essentially doubling the capacity of the pipeline system. The second line was staffed by 593 employees.:p.36 Colonial's average throughput increased to 1,584,000 barrels per day.:p.36 Colonial's ownership increased to 10 shareholders: Atlantic Richfield Company; BP Oil Company; Cities Service Company; Continental Pipe Line Company; Mobil Pipe Line Company; Phillips Investment Company; Texaco, Inc.; The American Oil Company; The Toronto Pipe Line Company; and Union Oil Company.:p.37 Colonial Pipeline Company named Tom Chilton as president and CEO.:p.37 Colonial Pipeline announced the construction of a 40-inch loop line from Atlanta, Georgia to Greensboro, North Carolina, and a 16-inch lateral loop between Greensboro, North Carolina and Selma, North Carolina; these improvements were estimated to increase system capacity by nearly 20 percent, to two million barrels per day.:p.38–39 On November 3, 1978, the new 40-inch line from Atlanta to Greensboro was placed into service.:p.40 Colonial became the first company to equip gasoline storage tanks with geodesic domes.:p.40 Colonial updated its Atlanta control center with a new generation of its computerized SCADA system.:p.43 An expansion project totaling $670 million neared completion in 1980.
The Colonial Pipeline system's capacity had increased by 83 percent compared to when the system first opened in 1964.:p.41 Colonial began deliveries to the Department of Defense's Defense Fuel Supply Command (DFSC).:p.41 Colonial began using caliper and magnetic pigs to detect anomalies in its pipeline system.:p.42 Colonial's annual throughput reached 635.6 million barrels in 1988.:p.48 In September 1988, Colonial replaced 7,700 feet of mainline pipe across the Delaware River at a cost of $10 million.:p.48 Colonial's annual throughput hit 667.8 million barrels in 1990, a record volume for the company.:p.49 Colonial Pipeline Company moved its corporate headquarters in Atlanta from Lenox Towers to Resurgens Plaza in 1991.:p.52 In 1992, Colonial's annual throughput reached 676.2 million barrels.:p.52 Colonial completed 4,000 miles of pipeline inspections with caliper pigs and corrosion inspections on 3,000 miles of pipe with magnetic pigs.:p.52–53 Colonial introduced elastic-wave pigs to inspect for and detect microscopic cracks in the pipeline walls in 1996.:p.56 On March 26, 1997, Colonial Pipeline Company was one of ten companies recognized for quality service by the Department of Defense's Military Traffic Management Command.:p.57 Colonial president and CEO Donald Brinkley retired, and David Lemmon was named president and CEO.:p.58 Colonial replaced its Pipeline Instruction and Proficiency Examination with a computer-based training program for operations and environmental field staff.:p.64 Colonial expanded its crack-pig internal inspection program, a key element of system integrity.:p.64 As a precautionary measure, on December 31, 1999, Colonial Pipeline shut down operations for a few hours before and after midnight to prevent any Y2K-related power outages.:p.69 Colonial announced plans in 2000 to increase pump power on the mainline, which would increase daily capacity by 144,000 barrels, to 2.35 million barrels per day.:p.69 On July 27, Colonial Pipeline announced that it had acquired the Alliance Products Pipeline and Terminal System from BP Amoco.:p.71 Colonial Pipeline Company was recognized by API for its safety and environmental record, receiving the first "Distinguished Environmental and Safety Award".:p.74 In September 2001, Colonial Pipeline Company moved its headquarters from Atlanta to suburban Alpharetta, Georgia.:p.74 In the wake of the September 11, 2001 terrorist attacks on the United States, Colonial increased security at each of its facilities and created a comprehensive security plan, later recognized by the federal government as a model for the pipeline industry.:p.74 Colonial Pipeline marked a record year with an average throughput of 2.3 million barrels per day.:p.74 Following Hurricane Ike in September 2008, the Colonial Pipeline operated at severely reduced capacity due to a lack of supply from Gulf Coast refineries that had closed, causing gasoline shortages across the southeastern United States. Colonial Pipeline's field operations are divided into three districts:
- The Gulf Coast District includes Texas, Louisiana and Mississippi, and is primarily responsible for Colonial's originating deliveries. Colonial primarily draws products from refineries along the U.S. Gulf Coast; it also uses a few refineries in the Northeast.:p.6
- The Southeast District includes Alabama, Tennessee, Georgia, South Carolina and North Carolina. The company's second-largest tank farm is in suburban Atlanta.
Local supplies are delivered from here, and it is the origin of pipelines serving Tennessee and southern Georgia. The company's largest tank farm is in Greensboro, North Carolina, where the two mainlines originating in Houston terminate; deliveries to the Northeast originate from Greensboro.
- The Northeast District's operations include Virginia, Maryland and New Jersey. Colonial's Northeast operations also serve Delaware and Pennsylvania. In Linden, New Jersey, Colonial operates the Intra-Harbor Transfer system, which provides numerous customers the ability to transfer products among themselves and to access barge transportation for exporting product.
Colonial connects directly to several major airports, including Atlanta, Nashville, Charlotte, Greensboro, Raleigh-Durham, Dulles, and Baltimore-Washington. It serves metropolitan New York airports via connections with Buckeye Pipeline.:p.63 Colonial's products move with great regularity on the pipeline. Shipments are primarily fungible, but segregated shipments are possible and occur regularly. Fungible shipments are products commingled with other quantities of the same product specifications; segregated batches preserve a fuel property not allowed in the fungible specifications. All products delivered on Colonial must pass an oversight test program to assure quality. Colonial protects the quality of the products it carries to the point of excluding certain products. For example, biodiesel contains fatty-acid methyl esters (FAME), which cannot be allowed to mix into jet fuels moving in the same pipeline.
- 1978 – Colonial became the first company to equip gasoline storage tanks with geodesic domes.:p.40
- 1985 – Colonial began using caliper and magnetic pigs to detect anomalies in the lines.:p.42
- 1994 – Following a historic flood that ruptured a number of pipelines at the San Jacinto River near Houston, Texas, Colonial directionally drilled 30 feet beneath the river and floodplain to install two new 3,100-foot permanent pipelines.:p.60
- Early on September 2, 1970, residents of Jacksonville, Maryland, detected gasoline odors and noticed gasoline in a small creek flowing beneath a nearby road. That afternoon, at 6:19 p.m., a resident notified Colonial of the concern. Colonial had a 30-inch-diameter pipeline situated about 1,700 feet east of the point where the creek passed under the road, and it shut down the Dorsey Junction, Maryland, pump station (the initial pump station for this section of the pipeline) at 6:34 p.m. About 12 hours later, on the morning of September 3, an explosion and fire occurred in a ditch where Colonial contract workers were digging by hand to expose the pipeline and catch gasoline trickling from the ground. Five persons were injured, none fatally. The leak point was found four days later. The failure resulted in a release of 30,186 gallons (718 barrels) of gasoline and kerosene.
- At 9:51 p.m. on December 19, 1991, Colonial's Line 2, a 36-inch-diameter petroleum products pipeline, ruptured about 2.8 miles downstream of the company's Simpsonville, South Carolina, pump station. The rupture allowed more than 500,000 gallons (13,100 barrels) of diesel fuel to flow into Durbin Creek, causing environmental damage that affected 26 miles of waterways, including the Enoree River, which flows through Sumter National Forest. The spill also forced Clinton and Whitmire, South Carolina, to use alternative water supplies.
- On Sunday, March 28, 1993, at 8:48 a.m., a pressurized 36-inch-diameter petroleum product pipeline owned and operated by Colonial Pipeline Company ruptured near Herndon, Virginia, a Washington, D.C. suburb. The rupture created a geyser that sprayed diesel fuel more than 75 feet into the air, coating overhead power lines and adjacent trees and misting nearby Virginia Electric & Power Company buildings. The diesel fuel spewed from the rupture into an adjacent stormwater management pond and flowed overland and through a network of storm sewer pipes before reaching Sugarland Run Creek, a tributary of the Potomac River.
- In October 1994, after heavy rainfall in the Houston area, eight pipelines failed and 29 others were damaged; two of the failures were Colonial Pipeline lines. The failures occurred at a crossing of the San Jacinto River. The river, which normally flows at 2.5 feet above sea level, crested at 28 feet above sea level on October 21, and the flooding caused major soil erosion. Colonial's 40-inch gasoline pipeline failed on October 20 at 8:31 a.m., and by 9:51 a.m. explosions and fires had erupted on the river. Colonial's 36-inch diesel pipeline ruptured about 2 p.m. the same day, although it had previously been taken temporarily out of service, which limited the amount of the spill.
- On June 26, 1996, a 36-inch-diameter Colonial pipeline ruptured at the Reedy River, near Fork Shoals, South Carolina. The ruptured pipeline released about 957,600 US gallons (3,625,000 L) of fuel oil into the Reedy River and surrounding areas. The spill polluted a 34-mile (55 km) stretch of the Reedy River, causing significant environmental damage; floating oil extended about 23 miles (37 km) downriver. Approximately 35,000 fish were killed, along with other aquatic organisms and wildlife. The estimated cost to Colonial Pipeline for cleanup and settlement with the State of South Carolina was $20.5 million. No one was injured in the accident. The pipeline was operating at reduced pressure due to known corrosion issues, but pipeline-operator confusion led to an accidental return to normal pressure in that section, causing the rupture.
- On May 30, 1997, Colonial Pipeline spilled approximately 18,900 US gallons (72,000 L) of gasoline, some of which entered an unnamed creek and its adjoining shoreline in the Bear Creek watershed near Athens, Georgia. During the spill, a vapor cloud of gasoline formed, causing several Colonial employees to flee for safety. This spill resulted from a calculation error related to a regular procedure; no one checked the calculations, nor did Colonial have a procedure in place to check such calculations.
- In February 1999, in Knoxville, Tennessee, Colonial spilled approximately 53,550 gallons (1,275 barrels) of fuel oil, some of which entered Goose Creek and the Tennessee River, polluting approximately eight miles of the Tennessee River. The released fuel saturated 10 homes in the area and caused the evacuation of six homes. The day before the spill, Colonial found anomalies on the pipeline, including in the area where the rupture later occurred, but did nothing. At the time of the spill, Colonial received information of a sudden drop in pipeline pressure. Despite receiving this information indicating a leak, Colonial continued to send fuel oil through the line.
Colonial briefly shut down the pipeline, but reopened it and sent fuel oil again, despite continued indications of a leak, until it was alerted by the Knoxville Fire Department that Colonial's fuel was running into Goose Creek.
- On Wednesday, October 3, 2012, Colonial Pipeline shut down lines 19 and 20 in Chattanooga, Tennessee, due to reports of gasoline odors. Reuters reported that about 500 gallons of gasoline may have been released. The line carrying gasoline was repaired, and the distillate line, which carries diesel fuel, jet fuel and other products, was inspected and found to be undamaged. Both lines were restarted two days later, on October 5, 2012.
- On Friday, September 9, 2016, a leak was detected in Shelby County, Alabama, spilling an estimated 252,000 US gallons (954,000 L) of summer-grade gasoline, requiring a partial shutdown of the pipeline and threatening fuel shortages in the southeastern United States. This was Colonial's "biggest spill in nearly two decades." It caused a "12-day interruption in the flow of about 1.3 million barrels per day of the fuel from the refining hub on the Gulf Coast to the Northeast."
- On October 31, 2016, a Colonial Pipeline mainline exploded and burned in Shelby County, Alabama, after accidentally being struck by a trackhoe during repairs related to the September event. One worker died at the scene, and five others were hospitalized, one of whom later died of his injuries, raising the death toll to two. The explosion occurred several miles from the site of the September 9, 2016 breach. On November 1, 2016, the U.S. Occupational Safety and Health Administration had control of the site, where the fire was still burning. The shutdown primarily affected the Southeast, as Northeast markets can receive some fuel via ships. The line returned to service November 6.
Safety and environmental record
As a result of seven different spills on the Colonial Pipeline in four years in the 1990s, the United States Environmental Protection Agency (EPA) filed a complaint against Colonial in 2000 for violations of the Clean Water Act. It alleged gross negligence specifically in three cases noted above: 1996 Reedy River, 1997 Bear Creek, and 1999 Goose Creek/Tennessee River. The parties reached a settlement with Colonial Pipeline that was announced on April 1, 2003. Colonial was required to pay a civil penalty of $34 million, the "largest a company has paid in EPA history." "Under the consent decree, Colonial will upgrade environmental protection on the pipeline at an estimated cost of at least $30 million." In this period, Colonial received the American Petroleum Institute (API)'s Distinguished Environmental and Safety Award for four consecutive years (1999–2002).:pp.67,71 Some of these awards were made after the EPA had filed its complaint against the company for violations of the Clean Water Act, and prior to the landmark civil penalty assessed in the settlement of the civil case. In 2005, Hurricane Katrina knocked out power in large parts of Mississippi and Louisiana, forcing Colonial to operate at reduced flow rates. The company rented portable generators to help restore partial service as utilities recovered and restored normal service. When Hurricane Rita hit a month later, Colonial used these generators to help load product stranded in refinery storage tanks that did not have power. By the time hurricanes Gustav and Ike struck in 2008, Colonial owned and operated this set of emergency generators.
It purchased a new set of generators in 2012 and stationed them in Mississippi, inland and out of the direct path of most storms. In August 2017, the pipeline was shut down in the wake of Hurricane Harvey after the storm forced refineries and some of the pipeline's facilities to close.
Representation in media
The enormous scale of the Colonial Pipeline project attracted considerable media attention. Fortune magazine featured the project as its cover story in February 1963.:p.20 Colonial was featured in an August 1964 edition of TIME magazine in an article titled "The Invisible Network: A Revolution Underground.":p.23 An article in a late 1965 edition of Pipeline Magazine included: "Colonial Pipeline will perhaps do more to change America's transportation and marketing operations in the East and South than any single undertaking in which our country has participated in recent years.":p.25
Articles and resources
- Devika Krishna Kumar, "Colonial may open key U.S. gasoline line by Saturday after fatal blast", Reuters (US), 1 November 2016; accessed 3 November 2016.
- Enforcement, "Colonial Pipeline Company Clean Water Act Settlement", Environmental Protection Agency, 2003; accessed 3 November 2016.
- FAQ, Colonial Pipeline website.
- Colonial Pipeline Additional Facility 2019, IJGlobal, accessed Aug. 10, 2020.
- Colonial Pipeline Additional Facility 2020, IJGlobal, accessed Aug. 10, 2020.
- Parker, Barry; Robin Hood (2002). Colonial Pipeline: Courage, Passion, Commitment. Chattanooga, TN: Parker Hood. ISBN 0-9645704-8-3.
- Bonnier Corporation (1 October 1963). "Popular Science". Retrieved 17 September 2016 via Google Books.
- "Home". Colonial Pipeline.com. Retrieved 17 September 2016.
- "Potential Impacts of Reductions in Refinery Activity on Northeast Petroleum Product Markets", U.S. Energy Information Administration, May 2012.
- "Colonial Pipeline Provides 'Call Before You Dig' Message", Pipeline News, October 2012.
- "Archived copy". Archived from the original on 2012-10-01; retrieved 2012-10-09.
- "Archived copy" (PDF). Archived from the original on 2012-11-16; retrieved 2012-10-09.
- "Archived copy" (PDF). Archived from the original on 2012-10-09; retrieved 2013-01-28. National Transportation Safety Board.
- NTSB Report, National Chemical Safety Program, Texas A&M University.
- "Safety Studies", National Transportation Safety Board.
- "Colonial at Fork Shoals", EPA Compliance Cases.
- David L. Lemmon, "How Colonial Pipeline Recovered from a Devastating Spill and Set New Industry Standards", Pipeline & Gas Journal, Oct 2003, Vol. 230, Issue 10, p. 20.
- 1998 Report, National Transportation Safety Board.
- "Colonial Pipeline reports Tennessee gasoline spill". Reuters. 2012-10-04. Retrieved 2016-11-01.
- "Press Release", Colonial Pipeline.
- "Colonial Pipeline: 252,000 gallons of gasoline have leaked in Shelby County". WBRC. 2016-09-13.
- "Leaking Alabama pipeline could restart this week, company says". AL.com. Retrieved 2016-11-01.
- "Helena – Colonial Pipeline Response". colonialresponse.com. Retrieved 1 November 2016.
- "Company: Blast-damaged gasoline line back in service". News & Observer. 2016-11-06. Retrieved 2016-11-07.
- "Isaac to Test Power System Upgrades After Katrina's Blackouts", Bloomberg Businessweek, 28 August 2012. - Harvey shuts down major fuel pipeline supplying East Coast, CNN, 31 Aug. 2017
Gleaning Clues from Aristotle to Charles S. Peirce
We have recently talked much about the use of knowledge bases in areas such as artificial intelligence and knowledge supervision. The idea is to leverage the knowledge codified in these knowledge bases, Wikipedia being the most prominent exemplar, to guide feature selection or the creation of positive and negative training sets to be used by machine learners. The pivotal piece of information that enables knowledge bases to perform in this way is a coherent knowledge graph of concepts and entity types. As I have discussed many times, the native category structure of Wikipedia (and all other commonly used KBs) leaves much to be desired. It is one of the major reasons we are re-organizing KB content using the UMBEL reference knowledge graph. The ultimate requirement for the governing knowledge graph (ontology) is that it be logical, consistent and coherent. It is from this logical structure that we can provide the rich semsets for semantic matches, make inferences, understand relatedness, and make disjointedness assertions. In the context of knowledge-based artificial intelligence (KBAI) applications, the disjointedness assertions are especially important in aiding the creation of negative training sets through knowledge supervision. Coherent and logical graphs first require natural groupings or classes of concepts and entity types by which to characterize the domain at hand, situated with respect to one another with testable relations. Entity types are further characterized by a similar graph of descriptive attributes. Concepts and entity types thus represent the nodes in the graph, with relations being the connecting infrastructure. Going back at least to Aristotle, how to properly define and bound categories and concepts has been a topic of much philosophical discussion. If the nodes in our knowledge graph are not scoped and defined in a consistent way, then it is virtually impossible to construct a logical and coherent way to reason over the structure. This inconsistency is the root source of the problem that Wikipedia cannot presently act as a computable knowledge graph, for example. This article thus describes how Structured Dynamics informs its graph-construction efforts built around the notion of "natural classes." Our use and notion of "natural classes" hews closely to how we understand the great American logician, Charles S. Peirce, came to define the concept. Natural classes were a key underpinning of Peirce's own efforts to provide a uniform classification system related to inquiry and the sciences.
Humanity's Constant Effort to Define and Organize Our World
Aristotle set the foundational basis for understanding what we now call natural kinds and categories. The universal desire by all of us to be able to understand and describe our world has meant that philosophers have argued these splits and their bases ever since. In very broad terms we have realists, who believe things have independent order in the natural world and can be described as such; we have nominalists, who believe that humans provide the basis for how things are organized, in part by how we name them; and we have idealists, or anti-realists, who believe "natural" classes are generalized ones that conform to human ideals of how the world is organized, but are not independently real.
These categories, too, shade into one another, such that these beliefs become strains, in various degrees, in how any one individual might be defined. The realist strain, closely tied to the sciences and the scientific method, is what most guides the logic behind semantic technologies and SD's view of how to organize the world. Aristotle believed that the world could be characterized into categories, that categories were hierarchical in nature, and that what defined a particular class or category was its essence, the attributes that uniquely define what a given thing is. A mammal has the essences of being hairy, warm-blooded, and bearing live young; these essences distinguish it from other types of animals such as birds, reptiles, fishes, or insects. Essential properties are different from accidental or artificial distinctions, such as whether a man has a beard or not, or whether he is gray- or red-haired, or of a certain age or country. A natural classification system is one that is based on these real differences and not on artificial or single ones. Hierarchies arise from the shared generalization of such essences amongst categories or classes. Under the Aristotelian approach, classification is the testing and logical clustering of such essences into more general or more specific categories of shared attributes. Because these essences are inherent to nature, natural clusterings are an expression of true relationships in the real world. By the age of the Enlightenment, these long-held philosophies began to be questioned by some. Descartes famously grounded the perception of the world in innate ideas in the human mind. This philosophy built upon that of William of Ockham, who maintained that the world is populated by individuals and that no such things as universals exist. In various guises, thinkers from Locke to Hume questioned a solely realistic organization of concepts in the world. While there may be "natural kinds", categorization is also an expression of the innate drive by humans to name and organize their world. Relatedness of shared attributes can create ontological structures that enable inference and a host of graph-analytics techniques for understanding meaning and connections. For such a structure to be coherent, the nodes (classes) of the structure should be as natural as possible, with uniformly applied relations defining the structure. Thus, leaving behind metaphysical arguments and relying solely on what is pragmatic, effectively built ontologies compel the use of a realistic viewpoint for how classes should be bounded and organized. Science and technology are producing knowledge in unprecedented amounts, and realism is the best approach for testing the trueness of new assertions. We think realism is the most efficacious approach to ontology design. One of the reasons that semantics are so important is that the language used to capture the diversity of the real world must be able to be meaningfully related. Being explicit about the philosophy behind how we construct ontologies helps decide sometimes sticky design questions.
Unnatural Classifications Instruct What is Natural
These points are not academic. The central failing, for example, of Wikipedia has been its category structure.
Categories have strayed from a natural classification scheme, and many categories are "artificial" in that they are compound or distinguished by a single attribute. "Compound" (or artificial) categories (such as Films directed by Pedro Almodóvar or Ambassadors of the United States to Mexico) are not "natural" categories, and including them in a logical evaluation only acts to confuse attributes with classification. To be sure, such existing categories should be decomposed into their attribute and concept components, but they should not be included in constructing a schema of the domain. "Artificial" categories may be identified in the Wikipedia category structure by both syntactic and heuristic signals. One syntactic rule is to look at the head of a title; one heuristic signal is to select out any category with prepositions (a minimal sketch of such a filter appears at the end of this article). Across all rules, "compound" categories actually account for most of what is removed in order to produce "cleaned" categories. We can combine these thoughts to show what a "cleaned" version of the Wikipedia category structure might look like. The 12/15/10 column in the table below reflects the approach used for determining the candidates for SuperTypes in the UMBEL ontology, last analyzed in 2010. The second column is from a current effort mapping Wikipedia to Cyc. Two implications can be drawn from this table. First, without cleaning, there is considerable "noise" in the Wikipedia category structure, equivalent to about half to two-thirds of all categories. Without cleaning these categories, any analysis or classification that ensues is fighting unnecessary noise and has likely introduced substantial assignment errors. Second, the power that comes from a coherent schema of categories and concepts — especially inference and graph analysis — cannot be applied to a structure that is not constructed along realistic lines. We can expand on this observation by bringing in our best logician on information, semeiosis and categories, Charles S. Peirce.
Peirce's Refined Arguments of a Natural Class
Peirce was the first, by my reading, who looked at the question of "natural classes" with enough rigor to provide design guidance. "Natural classes" may sometimes be contraposed against what are called "artificial classes" (we tend to use the term "compound" classes instead). A "natural class" is a set whose members share the same set of attributes, though with different values (such as differences in age or hair color for humans, for example). Some of those attributes are also more essential in defining the "type" of that class (such as humans being warm-blooded, with live births, hair, and the use of symbolic languages). Artificial classes tend to add only one or a few shared attributes, and do not reflect the essence of the type. The most comprehensive treatment of Peirce's thinking on natural classes was provided by Menno Hulswit in 1997. He first explains the genesis of Peirce's thinking:
"The idea that things belong to natural kinds seems to involve a commitment to essentialism: what makes a thing a member of a particular natural kind is that it possesses a certain essential property (or a cluster of essential properties), a property both necessary and sufficient for a thing to belong to that kind."
"According to Mill, every thing in the world belongs to some natural class or real kind. Mill made a distinction between natural classes and non-natural or artificial classes (Mill did not use the latter term).
The main difference is that the things that compose a natural class have innumerous properties in common, whereas the things that belong to an artificial class resemble one another in but a few respects."
"Accordingly, a natural or real class is defined as a class 'of which all the members owe their existence to a common final cause' (CP 1.204), or as a class the 'existence of whose members is due to a common and peculiar final cause' (CP 1.211). The final cause is described in this context as 'a common cause by virtue of which those things that have the essential characters of the class are enabled to exist' (CP 1.204)."
"Peirce concluded from these observations that the objects that belong to the same natural class need not have all the characters that seem to belong to the class. After thus having criticized Mill, Peirce gave the following definition of natural class (or real kind): '. . . natural classification of artificial objects is a classification according to the purpose for which they were made.'"
"The problem of natural kinds is important because it is inextricably linked to several philosophical notions, such as induction, universals, scientific realism, explanation, causation, and natural law."
This background sets up Hulswit's interpretation of how Peirce's views on natural classification then evolved:
"Peirce's approach was broadly Aristotelian inasmuch as natural classification always concerns the form of things (which is that by virtue of which things are what they are) and not their matter. This entails that Peirce borrowed Aristotle's idea that the form was identical to the intrinsic final cause. Therefore it was obvious that natural classification concerns the final causes of things. From the natural sciences, Peirce had learned that the forms of chemical substances and biological species are the expression of a particular internal structure. He recognized that it was precisely this internal structure that was the final cause by virtue of which the members of the natural class exist."
"Accordingly, Peirce's view may be summarized as follows: Things belong to the same natural class on account of a metaphysical essence and a number of class characters. The metaphysical essence is a general principle by virtue of which the members of the class have a tendency to behave in a specific way; this is what Peirce meant by final cause. This finality may be expressed in some sort of microstructure. The class characters, which by themselves are neither necessary nor sufficient conditions for membership of a class, are nevertheless concomitant. In the case of a chair, the metaphysical essence is the purpose for which chairs are made, while its having chair-legs is a class character. The fuzziness of boundary lines between natural classes is due to the fuzziness of the class characters. Natural classes, though very real, are not existing entities; their reality is of the nature of possibility, not of actuality. The primary instances of natural classes are the objects of scientific taxonomy, such as elementary particles in physics, gold in chemistry, and species in biology, but also artificial objects and social classes."
"By denying that final causes are static, unchangeable entities, Peirce avoided the problems attached to classical essentialism. On the other hand, by eliminating arbitrariness, Peirce also avoided pluralistic anarchism.
Though Peircean natural classes only come into being as a result of the abstractive and selective activities of the people who classify, they reflect objectively real general principles. Thus, there is not the slightest sense in which they are arbitrary: 'there are artificial classifications in profusion, but [there is] only one natural classification' (CP 1.275; 1902)."
Importantly, note that "natural kinds" or "natural classes" are not limited to things found in nature. Peirce's semiotics (theory of signs) also recognizes "natural" distinctions in arenas such as social classes, the sciences, and man-made products. Again, the key discriminators are the essences of things that distinguish them from other things, and the degree of sharing of attributes contains the basis for understanding relationships and hierarchies.
Natural Classes Can Be Tested, Reasoned Over, and Are Mutable
Though all of this sounds somewhat abstract and philosophical, these distinctions are not merely metaphysical. The ability to organize our representations of the world into natural classes also carries with it the ability to organize that world, reason over it, draw inferences from it, and truth-test it. Indeed, as we may discover through knowledge acquisition or the scientific method, this world representation is itself mutable. Our understanding of species relationships, for example, has changed markedly, especially most recently, as the basis for our classifications shifts from morphology to DNA. Einstein's challenges to Newtonian physics similarly changed the "natural" way in which we need to organize our understanding of the world. When we conjoin ideas such as Shannon's theory of information with Peirce's sophisticated and nuanced theory of signs, other insights begin to emerge about how the natural classification of things ("information") can produce leveraged benefits. In linking these concepts together, de Tienne has provided some explanation of how Peirce's view of information relates to information theory and efficient information messaging and processing:
"For a propositional term to be a predicate, it must have 'informed breadth'; that is, it must be predicable of real things, 'with logical truth on the whole in a supposed state of information.' . . . For a propositional term to be a subject, it must have 'informed depth'; that is, it must have real characters that can be predicated of it, also 'with logical truth on the whole in a supposed state of information'."
"Peirce indeed shows that induction, by enlarging the breadth of predicate terms, actually increases the depth of subject terms—by boldly generalizing the attribution of a character from selected objects to their collection—while hypothesis, by enlarging the depth of subject terms, actually increases the breadth of predicate terms—by boldly enlarging their attribution to new individuals. Both types of ampliative inferences thus generate information."
". . . information is not a mere sum of quantities, but a product, and this distinction harbors a profound insight. When Peirce began defining, in 1865, information as the multiplication of two logical quantities, breadth and depth (or connotation and denotation, or comprehension and extension), it was in recognition of the fact that information was itself a higher-order logical quantity not reducible to either multiplier or multiplicand.
Unlike addition, multiplication changes dimensionality—at least when it is not reduced, as is often the case in schoolbooks, to a mere additive repetition. Information belongs to a different logical dimension, and this entails that, experientially, it manifests itself on a higher plane as well. Attributing a predicate to a subject within a judgment of experience is to acknowledge that the two multiplied ingredients, one the fruit of denotation, the other of connotation, in their very multiplication or copulative conjunction, engender a new kind of logical entity, one that is not merely a fruit or effect of their union, but one whose anticipation actually caused the union."
The essence of knowledge is that it is ever-growing and expandable. New insights bring new relations and new truths. The structures we use to represent this knowledge must themselves adapt and reflect the best of our current, testable understandings. Keeping in mind the need for all of our classes to be "natural" — that is, consistent with testable, knowable truth — is a key building block in how we should organize our knowledge graphs. Similar inspection can be applied to the relations used in the knowledge graph, but I will leave that discussion to another day. Though hardly simple, the re-classification of Wikipedia's content into a structure based on "natural classes" will bring heretofore unseen capabilities in coherence and computability to the knowledge base. Similar benefits can be obtained from any knowledge base that is presently characterized by an unnatural structure. We now have both tests and guidelines — granted, still being discerned from Peirce's writings and their logic — for what constitutes a "natural class". "Natural classes" are testable; we not only know one when we see it, we can systematize their use. Classifying a class as "natural" does entail aspects of judgment and world view. But so long as the logics and perspectives behind these decisions are consistent, I believe we can create computable knowledge graphs that cohere following these tests and guidelines. Some may question whether any given structure is more "natural" than another. But through such guideposts as coherence, inference, testability and truthfulness, these structural arrangements are testable propositions. As Peirce, I think, would admonish us, failure to meet these tests is grounds for re-jiggering our structures and classes. In the end, coherence and computability become the hurdles that our knowledge graphs must clear in order to be reliable structures.
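Finally, to tie this back to the Wikipedia category cleaning discussed earlier, here is a minimal sketch of the kind of syntactic filter that flags "compound" categories. The preposition list and the single rule are illustrative assumptions on my part, not Structured Dynamics' actual cleaning pipeline, which applies many more rules.

```python
import re

# Prepositions that typically signal a "compound" (artificial) category.
# ASSUMPTION: this list and the single rule are illustrative only.
PREPOSITIONS = {"by", "of", "in", "from", "for", "to", "with", "about"}

def is_compound_category(title: str) -> bool:
    """Flag Wikipedia categories that conflate a class with an attribute."""
    words = re.findall(r"[a-z]+", title.lower())
    return any(word in PREPOSITIONS for word in words)

for cat in ("Films directed by Pedro Almodóvar",
            "Ambassadors of the United States to Mexico",
            "Mammals"):
    label = "compound" if is_compound_category(cat) else "natural candidate"
    print(f"{cat} -> {label}")
# Films directed by Pedro Almodóvar -> compound
# Ambassadors of the United States to Mexico -> compound
# Mammals -> natural candidate
```

Even a single heuristic of this sort catches a large share of the noise discussed above; in practice it would be combined with head-of-title parsing and review before any schema construction.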
This post may contain affiliate links. We may earn a small commission from purchases made through them, at no additional cost to you.

Many types of home crafts exist for you to experiment and have fun with. One such art form is painting polymer clay and learning how to color it. Following your polymer painting, the clay is usually placed into the oven to set the paint. There is a multitude of items that you can create when you start learning to paint polymer clay, including ornaments, adornments, and jewelry pieces. There is a range of techniques for polymer painting that let you really explore and tap into your creativity.

Table of Contents
- 1 A First Glance at Polymer Clay
- 2 The Best Polymer Clay Paint
- 3 How to Color Polymer Clay Yourself
- 4 How to Paint Polymer Clay Successfully
- 5 Possible Problems with Painting Clay
- 6 Valuable Tips for Painting Clay
- 7 Frequently Asked Questions

A First Glance at Polymer Clay

This type of clay is a wonderful medium as it is easy to use and manipulate. You also only require a normal home oven to bake your clay creations; a kiln is not necessary. The ingredients of this man-made clay include resin, possible colorant, filler, and polymers. Painting polymer clay is suitable for any age or experience level, although children should obviously be supervised.

The clay comes in various colors, and acrylic paint is suggested for polymer painting as it is good for fine work on your item. There are various other techniques that you can use, including coloring the polymer clay with powdered pigment, ink, or chalk and then texturing the surface by sanding or polishing the clay.

The Best Polymer Clay Paint

The best paint for polymer clay is usually not oil-based, so acrylics are a good option. Watercolor paint is not recommended, however, as it can bubble if it goes onto the clay before baking; it contains more water than acrylics do. Aside from acrylics, you may also want to try out the following paints for painting clay.

- Alcohol-based ink: These inks consist of colored dyes within an alcohol base. You can paint the unbaked clay or apply the ink afterward when the clay is set.
- Oil paint: These paints are used to paint polymer clay that has already been baked; the piece then needs to be baked a second time to ensure that the paint dries and sets. A marbling technique can also be used by adding the paint to unbaked clay.

It is always wise to test a small area of clay to see how your paint for polymer clay works best and the various effects you can create.

Our Recommendations for the Best Acrylic Paint for Polymer Painting

Finding the best polymer clay paint can be overwhelming with all the varieties of polymer and paint on the market. Water-based acrylics are commonly used as they are inexpensive, non-toxic, readily available, and easy to apply. For the best results, look for mid-range to fairly viscous paint, since thicker paint has better adherence qualities.

Best Artist-Level Acrylic Paints: GOLDEN Acrylic Heavy Bodied Paint

Golden high-viscosity acrylic paint is available in individual paint tubes in a variety of colors. These paints have a smooth, even texture with even coverage; no more than one coat should be necessary. The pigment level is high, and these paints are quick-drying with a simple application technique.
Best Acrylic Paint for Beginners: DECOART Crafter's Acrylic Value Pack

DecoArt Acrylic Paint is more of a craft paint and is therefore less expensive. This range offers a smooth and even finish with a variety of colors on offer. These acrylics are effective on numerous different surfaces such as metal, timber, wax candles, clay, and others. Colors mix easily, and application, as well as cleaning, is simple.

Best Acrylic Paint Set: CRAFTS 4 ALL Acrylic Paint Set

This range from Crafts 4 All is good value for money and is useful for both professional artists and those new to crafts. The set offers a good range of colors that are compatible with a variety of different surfaces. These are viscous paints that blend well, and the kit also includes brushes.

How to Color Polymer Clay Yourself

You can purchase clay that has already been colored, and you can even mix different colors of clay to create a shade of your own. Your other option is to color your own clay. As mentioned, acrylic paint in polymer clay is a good option. As it is water-based, it is important to allow the clay to dry sufficiently, which takes about three days. Only after that can it be baked. How the colors end up when baked really does depend on the type of paint that you use. Water- and alcohol-based paints may come out lighter or less radiant than more pure coloring methods.

Colored mica powder gives you a good range of color options, even adding metallics and pearls to your creation. You would typically use these prior to baking the clay. You can also try out powdered makeup, like a blusher, for a similar effect. You can combine these in the clay or on the surface to give it sheen. Chalk can also be used successfully when crushed into powder form and mixed with unbaked clay. It also works to make paint from the chalk, which can work well on the outside of your project.

Painting clay is really versatile, and you can let your imagination run loose. You can add coloring inks, wax crayon shavings (using a craft knife), metallic foil, glitter, or various art pens to your work. You can combine several different ideas and just enjoy experimenting with the creative process. Below you will find a summary table of the different coloring agents that you can use to color polymer clay.

| Coloring Agent | Baking Considerations | Cost | How to Use |
|---|---|---|---|
| Mica powder | Bake the clay normally, as no liquid needs to evaporate | More expensive | Use the powder on the outside of the raw clay or mix it in before you bake |
| Acrylic paint | Allow the clay to dry for at least two or three full days before you bake | Various grades to choose from; these are available and easy on the pocket, though artist-quality paint can become pricey | Can be used on polymer clay either before or after it has been baked |
| Alcohol-based ink | Allow the alcohol to evaporate for three to four hours before baking | Readily available but quite expensive | Can be used successfully before or after the baking process |
| Oil paint | Paint your clay, then bake it again for a short time to set the color | Quite expensive | Can be used for painting cured clay or mixed into the polymer before baking |

How to Paint Polymer Clay Successfully

The best polymer clay paint to use for your projects is acrylic paint, the reason being that it is water-soluble and easy to use. Higher-quality artist's acrylics do the best job and offer a better-quality finish, but average-quality crafting acrylics can also be used with good results.
Avoid using products such as nail varnish as a coloring agent or paint medium for polymer clay, as it corrodes and melts the clay, causing it to become sticky. Test your chosen paint method on a small amount of clay first, as some polymer brands out there are not suitable for painting. Sculpey is a well-known and excellent brand to try out.

Once you have applied your best polymer clay paint to your project, you can also use a sealant. If your clay is already colored, then a sealer is not required, as the clay already has durability. A seal will provide extra shine to the clay and is recommended should you have included additional materials or painted the polymer. Sealant options include waxes, epoxy, acrylic paint sealers, and varnishes. To create a sheen on a pre-colored polymer without using sealants, you need only sand the clay and then rub it with a soft, dry cloth.

Do You Paint Polymer Clay Before or After Baking It in the Oven?

Many people ask the question: do you paint polymer clay before or after baking it in the oven? Either method works, though it seems most popular to paint the clay once it is baked. The reason for this is that there is often residual moisture in the paint, which can cause bubbling or imperfections in the baked polymer. Should you decide to paint before baking your clay, allow the item to dry for two or three days to ensure that the paint is completely dry and contains no moisture.

Painting Polymer Before Baking

In the baking process, the clay does not change size or shape in any way, thus painting the item before baking is perfectly fine. In fact, the paint and clay may bond better before oven baking. The best advice is to do a small test of your desired method first; this will give you an idea of how it will turn out. When heated, certain paints may also change color, which is another reason to do a test to ensure that you do not end up with an odd array of colors. Make sure that you adhere to any instructions on the products that you use.

Painting unbaked clay is a simple process: first mold your piece, proceed with painting the desired effects, and then bake the item. Be aware of the oven temperature to ensure that the clay does not burn, and keep an eye on the clay in the oven just to be sure.

Painting Polymer Clay After Baking

Once your clay has been baked in an oven, you can still easily add some finishing touches to your project. The process is easy to follow with a little knowledge of suitable products, and this will round off your stunning creations perfectly.

- First of all, form your desired piece from your polymer clay.
- Make sure that the oven is preheated to the temperature recommended on the clay packaging (typically in the range of 230 to 275 degrees Fahrenheit, depending on the brand).
- Consult the baking time recommendations on the clay packaging; these may differ depending on the brand. Then bake your polymer clay.
- Remove from the oven and allow the clay to cool before touching or working with the item. If the clay is hot when you work with it, any paint will not stick to the surface.
- Using a paintbrush, coat the clay with two thin applications of glaze, allowing a drying time of at least one hour between each sealant layer. Allow a further three-hour period following this to ensure that the glaze is completely dry and set before you apply any paint.
- Carefully sand the clay surface using medium-grit sandpaper; this will slightly roughen it and ensure that the paint adheres properly. Sand sparingly so as not to sand away the glaze that you have applied.
- Gently wipe the clay to get rid of any dust particles left behind by the sanding process.
- Begin using acrylic paint on the polymer clay for your first layer.
- Paint in thin coats, layering as you go; this will ensure that you do not leave brush marks.
- Once again, as with the glaze, allow a drying time of about two hours between layers and before you handle the clay.

Sealing Your Polymer Clay

The Sculpey Gloss Glaze offered by Polyform assists with sealing and provides a gloss finish once your polymer clay has been baked and sufficiently cooled down. This glaze is suitable for use with a variety of different paints such as oils, inks, and water-based acrylics. Should you choose not to make use of a glaze, simply use the sanding method described earlier. Once again, be careful of sanding too much, as intricate detail can be lost. Should your work be very fine and delicate, it would be worth looking into different types and brands of polymer in order to source the most suitable clay that will not require any sanding.

Once the clay is baked and cooled, the paint may fail to adhere to the surface; this can happen if a thin layer of oil is creating a barrier between the polymer and the paint. This is easily remedied with a little rubbing alcohol on a cloth: gently wipe the surface to remove any oily residue. Once the alcohol has evaporated, you can begin painting, even without any glazing or use of the sandpaper.

Experimenting with Various Clay Painting Effects

You can have lots of fun and be very creative and imaginative with polymer clay. You can create different surface effects and give the illusion of glass, ceramics, and even metals. You can also add elements to the polymer to incorporate mixed materials such as glass beads and glitter. Let us start by exploring various paint effect techniques for polymer clay.

- With a light brush, you can add a little metallic color or powder to the polymer, then bake the clay and use a sealer.
- If you paint a raised embellishment or area, it will be highlighted above the plain surrounding area.
- You can create an antiqued impression by painting an indented area and then removing the excess paint; some of the color will remain in the creases of the pattern.
- To create a watercolor impression, use thin acrylic paint on a wet sponge; this works well on cured, white polymer.
- To create a crackled surface, paint a flattened portion of the unbaked polymer and allow it to dry. Once dried, put the clay through a pasta machine or similar to crack the acrylic paint.
- Interesting embellishments can be added with techniques such as dotting, which is simply creating images using painted dots, either raised for texture or flat. You can also do swirling, imprinting, and accentuating.

Possible Problems with Painting Clay

With all the techniques, products, and brands available for painting clay, a few tricky obstacles can unfortunately also come up. You may just need to try different methods and experiment as you go to find the best option for your specific idea. In the meantime, below are a few obstacles that others have faced.

Remember that a medium that includes dye as a stain will not be lightfast, so the color will fade with age. The color in paint is far more durable and lasts longer; however, it will also begin to fade, particularly in a sunny spot.
Many types of paints are primarily designed for use on paper or variations of paper, such as canvas; therefore, if they are used on different mediums, a few issues may occur. Polymer clays are manufactured from vinyl that has been plasticized and are, therefore, a completely different material. The plastic in the clay may react with acrylics, rendering them soft and sticky. You also need to be aware that there are various different brands of polymer and paints on the market. Some research and experimentation are required to see which clay and paint varieties are compatible. Should you find yourself with sticky paint, it can gently be removed with crafting alcohol, or else a sealer can be applied to try to conceal the problem.

Cracks, Bubbles, and Peeling

Polymer clay is non-absorbent; therefore, paint does not soak into this material as it would into paper or canvas. The paint dries on the surface of the polymer, and with time it may crack and start to peel. Some paints are better suited to maintaining surface adhesion on certain types of clay. To help improve cohesion, you can gently sand the clay to create a slightly rough surface. As to whether you should paint polymer clay before or after baking, in this regard it may indeed prove better to paint the raw clay and then bake it, as the paint should adhere more effectively.

When using water-soluble paints on raw clay, the paint may well bubble when heated due to the moisture content in the applied paint. To help prevent this, ensure that your piece has dried for two or three days before you bake it.

Paints have different viscosities, and therefore their coverage ability will differ on your project's surface. Depending on whether the paint is thicker or thinner, you may require a single coat or a few coats to achieve the desired effect. For ideal surface coverage and color, you should aim for well-made products with a good proportion of pigment. That said, the higher the level of pigment, the more you will pay. Artist-level acrylics, for example, will be more expensive than simple crafter's acrylics. Crafter's acrylics may be used, but they will require several coats to achieve the desired color density and coverage.

Valuable Tips for Painting Clay

Using polymer as an art medium can be very creative and inspiring, lending itself to anything from sculpture to beads, buttons, bowls, and anything else you can imagine. As we know, a few minor issues may come up when working with clay and paint; below are some tips and tricks to help you avoid any unwanted problems.

- To soften unbaked clay, knead it and pull it to make it more flexible for baking.
- To help your paint adhere better to the clay surface, try gently sanding your clay.
- For a longer-lasting and good-quality paint job, try double baking your piece.
- Ensure that your piece is cooled completely after it is baked. If you paint the clay while it is still warm, the paint may not stick to the surface.
- Choose a quality acrylic paint that has a smooth consistency and will not fade.
- This type of clay should not come into contact with any surface that may be used for food. It is best to bake the polymer on a tray or baking sheet lined with tin foil.
- Your clay will need to be absolutely dry before you bake it in the oven. To test if it is ready for baking, feel the clay with your hand. If the clay still feels very cold, it will require more time to dry.
- Avoid dramatic temperature fluctuations when dealing with baked clay.
  Allow your clay piece to cool inside the oven when it is turned off, leaving the door open. By doing this, the cooling process is slow and you will avoid cracks.
- Use a quality paintbrush so as not to shed bristles or leave brush strokes.

Crafting with polymer clay is both satisfying and relatively straightforward. With some simple tools and oodles of creativity, you can create beautiful and striking pieces of painted polymer clay art. Remember to always use the best quality paint you can afford, and follow our tips and tricks to make the painting process as smooth as possible.

Frequently Asked Questions

Can I Use Food Coloring in Polymer Clay?

You can use food coloring in clay; however, it is always best to check the ingredients in the coloring, as some additives may affect the curing process. You can also test this method on a small amount of clay to see what happens. Paint-based methods are usually more popular.

Do You Paint Polymer Clay Before or After Baking It in the Oven?

You can effectively use both methods with acrylic colors. When painting unbaked clay, be sure that the piece is thoroughly dry before putting it in the oven. Painting after baking helps the paint adhere to the surface effectively.

Can I Color Clay Myself?

There are a variety of packs, kits, and individually colored polymers on the market, but you can also easily make your own colored clay by using ink, acrylics, metallic powders, chalks, or flecks of wax crayon.

Must I Use a Sealer or Varnish?

You have the option of using various types of sealants for a smooth, shiny finish. These may include waxes, epoxy, glazes, varnishes, or acrylic sealers. You do not need to use a sealer; instead, you can lightly sand the clay and buff it with a soft, dry cloth.
A barbecue grill is a device for cooking food by applying heat directly from below. There are several varieties of such grills, with most falling into one of two categories: gas-fueled and charcoal. Among barbecue grillers, there is great debate over the merits of charcoal versus gas as the cooking method.

History in the USA

Grilling has existed in the Americas since pre-colonial times. The Arawak people used a wooden structure to roast meat on, which was called barbacoa in Spanish. For some time, the word referred to the wooden structure and not the act of grilling, but it was eventually applied to the pit-style cooking techniques used in the Southeastern United States. Originally used to slow-cook hogs, different ways of preparing the food led to regional variations. In time, other foods were cooked in a similar fashion, with hamburgers and hot dogs being recent additions.

LazyMan Model AP, the world's first portable gas grill, photographed in the summer of 1954.

E.G. Kingsford was the inventor of the modern charcoal briquette. Kingsford was a relative of Henry Ford, who charged him with establishing a Ford auto parts plant and sawmill in northern Michigan. The local community swelled and was named Kingsford in his honor. Kingsford noticed that Ford's Model T production lines were generating a large amount of wood scraps that were simply being discarded. He suggested to Ford that a charcoal manufacturing facility be established next to the assembly line to process and sell charcoal, under the Ford name, in Ford dealerships. Several years after Kingsford's death, the chemical company was sold to local businessmen and renamed the Kingsford Chemical Co.

George Stephen created the hemispherical grill design, jokingly called "Sputnik" by Stephen's neighbors. Stephen, a welder, worked for Weber Brothers Metal Works, a metal fabrication shop primarily concerned with welding steel spheres together to make buoys. Tired of wind blowing ash onto his food when he grilled, Stephen took the lower half of a buoy, welded three steel legs onto it, and fabricated a shallower hemisphere for use as a lid. He took the results home and, following some initial success, started the Weber-Stephen Products Co.

The outdoor gas grill was invented in the early 1950s by Don McGlaughlin, owner of the Chicago Combustion Corp., known today as LazyMan. McGlaughlin invented the first built-in grill, developed from the successful gas broiler called BROILBURGER. These first LazyMan grills were marketed as "open fire charcoal type gas broilers" featuring "permanent coals", otherwise known as lava rock. In the 1950s most residential households did not have a barbecue, so the term "broiler" was used to market to commercial establishments. The gas open-broiler design was later adapted into the first portable gas grill in 1954 by Chicago Combustion Corp. as the Model AP. McGlaughlin's portable design featured the first use of the 20-lb propane cylinders that previously were used exclusively by plumbers as a fuel source.

In 1958, Phillip Arnold was a sales engineer for the natural-gas public utility in Milwaukee. He took a challenge from his boss to develop a product for the home that could burn natural gas. Thinking that natural gas could be used to light a barbecue grill, Arnold experimented with an oil drum (cut down to 9 inches tall), some lava rock from his garden, and a log lighter from his fireplace. To perfect the technology, he teamed up with Walter Kozoiol, who was manufacturing gas lighting under the name of Charmglow at the time.
In 1966, Arnold moved from Wisconsin to southern California to open AEI Corp. to distribute the grills.

Gas grills

A single-burner propane gas grill that conforms to the cart grill design common among gas grills.

Gas-fueled grills typically use propane (LP) or natural gas (NG) as their fuel source, with the gas flame either cooking food directly or heating grilling elements which in turn radiate the heat necessary to cook food. Gas grills are available in sizes ranging from small, single-steak grills up to large, industrial-sized restaurant grills that are able to cook enough meat to feed a hundred or more people. Gas grills are designed for either LP or NG, although it is possible to convert a grill from one gas source to the other.

The majority of gas grills follow the cart grill design concept: the grill unit itself is attached to a wheeled frame that holds the fuel tank. The wheeled frame may also support side tables and other features.

A recent trend in gas grills is for manufacturers to add an infrared radiant burner to the back of the grill enclosure. This radiant burner provides even heat across the burner and is intended for use with a horizontal rotisserie. A meat item (whole chicken, beef roast, pork loin roast) is placed on a metal skewer that is rotated by an electric motor. Smaller cuts of meat can be grilled in this manner using a round metal basket that slips over the metal skewer.

Another type of gas grill gaining popularity is called a flattop grill. According to Hearth and Home magazine, flattop grills, "on which food cooks on a griddle-like surface and is not exposed to an open flame at all", are an emerging trend in the outdoor grilling market.

A small metal "smoker box" containing wood chips may be used on a gas grill to give a smoky flavor to the grilled foods. Barbecue purists, though, would argue that to get a true smoky flavor (and smoke ring) you have to cook low and slow, indirectly, using wood or charcoal. According to The Gas Grill Review and Ratings Guide, gas grills are difficult to maintain at the low temperatures required (~225–250 °F), especially for extended periods.

Infrared grills

An ignited infrared grill burner, showing only the visible light spectrum.

Infrared grills work by igniting propane or natural gas to superheat a ceramic tile, causing it to emit infrared radiation by which the food is cooked. The thermal radiation is generated when heat from the movement of charged particles within atoms is converted to electromagnetic radiation in the infrared frequency range. The benefits are that heat is distributed uniformly across the cooking surface and that temperatures reach over 500 °C (900 °F), allowing users to sear items quickly.

Infrared cooking differs from other forms of grilling, which use hot air to cook the food. Instead of heating the air, infrared radiation heats the food directly. The benefits of this are a reduction in pre-heat time and less drying of the food. Grilling enthusiasts claim food cooked on an infrared grill tastes similar to food from char-grills. Proponents say that food cooked on infrared grills seems juicier. Infrared grills also have the advantages of instant ignition, better heat control, and a uniform heat source. This technology was previously patented, but the patents expired in 2000 and more companies have since started offering infrared grills at lower prices.

Charcoal grills

Charcoal grills use either charcoal briquettes or all-natural lump charcoal as their fuel source.
The charcoal, when burned, transforms into embers radiating the heat necessary to cook food. There is contention among grilling enthusiasts over which type of charcoal is best for grilling. Users of charcoal briquettes emphasize the uniformity in size, burn rate, heat creation, and quality exemplified by briquettes. Users of all-natural lump charcoal emphasize its subtle smoky aromas, high heat production, and lack of the binders and fillers often present in briquettes.

There are many different charcoal grill configurations. Some grills are square, round, or rectangular, some have lids while others do not, and some may or may not have a venting system for heat control. The majority of charcoal grills, however, fall into the following categories.

Brazier grills

A brazier grill loaded with fresh charcoal briquettes.

The simplest and most inexpensive of charcoal grills, the brazier grill is made of wire and sheet metal and composed of a cooking grid placed over a charcoal pan. Usually the grill is supported by legs attached to the charcoal pan. The brazier grill does not have a lid or venting system. Heat is adjusted by moving the cooking grid up or down over the charcoal pan. Even after George Stephen invented the kettle grill in the early 1950s, the brazier grill remained a dominant charcoal grill type for a number of years. Brazier grills are available at most discount department stores during the summer.

Pellet grills

Pellet grills are fueled by compressed hardwood pellets (sawdust compressed with vegetable oil or water at approximately 10,000 psi) that are loaded into a hopper and fed into a firebox at the bottom of the grill via an electrically powered auger controlled by a thermostat. The pellets are lit by an electric igniter rod, and they turn into coals in the firebox once they burn down. Most pellet grills are barrel-shaped with a square hopper box at the end or side.

The advantage of a pellet grill is that it can be set to a "smoke" mode, where it burns at 100–150 °F (38–66 °C) for slow smoking. It can be set at 180–300 °F (82–149 °C) to slow-cook or barbecue meats (like brisket, ribs, and hams), or cranked up to a maximum of 450–500 °F (232–260 °C) for what would be considered low-temperature grilling. It is one of the few "grills" that is actually a great smoker, a fantastic BBQ, and a decent grill. Critics argue that a good grill should be able to exceed 500 °F (260 °C) to sear the meat.

The best pellet grills can hold steady temperatures for more than ten hours. Many use solid diffuser plates between the firebox and grill to provide even temperature distribution. Most pellet grills burn 1/2 to 1 pound of pellets per hour at 180–250 °F (82–121 °C), depending on the "hardness" of the wood, the ambient temperature, and how often the lid is opened. Most hoppers hold 10 to 20 pounds of wood pellets. Pellets in a wide variety of woods (hickory, oak, maple, apple, alder, mesquite, grapevine, etc.) can be used or mixed for the desired smoke flavoring. Pellet technology is widely used in home heating in certain parts of North America, where softer woods including pine are often used. Pellets for home heating are not cooking grade and should not be used in pellet grills.

Square charcoal grills

The square charcoal grill is a hybrid of the brazier and the kettle grill. It has a shallow pan like the brazier and normally a simple method of adjusting the heat, if any. However, it has a lid like a kettle grill and basic adjustable vents.
The square charcoal grill is, as expected, priced between the brazier and the kettle grill, with the most basic models priced around the same as the most expensive braziers and the most expensive models competing with basic kettle grills. These grills are available at discount stores and have largely displaced most larger braziers. Square charcoal grills almost exclusively have four legs, with two wheels on the back so the grill can be tilted back using the lid handles and rolled. More expensive examples have baskets and shelves mounted on the grill.

Hibachi grills

Various traditional Japanese shichirin (Tokyo Egota), made from diatomite.

The traditional Japanese hibachi is a heating device and is not usually used for cooking. In English, however, "hibachi" often refers to small cooking grills typically made of aluminium or cast iron, with the latter generally being of higher quality. Owing to their small size, hibachi grills are popular as a form of portable barbecue. They resemble traditional Japanese charcoal-heated cooking utensils called shichirin. Alternatively, "hibachi-style" is often used in the U.S. as a term for Japanese teppanyaki cooking, in which gas-heated hotplates are integrated into tables around which many people (often multiple parties) can sit and eat at once. The chef performs the cooking in front of the diners, typically with theatrical flair, such as lighting a volcano-shaped stack of raw onion hoops on fire.

In its most common form, the hibachi is an inexpensive grill made of either sheet steel or cast iron and composed of a charcoal pan and two small, independent cooking grids. Like the brazier grill, heat is adjusted by moving the cooking grids up and down. Also like the brazier grill, the hibachi does not have a lid, although some designs have venting systems for heat control. The hibachi is a good choice for those who do not have much space for a larger grill, or those who wish to take their grill traveling. Binchō-tan is the most suitable fuel for a shichirin.

Kettle grills

Two charcoal kettle grills: a small 18 in (460 mm) tabletop model and a freestanding 22.5 in (570 mm) model.

The kettle grill is considered the classic American grill design. The original and often-copied Weber kettle grill was invented in 1951 by George Stephen. It has remained one of the most commercially successful charcoal grill designs to date. Smaller and more portable versions exist, such as the Weber Smokey Joe.

The kettle grill is composed of a lid, cooking grid, charcoal grid, lower chamber, venting system, and legs. Some models include an ash catcher pan and wheels. The lower chamber that holds the charcoal is shaped like a kettle, giving the grill its name.

The key to the kettle grill's cooking abilities is its shape, which distributes heat more evenly. When the lid is placed on the grill, it prevents flare-ups from dripping grease and allows heat to circulate around the food as it cooks. It also holds in flavor-enhancing smoke produced by the dripping grease or by smoking wood added to the charcoal fire. The Weber kettle grill has bottom vents that also dispatch ash into a pan below the bowl. The kettle design also allows the griller to configure the grill for indirect cooking (or barbecuing).
For indirect cooking, charcoal is piled on one or both sides of the lower chamber and a water pan is placed in the empty space to one side of or between the charcoal. Food is then placed over the water pan for cooking. The venting system consists of one or more vents in the bottom of the lower chamber and one or more vents in the top of the lid. Normally, the lower vent(s) are left open until cooking is complete, and the vent(s) in the lid are adjusted to control airflow. Restricted airflow means a lower cooking temperature and slower burning of the charcoal.

Cart grills

The charcoal cart grill is quite similar in appearance to a typical gas grill. The cart grill is usually rectangular in design; has a hinged lid, cooking grid, and charcoal grid; and is mounted to a cart with wheels and side tables. Most cart grills have a way to adjust heat, whether by moving the cooking surface up, moving the charcoal pan down, venting, or a combination of the three. Cart grills often have an ash collection drawer for easy removal of ashes while cooking. Their rectangular design makes them usable for indirect cooking as well. Charcoal cart grills, with all their features, can make charcoal grilling nearly as convenient as gas grilling. They can also be quite expensive.

Barrel grills

In its most primitive form, the barrel grill is nothing more than a 55 US gallon (210 l; 46 imp gal) steel barrel sliced in half lengthwise. Hinges are attached so the top half forms the lid and the bottom half forms the charcoal chamber. Vents are cut into the top and bottom for airflow control, and a chimney is normally attached to the lid. Charcoal grids and cooking grids are installed in the bottom half of the grill, and legs are attached. Like kettle grills, barrel grills work well for grilling as well as true barbecuing. For barbecuing, lit charcoal is piled at one end of the barrel and the food to be cooked is placed at the other. With the lid closed, heat can then be controlled with the vents. Fancier designs available in stores may have other features, but the same basic design does not change.

Ceramic cookers

The ceramic cooker design has been around for roughly 3,000 years. The shichirin, a Japanese grill traditionally of ceramic construction, has existed in its current form since the Edo period; however, more recent designs have been influenced by the mushikamado, now more commonly referred to as a kamado. Recently, the kamado ceramic cooker design has been made popular by the Grill Dome, Komodo Kamado, Kamado Joe, The Big Green Egg, and Primo. The ceramic cooker is more versatile than the kettle grill, as the ceramic chamber retains heat and moisture more efficiently. Ceramic cookers are equally adept at grilling, smoking, and barbecuing foods.

Tandoors

A tandoor is used for cooking certain types of Iranian, Indian, and Pakistani food, such as tandoori chicken and naan. In a tandoor, the wood fire is kept in the bottom of the oven and the food to be cooked is put on long skewers and inserted into the oven through an opening in the top, so the meat items sit above the coals of the fire. This method of cooking involves both grilling and oven cooking, as the meat sees both high direct infrared heat and the heat of the air in the oven. Tandoor ovens often operate at temperatures above 500 °F (260 °C) and cook meat items very quickly.

Portable charcoal grills

Portable charcoal grills are small but convenient for traveling, picnicking, and camping. The model pictured is loaded with lump charcoal; its legs fold up and lock onto the lid so it can be carried by the lid handle.
The portable charcoal grill normally falls into either the brazier or kettle grill category, though some are rectangular in shape. A portable charcoal grill is usually quite compact and has features that make it easier to transport, making it a popular grill for tailgating. Often the legs fold up and lock into place so the grill will fit into a car trunk more easily. Most portable charcoal grills have venting, legs, and lids, though some models do not have lids (making them, technically, braziers). There are also grills designed without venting to prevent ash fallout, for use in locations where ash may damage ground surfaces. Some portable grills are designed to replicate the function of a larger, more traditional grill or brazier and may include spit roasting as well as a hood and additional grill areas under the hood.

Hybrid grills

A hybrid grill is a grill used for outdoor cooking with charcoal and natural gas or liquid propane, and it can cook in the same manner as a traditional outdoor gas grill. The manufacturers claim that it combines the convenience of an outdoor gas grill with the flavor and cooking techniques of a charcoal and wood grill. In addition to providing the cooking heat, the gas burners in a hybrid grill can be used to quickly start a charcoal or wood fire, or to extend the length of a charcoal or wood cooking session. Some of the newer hybrid stoves cater more toward the emergency-preparedness and survivalist market, with the ability to use propane, charcoal, or wood. Generally, they have a propane burner that can be removed so charcoal or wood can be substituted as the fuel source. Many have features similar to the portable charcoal grill: a volcano-shaped cooking chamber for efficiency, the ability to be folded or collapsed for a smaller footprint, and a carrying case for easy portability.

Commercial barbecue grills

A commercial barbecue is typically larger in cooking capacity than traditional household grills, and it features a variety of accessories for added versatility. End users of commercial barbecue grills include for-profit operations such as restaurants, caterers, food vendors, and grilling operations at food fairs, golf tournaments, and other charity events, as well as competition cooks. The category lends itself to originality, and many commercial barbecue grills feature designs unique to their respective manufacturers.

Model Mobile-SLPX commercial barbecue grill.

Commercial barbecue grills can be stationary or transportable. An example of a stationary grill is a built-in pit grill, for indoor or outdoor use. Construction materials include brick, mortar, concrete, tile, and cast iron. Most commercial barbecue grills, however, are mobile, allowing the operator to take the grill wherever the job is. Transportable commercial barbecue grills include units with removable legs, grills that fold, and grills mounted entirely on trailers. Trailer-mounted commercial barbecue grills run the gamut from basic grill cooktops, to pit barbecue grills and smokers, to specialized roasting units that cook whole pigs, chicken, ribs, corn, and other vegetables.

Components and replacement parts

Many gas grill components can be replaced with new parts, adding to the useful life of the grill. Though charcoal grills can sometimes require new cooking grids and charcoal grates, gas grills are much more complex and require additional components such as burners, valves, and heat shields. A gas grill burner is the central source of heat for cooking food.
Gas grill burners are typically constructed of:

• stainless steel
• aluminized steel
• cast iron (occasionally porcelain-coated)

Burners are hollow, with gas inlet holes and outlet "ports". Each inlet has a separate control on the control panel of the grill. The most common type of gas grill burner is called an "H" burner and resembles the capital letter "H" turned on its side. Another popular shape is the oval; there are also "Figure 8", "Bowtie", and "Bar" burners. Other grills have a separate burner for each control. These burners can be referred to as "Pipe", "Tube", or "Rail" burners. They are mostly straight, since each is only required to heat one portion of the grill.

Gas is mixed with air in venturi tubes, or simply "venturis". Venturis can be permanently attached to the burner or removable. At the other end of the venturi is the gas valve, which is connected to the control knob on the front of the grill. A metal screen covers the fresh-air intake of each venturi to keep spiders from clogging the tube with their nests.

Cooking grids, also known as cooking grates, are the surface on which the food is cooked in a grill. They are typically made of:

• Stainless steel (usually the most expensive and longest-lasting option; may carry a lifetime warranty)
• Porcelain-coated cast iron (the next best option after stainless; usually thick and good for searing meat)
• Porcelain-coated steel (will typically last as long as porcelain-coated cast iron, but not as good for searing)
• Cast iron (more commonly used for charcoal grills; bare cast iron must be kept oiled to protect it from rusting)
• Chrome-plated steel (usually the least expensive and shortest-lasting material)

Cooking grids used over gas or charcoal barbecues allow fat and oil to drop between the grill bars, which can cause flare-ups in which flames burn and blacken food long before it is safely cooked. To reduce the occurrence of flare-ups, some barbecues may be fitted with plates, baffles, or other means to intercept the dripping flammable fluids.

Most high-end barbecue grills use stainless steel grates, but there is a health benefit to using bare cast iron grids. When cast iron is used to cook food containing high levels of acidity, such as lentils, tomatoes, lemon-based sauces, or marinades with a strong vinegar content, dietary iron intake is increased. According to the Centers for Disease Control and Prevention, iron deficiency is a particularly important issue for pregnant women and young children. The longer and hotter the grilling, the more iron is infused into the food. This process can only take place with plain cast iron grids, without any form of porcelain or other coating. The downside of bare cast iron is that it sticks to food and can be hard to clean.

Rock grates are placed directly above the burner and are designed to hold lava rock or ceramic briquettes. These materials serve a dual purpose: they protect the burner from drippings, which can accelerate its deterioration, and they disperse the heat from the burner more evenly throughout the grill.

Heat shields are also known as burner shields, heat plates, heat tents, radiation shields, or heat angles. They serve the same purpose as a rock grate and rock, protecting the burner from corrosive meat drippings and dispersing heat. They are more common in newer grills. Heat shields are lighter, easier to replace, and harbor less bacteria than rocks.
Like lava rock or ceramic briquettes, heat shields also vaporize the meat drippings and "infuse" the meat with more flavor.

Valves can wear out or become rusted and too difficult to operate, thus requiring replacement. A valve is unlike a burner in that a replacement usually must exactly match the original in order to fit properly. Therefore, many grills are disposed of when valves fail, due to a lack of available replacements. If a valve seems to be moving properly but no gas is getting to the burner, the most common cause is debris in the venturi. This impediment can be cleared using a long, flexible object.

A barbecue cover is a textile product specially designed to fit over a grill so as to protect it from outdoor elements such as sun, wind, rain, and snow, and from outdoor contaminants such as dust, pollution, and bird droppings. Barbecue covers are commonly made with a vinyl outer shell and a heat-resistant inner lining, as well as adjustable straps to secure the cover in windy conditions. The cover may have a polyester surface, often with a polyurethane coating on the outer surface, and a polyvinyl chloride liner.

While live-fire cooking is difficult indoors without heavy-duty ventilation, it is possible to simulate some of the effects of a live-fire grill with indoor equipment. The simplest design is known as a grill pan: a type of heavy frying pan with raised grill lines that hold the food off the floor of the pan and allow drippings to run off. Otherwise, a simple frying pan would do a reasonable job of grilling.
Red light therapy is a safe, natural treatment method that shines red and/or near-infrared (NIR) light onto the skin via light-emitting diode (LED) bulbs. As the light absorbs into the skin, it stimulates healing at a cellular level. This non-invasive and painless treatment is used for a variety of cosmetic and medical conditions, such as tissue repair, skin conditions, pain relief, and reducing inflammation. This article will discuss how red light therapy works and how it can benefit your health and well-being.

What Is LED Light Therapy?

Red light therapy uses devices fitted with specifically calibrated LED bulbs. These bulbs produce non-invasive wavelengths of red and/or near-infrared light to promote cellular health, which can be the key to relieving a range of conditions.

Wavelengths of light, which are measured in nanometers (nm), absorb to various depths in the body's tissues. Red light in the 620–660 nm range and NIR light in the 810–850 nm range are considered to be in the "therapeutic window" of light that delivers benefits without side effects. Red light therapy is safe to use, since it is non-invasive and the light does not generate heat in the cells. Red and NIR light photons have clinically demonstrated positive effects on the body's tissues, and no adverse side effects have been reported.

Is All Red Light Therapy the Same?

You may have heard red light therapy called other names, such as low-level light therapy (LLLT), low-power light therapy, or photobiomodulation. These are all names for the same type of treatment with devices that use LED bulbs. Several other terms are also frequently used in reference to red light therapy, including low-level laser therapy, soft laser therapy, low-power laser therapy, and cold laser therapy – all forms of red light therapy that use lasers rather than LED lights. Both lasers and LEDs are considered effective ways to deliver light to the body's tissues.

What's the difference? LED light panels deliver the same wavelengths, but to a larger area of the body, which makes LED light therapy more convenient than low-level laser therapy.

Are there any advantages to using LED lights over lasers? The main advantage of LED devices is the ability to treat larger areas of the body at the same time. A laser delivers its power output over a much narrower area than an LED array; the broader coverage of LED lights allows larger areas of the body to be treated. Also, multiple LED devices can be arranged into planar arrays, which is easier to do with red light therapy panels than with other forms of red light treatment. LED light produces a non-thermal emission of light, making it a safer option to use. With at-home panels, you can also avoid expensive in-clinic low-level laser therapy treatments, although clinical red light therapy treatment remains an effective option.

What Does Red Light Therapy Do?

The most common use for red light therapy is skin rejuvenation. The result of light therapy treatments is healthier skin cells, which can help restore a more youthful appearance by supporting the skin's natural cellular regeneration. But the treatment has many diverse applications that go far beyond skin rejuvenation. Studies have shown red light therapy to be an effective treatment for wound healing, chronic skin disorders, muscle recovery, low back pain, chronic pain from arthritis, neck pain, neuropathy, weight loss, and hair growth, among others.
The PlatinumLED Therapy Lights Learning Center page offers a wealth of information on the many applications of red and NIR light treatments.

How Does Red Light Therapy Work?

As mentioned earlier, different wavelengths of light have different effects on the body. There are four major ways the body can benefit from red light therapy.

Increased Cellular Energy Production

Inside cells are tiny organelles called mitochondria, which are responsible for producing cellular fuel known as adenosine triphosphate (ATP). Sometimes, for a multitude of reasons (aging, illness, lifestyle, etc.), the mitochondria can't produce enough ATP. When that happens, cells struggle to survive rather than being able to function optimally or effectively repair or regenerate themselves. This mitochondrial dysfunction, as it is known, can snowball, as depleted cells in one bodily system cause a ripple effect in other systems. Mitochondrial dysfunction is considered to be at the heart of most major diseases and disorders.

Red light therapy triggers a reaction that helps reverse this downward spiral. Light-sensitive molecules (chromophores) in the mitochondria are especially responsive to red and NIR light photons. This interaction excites the mitochondria and stimulates ATP production. Energized cells are happy cells. They perform better, repair themselves more efficiently, and replace themselves faster. As old, damaged cells are replaced, healthy new cells emerge. This is one of the reasons that red light treatments are so popular for improving skin complexion and speeding up wound healing.

Reduced Inflammation and Improved Circulation

Inflammation is one of the main contributing causes of mitochondrial dysfunction. Red light therapy has an astonishingly powerful effect on reducing inflammation in treated areas of the body. Red/NIR light treatments also increase blood circulation as well as lymph circulation in skin tissue and beneath the skin, bringing more oxygen and nutrients to the treatment area and removing waste and toxins.

Increased Collagen and Elastin Production

One of the key components of healthy skin is an ample supply of collagen, the protein that makes up about 80 percent of the skin's structure. Collagen production declines naturally with age. It is also influenced by inflammation, oxidative stress, chronic emotional stress, poor diet, too much sun exposure, and other factors that cause thin, crepey, sagging skin.

Red/NIR light stimulates collagen synthesis and helps promote the optimal organization of collagen proteins. This is most noticeable on delicate facial skin, where users report seeing significantly reduced fine lines and wrinkles after one to four months of regular use. An increase in intradermal collagen density is essential for reducing crepey skin and regaining firm, youthful skin. Yet collagen alone isn't the whole picture when it comes to skin rejuvenation. Red and NIR light treatments also promote the synthesis of elastin, the protein that gives skin its youthful elasticity, treating stretch marks as well.

The Benefits of Red Light Therapy

Now that you know how red light therapy works, let's look at what this natural treatment has been clinically shown to do.

The anti-aging properties of red light therapy are well-documented. In 2014, for example, German researchers conducted a study on the effects of red light therapy on aging, including fine lines, wrinkles, and skin roughness. A total of 128 participants completed the study.
Those who received red light treatments experienced significant improvement in skin feeling and appearance, including a noteworthy increase in intradermal collagen density and high patient satisfaction with the treatment.

Overall Skin Health

Chronic skin disorders have been shown to respond well to consistent red and NIR light treatments. One of the main benefits of red light therapy is reducing inflammation, which could be beneficial in treating inflammatory skin disorders such as eczema and psoriasis. One study with positive results involved 81 patients who suffered from atopic dermatitis, the most common form of eczema. The study found that 830 nm NIR light therapy effectively decreased itching and skin eruptions. This study used low-power laser therapy; more recent studies are increasingly using LED light therapy devices.

Faster Wound Healing

A series of studies compared the effects of red light on healing skin wounds, examining light delivered via light-emitting diode (LED) devices versus lasers. Both mechanisms delivered results including reduced inflammation, angiogenesis (the formation of new blood vessels), increased fibroblast proliferation (fibroblasts are the cells that produce collagen), increased collagen synthesis, and formation of granulation tissue, which is the first tissue you can see as a wound is closing and healing.

In the past, lasers were the only way to deliver light photons to cells, and many of the original studies used low-level laser therapy. Today, you can achieve similar results with a high-quality, powerful, medical-grade red light therapy device. LED lights are coming into the spotlight because their results can be more easily reproduced outside of a clinical setting.

Red light therapy supports tissue regeneration. When used immediately after injury, red light shows great potential for reducing the growth and appearance of scars. If a scar has already formed, regular use of red light can gradually soften the scar and reduce its appearance as new, normal, healthy skin cells work their way to the surface. Treatment with 633 nm light has been shown to inhibit the migration of the skin fibroblasts involved in skin fibrosis, which is excessive growth of scar tissue.

Better Muscle Performance and Faster Recovery After Workouts or Injury

Red light therapy can be used to precondition muscle tissue before a workout to enhance performance and to stimulate faster recovery of workout-stressed muscle tissue. A review of 46 studies involving 1,045 participants found that red/NIR light increased muscle mass and decreased inflammation. In fact, the benefits outlined in this review are significant enough that the review's authors questioned whether red light therapy offers an unfair advantage in athletic competition.

An exciting development in red and NIR light therapy is its potential to help you lose excess fat. A 2015 study by researchers from Brazil tested whether a combination of red light therapy and aerobic exercise could reduce obese women's chances of developing heart disease. The researchers gave 62 women an exercise regimen, and the participants were randomly assigned to receive red light therapy or a placebo for four months. At the conclusion of the study, the red light group had greater losses in waist circumference and other measures.

How does red light therapy help with fat reduction? Research has shown that red and NIR light cause temporary perforations in fat cells that allow lipids to leak out; the lipids are then naturally expelled by the body.
This shows potential for targeted fat reduction and body contouring. It does not mean, however, that red light therapy is a "miracle weight loss" approach, because there is no such thing. You can achieve the best results by combining LED light therapy with a healthy diet, exercise, and stress management techniques.

Hair growth is one of the most popular uses for red light therapy because results have been promising. One study, for instance, found significantly greater hair counts in men with androgenetic alopecia (male- and female-pattern hair loss) who self-treated with 655 nm red light once daily for 16 weeks. Red light stimulates the cells in dormant hair follicles and promotes two important phases of the hair growth cycle: anagen (the growth phase) and telogen (reentry from dormancy). At the same time, the treatment inhibits early transition to catagen (regression), keeping hair follicles in "production" mode longer.

Whether you're injured or looking to treat rheumatoid arthritis or osteoarthritis pain, red light therapy could help. A large, promising study on university athletes with a variety of injuries found faster return-to-play after 830 nm near-infrared LED treatment. In another double-blind study, elderly patients with degenerative knee osteoarthritis and similar pain levels were randomly split into groups receiving red light treatment, NIR light treatment, or a placebo. The patients self-applied the treatment to their knees for 15 minutes twice daily for 10 days. After the 10-day treatment, pain had been reduced in the red and infrared groups by more than 50 percent. The red and infrared groups also experienced functional improvement after treatment, with no improvement in the placebo group. And the red and infrared groups experienced longer intervals before needing additional pain relief treatments.

The Pros and Cons of Red Light Therapy

You may wonder if this all sounds too good to be true. It's perfectly natural to wonder, of course, so here are the pros and cons of this safe, all-natural, non-invasive treatment.

Red Light Therapy Pros

There are far too many positive aspects of red light therapy to list here, so here are the most important pros:

Red light therapy presents a wide range of benefits. Hundreds of peer-reviewed studies point to statistically significant improvements in the conditions discussed earlier, as well as many others. Check out our Learning Center page to discover the many uses for red light therapy on yourself – and even your pets.

Red light is considered a low-risk treatment. It will not cause burns, and there are no known side effects other than potential brief redness or tightness in the treated area in sensitive individuals, which is likely due to increased blood flow to the area.

You can self-treat at home. A large, powerful LED light therapy panel allows you to treat virtually any part of your body without having to visit a clinic. Since red light therapy is not (or only extremely rarely) covered by insurance, this is a potential way to complement medically prescribed treatments without additional ongoing costs.

Red Light Therapy Cons

Does red light therapy have any cons? When used as directed, it's arguably the easiest self-care practice there is. But here are a few points to ponder:

People with sensitive skin should start slowly. It can be easy to overdo LED light treatments because the therapy is painless and feels warm and soothing.
If you have sensitive skin, start slowly to avoid the temporary redness or tightness that may stem from increased blood circulation.

Using Red Light Therapy Panels

More usage is not necessarily better. Research has determined that three to 20 minutes is the optimal daily treatment time; extending a session beyond that will not yield faster or greater benefits. As easy and pleasant as the treatments are, always follow the recommendations. (A quick back-of-the-envelope dose calculation at the end of this article shows how session time translates into delivered energy.)

There's a huge difference in the quality and efficacy of light therapy devices. Always choose a high-quality, FDA-cleared, medical-grade device, which will deliver the energy needed for the light to absorb effectively into your tissues. Larger panels deliver much more energy than small devices – meaning more intense light and a greater coverage area. The intensity of LED lights is close to that of low-level laser therapy, but with more broadly dispersed light that reduces treatment time.

Go easy on the eyes. Although red light therapy has been used successfully for eye health, always wear proper eye protection to prevent eye strain and potential damage, especially from longer NIR wavelengths. Because NIR light is invisible, it may not seem intense, but it still affects the eyes.

Red Light Therapy vs. Other Types of Light Therapy

Every visible and invisible wavelength of light has different effects on the body. Depending on your treatment goals, you may be tempted to use a combination of light therapies, but you do not need to – and here is why. Red and near-infrared wavelengths are more easily absorbed by the parts of cells that respond to light energy, especially deeper in the body. Longer wavelengths matter: if light cannot absorb into the tissue to reach the cells, it will not have any benefits.

Ultraviolet (UV) Light Therapy

UV light is usually cast as the "bad" wavelengths – a leading cause of skin aging and a contributor to skin cancer.

- When applied by a medical professional, UV rays can be useful for treating a variety of chronic skin conditions. But …
- Along with being much safer than UV wavelengths, red and NIR light have successfully reduced flare-ups of psoriasis, eczema, and rosacea.

Blue Light Therapy

Blue light is commonly used to kill acne-causing bacteria.

- A popular approach among acne sufferers is to use blue and red light together to support collagen production and reduce inflammation.
- There is some risk with blue wavelengths, however; they have been linked to accelerated skin aging and to the eye condition known as macular degeneration.

For that reason, we recommend sticking to a gentle but rigorous skincare routine, skin-friendly dietary changes (such as eliminating added sugar), stress relief, exercise, and sun protection, along with skin-cell-stimulating red light therapy, for optimal skin complexion.

Green and Yellow Light Therapy

Green and yellow (amber) light is often used to tone down redness in the skin and reduce skin roughness – another use of low-level light therapy.

- While green and amber light can have some effect, their benefits are limited because of their short wavelengths and very shallow absorption depth.
- Red light penetrates deeper than green and yellow light because its wavelengths are much longer, so it can stimulate chromophores – the light-absorbing molecules within cells – well below the surface. Because of red light's cell-stimulating and anti-inflammatory effects, it can also help reduce redness.
NIR light absorbs into the layers beneath the skin, where it can address inflammation that may be contributing to facial redness.

- Combining red and near-infrared light in one treatment can provide both skin-deep and deep-tissue relief at different depths.

Red and Near-Infrared Light Therapy

The term "red light therapy" can mean using red wavelengths from 620 to 660 nm, NIR wavelengths from 810 to 850 nm, or both.

- Red wavelengths absorb into the outer layers of the skin, while NIR wavelengths can penetrate deeper into the body, including joints and bones.
- "Infrared light therapy" typically refers to near-infrared wavelengths in the low 800 nm range.
- Longer infrared wavelengths, such as those found in infrared saunas, should be used with caution, since overuse can potentially cause thermal damage to sensitive cells in the eyes and testicles. Shorter NIR wavelengths delivered by a non-thermal LED light do not significantly heat cells, which makes near-infrared light treatment much safer.

How Long Does Red Light Therapy Take to Work?

As beneficial as red light therapy is, it should not be considered a miracle treatment for any condition. Whether the benefits are skin-deep or deeper in the body, they often take time to manifest because red light therapy works at the cellular level: cells need time to repair themselves or replicate, and the combined effect of millions of cells takes time to show as visible improvement.

Consistency is key to seeing results. You may see some results very quickly, but the full effects may not be visible until three to six months of daily use. We recommend taking "before" and "after" photos to monitor your progress. Once you've achieved the results you want, you can continue with less frequent treatments to maintain them.

The Side Effects of Red Light Therapy

Scientists have found red light therapy to be free of side effects when used as directed, which includes wearing protective eyewear and not exceeding the recommended treatment times. In every study we've reviewed, patient satisfaction has been very high.

The Best Red Light Therapy Devices for At-Home Use

The red light therapy devices found at high-end clinics, spas, and gyms are the very same devices you can purchase from PlatinumLED Therapy Lights. These industry-leading devices share several key features:

- The highest irradiance (remember: light energy power) of any panels in their power class;
- A patent-pending combination of five of the most therapeutic red/NIR wavelengths in a ratio that delivers optimal results; and
- FDA clearance as a medical-grade device.

These criteria naturally weed out most of the products on the market. You will get much more value from high-powered panels that can treat your body literally from head to toe.

A Natural and Easy Way to Greater Well-Being

To recap, red light therapy is a natural, non-invasive treatment that stimulates a host of beneficial biological effects in the body. It is used for anti-aging, pain relief, muscle performance and recovery, skin health, and much more. Take a closer look at the best red light therapy devices on the consumer market for affordable, at-home treatment.
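Following up on the treatment-time note above, here is a minimal sketch of how session time translates into delivered light energy (fluence). It assumes only the standard energy = power x time relation; the 100 mW/cm² irradiance figure is purely illustrative and not a specification for any particular device.

```python
# Minimal sketch: converting irradiance and session time into a light dose.
# Assumes the standard relation dose (J/cm^2) = irradiance (W/cm^2) x time (s).
# The 100 mW/cm^2 figure below is illustrative only, not a device spec.

def session_dose_j_per_cm2(irradiance_mw_per_cm2: float, minutes: float) -> float:
    """Energy delivered per square centimetre of skin in one session."""
    watts_per_cm2 = irradiance_mw_per_cm2 / 1000.0  # mW -> W
    return watts_per_cm2 * minutes * 60.0           # W x s = J

# The 3-20 minute range discussed above, for a hypothetical 100 mW/cm^2 panel:
for minutes in (3, 10, 20):
    dose = session_dose_j_per_cm2(100.0, minutes)
    print(f"{minutes:>2} min -> {dose:5.0f} J/cm^2")
```

Doubling the session time simply doubles the delivered energy; it does not change which wavelengths reach which depth, which is one intuition for why longer sessions stop adding benefit.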
The Hidden History of the Underground Railroad: An Interview with Eric Foner

In pre-Civil War America, African American slaves escaped bondage for freedom at great personal risk, often with the assistance of sympathetic black and white citizens. This interracial endeavor to aid runaway slaves was referred to as the Underground Railroad, and it infuriated Southern slaveholders, who saw it as illegally depriving them of their property rights.

For decades, historians have offered contrasting opinions on the legendary Underground Railroad, ranging from exaggerated accounts of an extensive, well-organized network that supported fugitive slaves to the view that no such effort ever existed. Renowned American historian Eric Foner demystifies and clarifies the story of the Underground Railroad in his new book Gateway to Freedom: The Hidden History of the Underground Railroad (W.W. Norton). His new study is based on extensive archival research, including his unearthing of a detailed record of fugitive slave accounts kept by New York abolitionist journalist Sydney Howard Gay. Gay's records revealed the human cost of slavery: fugitives fleeing the South reported brutal physical abuse, the ever-present prospect of being sold, and the fear of losing all family ties.

As Professor Foner recounts, New York in the decade before the Civil War was extremely pro-Southern; local officials upheld fugitive slave laws and cooperated in the return of runaway slaves to their owners. Moreover, slave hunters prowled the streets of Manhattan, seizing blacks and selling them or sending them back to the South. Professor Foner also explains how the activism and resistance of abolitionists and fugitive slaves became a serious irritant in the South and a major cause of the Civil War.

Professor Foner's book on the Underground Railroad has been praised for its extensive research, deft storytelling, and vivid profiles of fugitive slaves and black and white abolitionists. Historian David W. Blight commented: "Making brilliant use of an extraordinary, little-known document, Foner, with his customary clarity, tells the enlightening story of the thousands of fugitive slaves who journeyed to freedom along the eastern corridor of the United States. Many stories of individual courage illuminate a network of operatives both formal and informal that played a powerful role in causing sectional conflict and the Civil War." And historian James Oakes wrote: "Gateway to Freedom liberates the history of the underground railroad from the twin plagues of mythology and cynicism. The big picture is here, along with telling details from previously untapped sources. With lucid prose and careful analysis, Eric Foner tells a story that is at once unsparing and inspiring."

Eric Foner is DeWitt Clinton Professor of History at Columbia University. In his teaching and scholarship, Foner focuses on the Civil War and Reconstruction, slavery, and nineteenth-century America. He has served as president of the Organization of American Historians, the American Historical Association, and the Society of American Historians.
Some of his best-known books include Reconstruction: America's Unfinished Revolution, 1863-1877, which won the Bancroft, Parkman, and Los Angeles Times Book Prizes; The Fiery Trial: Abraham Lincoln and American Slavery, winner, among other awards, of the Bancroft Prize, the Pulitzer Prize for History, and the 2011 Lincoln Prize; as well as Free Soil, Free Labor, Free Men: The Ideology of the Republican Party Before the Civil War; Tom Paine and Revolutionary America; Nothing But Freedom: Emancipation and Its Legacy; and Who Owns History? Rethinking the Past in a Changing World.

Professor Foner has also been the co-curator of prize-winning exhibitions on American history, including A House Divided: America in the Age of Lincoln for the Chicago Historical Society. He has won numerous awards for teaching and scholarship, and in 2014 he was awarded the Gold Medal by the National Institute of Social Sciences. He is an elected fellow of the American Academy of Arts and Sciences and the British Academy. He serves on the editorial boards of Past and Present and The Nation, has written for the New York Times, Washington Post, Los Angeles Times, London Review of Books, and many other publications, and has appeared on numerous television and radio shows.

Professor Foner recently talked about his new book by telephone from his office in New York.

Robin Lindley: How did you come to write your new book on the Underground Railroad? I sense there's a story behind it.

Professor Eric Foner: There is. I allude to it in the acknowledgements. Generally, when I work on a book of history, as with most historians, I think of a historical question, and then what sources I can consult to deal with it. This book worked the opposite way. Out of serendipity, a student of mine, an undergraduate history major at Columbia who also was employed as our dog walker, one day said she was working on a senior thesis about this abolitionist journalist, Sydney Howard Gay. She was interested in his journalistic career. There's a very big collection [on Gay in the Columbia Library], and she said, "In Box 72, there's this 'Record of Fugitives.' I'm not sure what it is. It's not relevant to me, but you might find it interesting with your work on slavery."

I filed that in the back of my mind. I was working on my Lincoln book then. Eventually, I was up there and said, "Let me look at Box 72." They brought out two small notebooks, little volumes, in which Sydney Howard Gay, for two years, 1855 and 1856, recorded the experiences of over 200 fugitive slaves who passed through New York City. He was very active in the Underground Railroad and was also a journalist, so he was interested in their stories. This was remarkable.

Robin Lindley: It must have been exciting to first see the Gay material.

Professor Eric Foner: It was. I was astonished when I started reading it. You know, you go through manuscript collections and other sources, and it can be tedious, just turning pages and not finding much. But here you have something I just stumbled upon and, as soon as I began reading it, my eyes popped open and I said this is really remarkable. I'd never seen any document like this and, moreover, I'd never seen this document quoted or cited anywhere. It was not cataloged in the Columbia Library, so there was no way of knowing it was there unless you happened upon it. It got me interested because there's very little information on the Underground Railroad in New York City.
It was a fairly pro-Southern town, closely connected to the cotton trade and the Southern economy in general. Bankers and merchants were all plugged into the South. It was not an abolitionist center, and there were very few abolitionists in New York. I got interested in how this guy operated and in the people mentioned [in Gay's notebooks], so I worked outward from these documents. I didn't really know where I was going, but the more research I did, the more I filled in these pieces. I found it to be a very interesting story with all of these human elements of the experiences of 200 or so fugitive slaves. It began accidentally, but grew into this new book.

Robin Lindley: It seems you've uncovered a great deal of previously unpublished detail on the Underground Railroad. You also correct some popular misconceptions about it. What are a few things you've learned that you'd like people to know about the Underground Railroad?

Professor Eric Foner: Everybody has heard the name of the Underground Railroad. There are a number of misconceptions. People generally have an exaggerated view of it: that it was a totally organized system with regularized routes and agents and set station houses. They almost take the railroad metaphor literally. It wasn't like that at all. On the other hand, there's an opposite view, which has also been prominent, that it didn't exist at all. Partly, this is the result of some historians' emphasis on the "agency" of African Americans, as they call it, which is very important. [They say] it's well known that slaves escaped on their own without help and the whole thing was a myth. That's not true either.

There are two things I'd like people to recognize about the Underground Railroad from the book. One is that it was not a highly organized system. I call it a set of interconnected local networks. Small groups of people in different communities did help fugitive slaves, and they communicated with one another. Their fortunes rose and fell. The Philadelphia Committee went out of existence for about seven or eight years, and then it came back and was very active. It involved a small number of people. In New York there were no more than a dozen people actively involved at any one time in the 30 years or so before the Civil War.

On the other hand, it's important to recognize that they accomplished a great deal. Nobody knows how many [slaves escaped bondage]. I wouldn't even begin to venture a number, but many thousands of slaves were able to get out of slavery with the assistance of what we call the Underground Railroad. The Underground Railroad is a metaphor. It shouldn't be seen as a tight organization. It's a metaphor for these loosely connected, local groups that operated in an effective way, especially in the 1850s when they were at their most active.

Robin Lindley: You also stress the cooperation between free and formerly enslaved blacks and white abolitionists. There may be an impression that white abolitionists drove this effort with the help of a few blacks such as Harriet Tubman and Frederick Douglass. You emphasize the critical role of African Americans.

Professor Eric Foner: That is true. There is a popular view of the abolitionist movement as white Americans helping out downtrodden slaves. That's fair enough as far as it goes, but it doesn't go very far. What struck me is that the Underground Railroad was a good example of interracial cooperation.
There were prominent white people involved in every place, but most of the day-to-day activity was done by blacks, many of whom are anonymous and not known to history. I tried to bring to life a few individuals like Louis Napoleon, an interesting man who was very active in New York. He worked in the office of the National Anti-Slavery Standard. He scoured the docks and looked for fugitives who had hidden in ships or came in by railroad at the depot. Even though he was illiterate and signed with an "x," he managed to go to court and get writs of habeas corpus for fugitive slaves. He was involved in legal cases. He was remarkable, and I had never heard of him, and I think most scholars had never heard of Louis Napoleon. I tried to bring a few people like that to life who are little known but very active in this story.

Robin Lindley: You mention the iconic Harriet Tubman, who's so prominent in the popular view of the Underground Railroad. What did you learn about her?

Professor Eric Foner: I was quite taken, when I was reading this [Gay] document, that Harriet Tubman pops up twice in the Record of Fugitives. She led groups of fugitive slaves that came through New York City. They stopped at Sydney Howard Gay's office, and he wrote down his interviews with them on their experiences. Tubman was unique. She escaped from Maryland around 1850, and then she went back several times in the 1850s; it's estimated that she brought out 70 to 80 slaves. Nobody else did quite that. There were people who went back once to get a wife or brother or child out, but Harriet Tubman was a very remarkable person and very courageous, obviously, going back again and again and again, placing herself in tremendous danger.

She symbolizes the Underground Railroad, but it's important to know that most slaves who got out did not have assistance from anybody like Harriet Tubman. They got out of the South mostly on their own. There were a few people in the South who could assist them once they came in contact with them. Once they reached the North, Pennsylvania or Wilmington, Delaware, they began to connect with networks. But most slaves did it on their own, or with the help of some local people, not with the help of someone like Harriet Tubman who came to lead them out of slavery.

Robin Lindley: You have fascinating stories of the creativity some of the fugitive slaves exhibited in making their escapes, such as Henry "Box" Brown.

Professor Eric Foner: Yes, absolutely. That's another thing that I tried to show. Our image of the fugitive slave tends to be the lone person running through the woods, hiding during the day, running at night. There were certainly people like that, but people used all sorts of methods of escape, and there was a great deal of ingenuity. Brown got himself shipped in a crate. People also stole their masters' horse-drawn carriages and just took off. And some people somehow got boats. I was also struck by the fact that many of these people escaped in groups, not just as a lone person. There were many small groups – three, four or five people, and some even larger – usually family members who got out together.

And there were large numbers who got out on ships. The coastal trade was enormous, and thousands of ships plied the Atlantic coast. Plenty of captains were willing to take money to hide a fugitive slave or two on their boats. A lot of people got out that way. And also [people got out] on the railroad, where the Underground Railroad could be taken literally.
Quite a few people, including Frederick Douglass in 1838, just got on a train and escaped. That wasn't so simple. The trains were watched by police, but if you had a forged pass or something, you could do it. And it was a lot simpler: you could get to the North a lot quicker by train than by wandering in the woods.

Robin Lindley: Thank you for sharing those remarkable stories and introducing them in your book.

Professor Eric Foner: The thing that really struck me about the [Gay] document is its human quality. The names of people and often, in parentheses, their new names, because many of them, for obvious reasons, changed their names when they got to the North. And then their individual stories. Most of these people are lost to history. I don't know what happened to them after they got out. Some got to Canada or upstate New York. A few of them pop up in the census in Canada in 1861. But most are anonymous in history, yet in this Sydney Howard Gay document, you have snapshots of their life stories.

Robin Lindley: And I think many readers may be surprised by New York City's strong connection to the South and the racism of the city. You mention that the mayor of New York and business leaders supported a compromise with the South in 1861 when war broke out.

Professor Eric Foner: They certainly were for a compromise, and the mayor proposed that New York secede – not to join the Confederacy, but to become a free port so they could trade North and South. Then, of course, there were the New York City draft riots in 1863 [resulting in the murder of dozens of black citizens]. Over 70 years ago, my uncle Philip S. Foner wrote an important book called Business and Slavery about the New York merchants and their ties with the South. He laid out how many of the major merchants in New York were plugged into the cotton trade and how many white Southerners came to New York to vacation and to do business. Thousands of Southerners were on the streets of New York City much of the year, and often they brought their slaves to stay with them in their houses and hotels, even though hotels wouldn't let free blacks in. So there was a real Southern tinge to New York City, which was different from upstate New York: Syracuse, Albany, those places. The abolitionist movement in New York City was very weak and very small, and the Underground Railroad operated there at great disadvantages.

Robin Lindley: It's striking, the contrast between New York City and Boston, a center of abolitionism.

Professor Eric Foner: Absolutely, though Boston also had cotton manufacturers who were tied to the South. But the abolitionist movement up there was much stronger.

Robin Lindley: It's complicated, but can you give a sense of the legal framework at the time of the Underground Railroad? Of course, slaves were property, and there was a "fugitive slave" provision in the Constitution and then Fugitive Slave Acts requiring the capture and return of slaves to their owners.

Professor Eric Foner: The legal situation is complicated and murky. The Constitution has a "fugitive slave" clause. It's very vague. It doesn't mention slaves or slavery, but provides that a person who is held to labor and escapes must be returned. It doesn't say who is responsible for returning them – the federal government? State government? It doesn't say what legal procedures need to be followed to return someone to slavery. And those issues became major points of debate.
Northern states recognized the responsibility to return fugitive slaves, but many began to pass so-called "personal liberty laws," requiring, for example, a trial by jury to determine whether a person really was a slave, or limiting the ways public officials like sheriffs could cooperate in the apprehension of fugitive slaves. They began, in effect, to nullify the fugitive slave clause, and eventually that led the South to demand and get the Fugitive Slave Act of 1850, which federalized the fugitive slave issue. No longer would state procedures be used. It set up a system of federal commissioners. Federal marshals could be involved if necessary. The Army could be sent in to remove the fugitive slave and bring him back to the South. That happened in Boston in the Anthony Burns case of 1854. The Fugitive Slave Law overrode all local procedures.

The odd thing about the law, when we think of the South as the bastion of states' rights, is that this was a complete abrogation of the rights of the states. The South was in favor of slavery, not of states' rights. When states' rights was a bulwark of slavery, which it frequently was, they went for states' rights. If a powerful federal intervention was necessary to defend slavery, they were in favor of a powerful federal government. So the constitutional arguments were instrumental. In other words, [the South favored] whatever mode of legal thinking protected the institution of slavery, and the Fugitive Slave Act of 1850 was probably the most vigorous exercise of the power of the federal government within the existing states of almost any law I can think of before the Civil War. Because of that, it created a lot of alarm and opposition in the North.

Robin Lindley: Thank you for explaining the use of law to defend slavery. Lincoln alluded to the Underground Railroad in his first inaugural address. He mentioned that fugitives should be returned, and the Fugitive Slave Act was enforced through 1861.

Professor Eric Foner: Yes, it was enforced in the North, and particularly with slaves that ran away to the Union Army. At the very beginning, the Army returned them, but that fell apart very fast. Lincoln was a lawyer and a constitutionalist, and he believed in the rule of law. In a famous 1855 letter to his friend Joshua Speed, he talks about how he hates to see fugitive slaves hunted down, "but I bite my lip and keep silent." Why does he keep silent? Because this is the law, and he believes in the rule of law. And he believes that even an unjust law must be obeyed until it is [changed]. He never calls for repeal of the Fugitive Slave Law, but he calls for modification of it to protect the rights of those accused, so that a free man is not caught up in it and inadvertently sent into slavery, which could and did happen.

The fugitive slave issue is not the big issue for Lincoln. The big issue for him is the westward expansion of slavery. In the secession crisis, Lincoln says he will not compromise on the westward expansion of slavery, but he is willing to compromise on fugitive slaves. That's where Lincoln differs from the abolitionists. For abolitionists, the Fugitive Slave Law is completely immoral and should not be obeyed. They believe, as William Seward said, that there is a higher law, a law of morality and of religion, and where there's a conflict between the higher law and the manmade law, you follow the higher law. Lincoln didn't believe that.
Robin Lindley: And you found that most of the fugitive slaves came from the northern slave states and not the Deep South.

Professor Eric Foner: It's a long way from a place like Alabama to the North. But in Maryland, where most of them came from, you are just a few miles to 50 or 100 miles from the North. So it was a lot easier to get out of Maryland. Moreover, half of the black population of Maryland was free by the late pre-Civil War period. So black people traveling on their own, on the roads and on the trains, were a common sight in Maryland. You still needed a pass proving you were free, but plenty of blacks were traveling, whereas there were few free blacks traveling in the Deep South states. You were more likely to arouse suspicion as a black person off the plantation in the Deep South than in Maryland.

Robin Lindley: I think some people don't recognize that Maryland was a slave state.

Professor Eric Foner: That's possibly true. And Kentucky. I read an article in a newspaper about something in Kentucky. It's a different point, but it displays a lack of historical knowledge: it said Kentucky was a part of the Confederacy. Well, it wasn't. It was a slave state, but it wasn't part of the Confederacy. Unfortunately, that's a sign of how journalists may not be as knowledgeable about our history as they perhaps ought to be.

Robin Lindley: Did you find much information on a "reverse Underground Railroad"? An example was a house in southern Illinois that became a sort of station where captured fugitive slaves would be sent back to slave owners.

Professor Eric Foner: In the 1820s and 1830s in New York, there was a group of kidnappers who would kidnap blacks, put them on a ship, and send them down South to be sold into slavery. People may be more aware of this now because of the movie about Solomon Northup, Twelve Years a Slave. But he was just one of many, particularly younger kids, who would be grabbed off the street and taken South to be sold. That was a reverse Underground Railroad, going the wrong direction. It was fairly prominent until the 1840s, when it began to be suppressed.

Robin Lindley: Your book vividly shows the human face of slavery and the abuse and barbarity of the institution.

Professor Eric Foner: Thank you. Sydney Howard Gay was a journalist, and he liked to get someone's story, so he interviewed them [fugitive slaves] and recorded their experiences. In this document you hear the voice of the fugitive slave indirectly, through Sydney Howard Gay, but it is contemporaneous. A lot of what we know about fugitive slaves comes from reminiscences recorded much later. For historians, reminiscences are valuable and suspect at the same time. But here you have it at the contemporaneous moment, and you have all of these vivid human stories, which include physical abuse, but also acts of amazing ingenuity and courage in getting out of the South.

Robin Lindley: And a Southern doctor claimed the desire of slaves to escape bondage was caused by a disease he called "drapetomania."

Professor Eric Foner: Yes, that was Dr. [Samuel A.] Cartwright with that diagnosis. It was a supposed disease making black people want to escape. After all, since slavery was so pleasant and fine, why should they want to escape? There must be something wrong with them.

Robin Lindley: You conclude that the fugitive slave issue was a major cause of the Civil War.

Professor Eric Foner: As I said, this book expanded outward.
I started with a local document, but following the paths led me to the South and to other Northern cities and eventually to the federal government and the impact of the fugitive slave issue on pre-Civil War politics. This issue did not cause the Civil War by itself, obviously, but it was one of the catalysts. Southern complaints about the difficulty of getting fugitive slaves back from the North became louder and more strident as the Underground Railroad became more effective in the 1850s. As I point out, in South Carolina's "Declaration of the Causes of Secession" in 1860, the longest paragraph is about fugitive slaves; it's not about the westward expansion of slavery or states' rights or the tariff. It's about fugitive slaves, although few fugitive slaves escaped to the North from South Carolina. It was just too far to get to the North from South Carolina.
Eating disorders are serious and potentially life-threatening conditions, associated with severe food restriction, overexercise, malnutrition, and distorted thinking about body shape and weight, or with binge eating and purging behaviors. Eating problems are common in children and adolescents, and eating disorders typically have their onset during these developmental periods.1

Anorexia nervosa is a serious and potentially life-threatening disorder associated with severe food restriction, overexercise, malnutrition, and distorted thinking about body shape and weight. The typical age of onset is early adolescence (ages 12 to 15 years). Bulimia nervosa is characterized by periods of restriction followed by binge eating and purging behaviors (eg, vomiting, laxative use, overexercise) and often begins during middle adolescence (ages 15 to 17 years). A variety of social, developmental, genetic, and familial factors have been implicated in the etiology of these disorders, but their cause is unknown.

In younger children, a range of eating disorders has been identified that includes atypical syndromes, such as selective eating and food avoidance emotional disorder (FAED).2 These atypical disorders are diagnosed as eating disorders not otherwise specified (EDNOS) in the current DSM and typically occur in school-aged children. Children with selective eating are highly sensitive to taste, texture, and amounts of food and have an extraordinarily narrow range of acceptable foods they will eat.3 These syndromes often lead to social, behavioral, and nutritional problems. Food neophobia may be primary (eg, the child never learned to eat a range of food), or it may occur in response to an adverse event (eg, choking, vomiting, diarrhea). Selective eating is associated with general anxiety and with autism spectrum disorders. Children with FAED are underweight, do not report shape and weight concerns, and often have somatic complaints (eg, stomachaches). In contrast to patients with anorexia nervosa, those with FAED usually recognize that they are too thin and want to gain weight.4

Identification and diagnosis of eating disorders in children and adolescents are a major problem (Table 1). As with many psychiatric disorders, the diagnostic classifications of eating disorders in DSM are not developmentally sensitive; current criteria are appropriate for adults with long-standing disorders. For example, in anorexia nervosa, the requirement of weight loss to a suggested level of 85% of expected weight for height is challenging to apply to a growing child, even when using best estimates for age.5 In addition, the criterion of a reported fear of weight gain requires that young adolescents make verbal statements attesting to this fear; however, many do not connect their behaviors with the emotion of fear, even as their scrupulously avoidant and restrictive eating habits, and their reactions to attempts to make them eat, speak more articulately than their words.2 In children with bulimia nervosa, the availability of and opportunity for binge eating and purging are constrained by family, school, and other environmental factors that may artificially limit these activities.
For these reasons, the behavioral thresholds set for these disorders are difficult to apply to this age-group.6 Consequently, the majority (approximately 60%) of children and adolescents are given a diagnosis of EDNOS.7 This heterogeneous categorization leads to confusion about the diagnosis and problems in specifying treatment, and it sometimes prohibits insurance coverage.

Research on the treatment of eating disorders has generally lagged behind that for disorders of similar severity and incidence. In the adult literature, there are 9 published randomized controlled studies of psychosocial treatments for anorexia nervosa, with fewer than 900 participants.8 Attrition rates averaged 50% in these studies, and no psychosocial treatment was found to be effective. Psychopharmacological studies of anorexia nervosa are also few and consist of small pilot comparisons, with no evidence that medications are useful for the disorder.9

There are 6 randomized clinical trials for anorexia nervosa in children and adolescents, with fewer than 400 participants studied.8 However, family therapy was examined in 5 of these trials, and the data suggest that family therapy is useful for this age-group. On average, between 60% and 80% of adolescents treated with family therapy no longer meet diagnostic criteria for anorexia nervosa, while between 40% and 60% reach normal weight for their age and have no evidence of eating-related psychopathology.10 No randomized clinical trials of medications for adolescents with anorexia nervosa have been published. A number of small studies of antidepressants and newer atypical antipsychotics suggest that these medications may be useful for secondary anxiety and distress, as well as for promoting short-term weight gain. Long-term benefits are unknown, however.11

For adults with bulimia nervosa, cognitive-behavioral therapy (CBT) has been shown to be more effective than placebo, medications, and several other forms of psychotherapy.12 Rates of recovery (ie, no binge eating or purging for 28 days) are about 35% with CBT, although declines in the rates of binge eating and purging are much greater (approximately 50% to 80%), even among those who do not recover. However, there are only 2 published randomized clinical trials in adolescents with bulimia nervosa.13,14 The first compared a self-help version of CBT with family therapy in 80 adolescents with bulimia nervosa or partial bulimia nervosa.14 There were no differences in outcome between the 2 groups: 40% of both groups recovered at follow-up, but guided self-help was more cost-effective than family therapy. The second study compared family therapy with individual therapy in 80 adolescents with bulimia nervosa or partial bulimia nervosa.13 Family therapy was superior to individual therapy at the end of treatment and at follow-up. Rates of recovery (as defined above) at follow-up were 30% for family therapy and 10% for individual therapy.

CBT has been adapted for adolescents with bulimia nervosa through the addition of a focus on adolescent developmental issues, parental involvement, age-appropriate examples, and attention to the therapeutic alliance.15 Although no randomized clinical trials using these adaptations have been conducted, case series data suggest that CBT adjusted for adolescents with bulimia nervosa was effective, leading to recovery rates of approximately 50% in a clinical sample.
A number of medication studies in adults with bulimia nervosa suggest that some antidepressant medications are useful for managing the disorder.16 However, compared with CBT, medications are less effective, although more cost-effective.17 The single study of antidepressant treatment in adolescents with bulimia nervosa was a pilot study of 10 participants, which found that the medication was well tolerated and led to clinical improvements in binge eating and purging.18

Table 2 presents a summary of treatment options for children and adolescents with anorexia nervosa or bulimia nervosa. There are no systematic studies of treatment for children and adolescents who received a diagnosis of selective eating, FAED, or another EDNOS. A report on the use of behavioral desensitization and shaping in an inpatient milieu for the management of a somatization disorder similar to FAED suggests that this approach may be useful in severe cases.19

There are few systematic studies that compare the relative merits of different treatment settings for children and adolescents with eating disorders.20 Two randomized clinical trials in adolescents compared inpatient treatment with outpatient treatment.21,22 The first compared an inpatient stay of several months in a specialty service with outpatient family and individual therapy.21 At the end of treatment and at follow-up, there were no differences in outcome. A more recent study compared specialized inpatient treatment for adolescents with anorexia nervosa with both specialized outpatient CBT and usual outpatient care.22 Again, no differences were found at 1-year follow-up; however, specialized CBT was the most cost-effective.23 There are no systematic studies that compare day programs or other intensive outpatient treatments with outpatient care, although there is evidence that such programs are becoming more common.24 However, one study compared a high dose with a low dose of outpatient family treatment for adolescents with anorexia nervosa and found that, at the end of treatment and at 4-year follow-up, adolescents who received the lower dose did as well as those who received twice as much treatment for twice as long.25,26 Taken together, these findings suggest that psychiatric improvement in adolescents with anorexia nervosa does not require intensive intervention.

Although the psychiatric treatment of adolescents with anorexia nervosa may generally be conducted on an outpatient basis, management of the medical consequences of severe malnutrition resulting from anorexia nervosa often requires medical hospitalization.27 A number of academic and medical associations have published guidelines for the medical management of these disorders in children and adolescents; they identify acute medical complications associated with significant weight loss, including bradycardia, hypotension, and refeeding syndrome.28

Current studies of treatment

A number of treatment studies of child and adolescent eating disorders are under way. It is hoped that these studies will provide important guidance on how to improve outcomes in this population. At the University of Chicago and Stanford University, a study of 120 adolescents with anorexia nervosa that compares a developmentally focused individual treatment (adolescent-focused therapy [AFT]) with family therapy is nearing completion. This study is a large-scale comparison of 2 major and divergent approaches to managing anorexia nervosa.
AFT concentrates on individuation, autonomy, and self-mastery to overcome the preoccupations and inappropriate avoidance that characterize the symptoms of anorexia nervosa.29 Family therapy, in contrast, employs parents to directly manage the adolescent's weight restoration and only secondarily examines adolescent developmental issues in the family context.30 The results of this study should be available soon.

Another large study of adolescent anorexia nervosa is being conducted at 7 sites in the US and Canada. The study compares family therapy (as described above) with a systemic family approach aimed at family dynamics rather than at empowering parents to effect weight change in their anorexic child.31 Results will shed light on the specific role of family involvement in the treatment of adolescents with anorexia nervosa.

A study that compares family therapy, individual supportive therapy, and CBT for adolescents with bulimia nervosa is just getting under way at the University of Chicago and Stanford University. When completed, it will be the largest study of adolescent bulimia nervosa undertaken. It will provide information on whether individual supportive therapy, CBT, or family therapy is the most effective approach for the disorder and will help identify patients who might benefit differentially from one treatment.

While clinicians await the results of these trials, the current evidence suggests that for adolescents with eating disorders, the best available treatment is family therapy aimed at helping parents manage their child's eating disorder symptoms.32 While the evidence for the superiority of this form of family therapy over other treatments is still limited, the data suggest that it is effective in many cases. Family therapy also appears to be useful clinically in nonresearch populations, and manualized versions of the approach are available.30,33,34 At the same time, there is undoubtedly an important role for other therapies, including individual therapy for adolescents with anorexia nervosa and bulimia nervosa, especially in situations where family therapy is not an option.29

Medications for eating disorders in children and adolescents should be reserved for those with comorbid conditions (eg, anxiety, depression) or for those who are not responsive to psychosocial treatments. The use of medication for adolescents with anorexia nervosa – even for comorbid conditions – might best be deferred until weight is normalized, to help ensure that anxiety, obsessive-compulsive behaviors and thoughts, and depressed affect are not primarily nutritionally or behaviorally based.

Among the many challenges clinicians face is developing specific expertise in treating child and adolescent eating disorders. Most eating disorder specialists focus on treating adults, and few have sufficient training in, or appreciation of, the developmental differences of younger patients who have an eating disorder. Furthermore, many nonspecialist clinicians have little training in the treatment of eating disorders, particularly in family therapy. Although there are regional centers of excellence in the treatment of child and adolescent eating disorders, these are few in number and located mostly in urban centers. Reliance on hospital and residential treatment is, in part, a result of this shortage of trained professionals. Efforts to address these disparities by integrating eating disorder treatment training into clinical training programs and by using distance learning and distance therapy are needed.

References
1. Hoek HW, Hoeken D. Review of prevalence and incidence of eating disorders. Int J Eat Disord. 2003;34:383-396.
2. Bravender T, Bryant-Waugh R, Herzog D, et al; Workgroup for the Classification of Eating Disorders in Children and Adolescents. Int J Eat Disord. 2007;40:S117-S122.
3. Nicholls D, Randall CD, Lask B. Selective eating: symptom disorder or normal variant? Clin Child Psychol Psychiatry. 2001;6:257-270.
4. Bryant-Waugh R, Lask B. Overview of eating disorders. In: Lask B, Bryant-Waugh R, eds. Eating Disorders in Childhood and Adolescence. 3rd ed. Hove, UK: Routledge; 2007:35-50.
5. Centers for Disease Control and Prevention. CDC Growth Charts for the United States: Development and Methods. Atlanta: Centers for Disease Control and Prevention, US Dept of Health and Human Services; 2002. http://www.cdc.gov/growthcharts/percentile_data_files.htm.
6. Marcus MD, Kalarchian MA. Binge eating in children and adolescents. Int J Eat Disord. 2003;(34 suppl):S47-S57.
7. Turner H, Bryant-Waugh R. Eating disorder not otherwise specified (EDNOS): profiles of clients presenting at a community eating disorder service. Eur Eat Disord Rev. 2004;12:18-26.
8. Bulik CM, Berkman ND, Brownley KA, et al. Anorexia nervosa treatment: a systematic review of randomized controlled trials. Int J Eat Disord. 2007;40:310-320.
9. Walsh BT, Kaplan AS, Attia E, et al. Fluoxetine after weight restoration in anorexia nervosa: a randomized controlled trial. JAMA. 2006;295:2605-2612.
10. Couturier J, Lock J. What is remission in adolescent anorexia nervosa? A review of various conceptualizations and a quantitative analysis. Int J Eat Disord. 2006;39:175-183.
11. Couturier J, Lock J. A review of medication use for children and adolescents with eating disorders. J Can Acad Child Adolesc Psychiatry. 2007;16:173-176.
12. Agras WS, Walsh T, Fairburn CG, et al. A multicenter comparison of cognitive-behavioral therapy and interpersonal psychotherapy for bulimia nervosa. Arch Gen Psychiatry. 2000;57:459-466.
13. le Grange D, Crosby R, Rathouz PJ, Leventhal BL. A randomized controlled comparison of family-based treatment and supportive psychotherapy for adolescent bulimia nervosa. Arch Gen Psychiatry. 2007;64:1049-1056.
14. Schmidt U, Lee S, Beecham J, et al. A randomized controlled trial of family therapy and cognitive-behavior therapy-guided self-care for adolescents with bulimia nervosa and related conditions. Am J Psychiatry. 2007;164:591-598.
15. Lock J. Adjusting cognitive-behavioral therapy for adolescent bulimia nervosa: results of a case series. Am J Psychother. 2005;59:267-281.
16. Walsh BT, Wilson GT, Loeb KL, et al. Medication and psychotherapy in the treatment of bulimia nervosa. Am J Psychiatry. 1997;154:523-531.
17. Koran LM, Agras WS, Rossiter EM, et al. Comparing the cost-effectiveness of psychiatric treatments: bulimia nervosa. Psychiatry Res. 1995;58:13-21.
18. Kotler LA, Devlin MJ, Davies M, Walsh BT. An open trial of fluoxetine for adolescents with bulimia nervosa. J Child Adolesc Psychopharmacol. 2003;13:329-335.
19. Lock J, Giammona A. Severe somatoform disorder in adolescence: a case series using a rehabilitation model for intervention. Clin Child Psychol Psychiatry. 1999;4:341-351.
20. Meads C, Gold L, Burls A. How effective is outpatient care compared to inpatient care for the treatment of anorexia nervosa? A systematic review. Eur Eat Disord Rev. 2001;9:229-241.
21. Crisp AH, Norton K, Gowers S, et al.
A controlled study of the effect of therapies aimed at adolescent and family psychopathology in anorexia nervosa. Br J Psychiatry. 1991;159:325-333.
22. Gowers S, Clark A, Roberts C, et al. Clinical effectiveness of treatments for anorexia nervosa in adolescents. Br J Psychiatry. 2007;191:427-435.
23. Byford S, Barrett B, Roberts C, et al. Economic evaluation of a randomised controlled trial for anorexia nervosa in adolescents. Br J Psychiatry. 2007;191:436-440.
24. Frisch MJ, Herzog DB, Franko DL. Residential treatment for eating disorders. Int J Eat Disord. 2006;39:434-442.
25. Lock J, Agras WS, Bryson S, Kraemer HC. A comparison of short- and long-term family therapy for adolescent anorexia nervosa. J Am Acad Child Adolesc Psychiatry. 2005;44:632-639.
26. Lock J, Couturier J, Agras WS. Comparison of long-term outcomes in adolescents with anorexia nervosa treated with family therapy. J Am Acad Child Adolesc Psychiatry. 2006;45:666-672.
27. Golden NH, Katzman DK, Kreipe RE, et al. Eating disorders in adolescents: position paper of the Society for Adolescent Medicine. J Adolesc Health. 2003;33:496-503.
28. American Academy of Pediatrics; Committee on Adolescence. Identifying and treating eating disorders. Pediatrics. 2003;111:204-211.
29. Fitzpatrick K, Moye A, Hostee R, et al. Adolescent-focused therapy for adolescent anorexia nervosa. J Contemp Psychother. In press.
30. Lock J, le Grange D, Agras WS, Dare C. Treatment Manual for Anorexia Nervosa: A Family-Based Approach. New York: Guilford Publications, Inc; 2001.
31. Pote H, Stratton P, Cottrell D, et al. Systemic family therapy can be manualized: research process and findings. Journal of Family Therapy. 2003;25:236-262.
32. National Institute for Health and Clinical Excellence. Eating disorders: core interventions in the treatment and management of anorexia nervosa, bulimia nervosa, and binge eating disorder. London: NICE; 2004. http://www.nice.org.uk/CG009. Accessed September 17, 2009.
33. Loeb KL, Walsh BT, Lock J, et al. Open trial of family-based treatment for full and partial anorexia nervosa in adolescence: evidence of successful dissemination. J Am Acad Child Adolesc Psychiatry. 2007;46:792-800.
34. le Grange D, Lock J. Treating Bulimia in Adolescents. New York: Guilford Press; 2007.
Essentials of Geography

Geosystems, 4th Canadian ed (4CE)*, pp. 20-37 (top) (Geosystems, 3rd Canadian ed (3CE)*, pp. 18-35 (top))

* Note that you may be using one of two editions of Geosystems; for convenience I will use abbreviations:
- if you have Geosystems, 4th Canadian edition 2016, I will use the abbreviation 4CE (most of you will have this one)
- if you have Geosystems, 3rd Canadian edition 2013, I will use the abbreviation 3CE

"The earth is the Lord's, and everything in it. The world and all its people belong to him. He built the earth's foundation on the seas and built it on the ocean depths."

"O Lord, let me feel this world as Your love taking form, then my love will help it." – prayer of Indian Christian leader, Rabindranath Tagore (d. 1941)

There is a video version of this lecture here: https://youtu.be/WA4ddA4FSHI. The exam is based on the content in these notes, so please print them off to study from.

I. The Earth's Shape

The earth, of course, is a perfectly round sphere! Right? Wrong! Actually, the Earth is an "oblate ellipsoid": a sphere that is compressed at the poles and bulges at the equator, due to the centrifugal effect of the earth spinning on its axis. This means that if you were to go around the Earth, it is actually slightly shorter to go up and over the poles than around the equator!

- See 4CE Figure 1.12, p. 20 (3CE, Figure 1.11 p. 17) – Earth's Dimensions

How do we know the earth is roughly spherical (before satellites, etc.)? Folks as far back as Pythagoras (c. 540 BC) and Aristotle (384-322 BC) believed it was! Why?

- the "sinking ship syndrome" – as a ship sails away, it appears to sink below the waves. This is because the ship is actually sailing over the surface of the earth, a sphere: it disappears over the curve of the Earth. If the Earth were flat, the boat would simply get smaller and smaller, but never appear to "sink."
- lunar eclipses – when the earth passes directly between the sun and the moon, its shadow crosses the moon as an arc. Only a spherical Earth would cast a circular shadow on the moon.
- observation of the North Star (directly over the North Pole). As you move south, the North Star gets lower and lower in the sky (closer to the horizon). In fact, for every 111 km you travel, it appears to drop 1°. If you work out the geometry (and the Greeks, like Pythagoras, were good at geometry), this could only happen if the earth were spherical.

Isn't it curious that the ancient Greeks had this all worked out! But some medieval Christians believed the Earth was flat! Many Christians in the Middle Ages believed the Bible taught a three-tiered universe:

- Heaven … above the Earth
- the Earth
- Hell … below the Earth

Because the Bible says, "The sun rises …," they were sure the Earth was the centre of the universe: the sun went around the Earth. Since this is literally what the Bible taught (in their interpretation), the Earth must be flat! Their observation led to this as a logical conclusion. Some Christians were horrified by suggestions that the Earth might be spherical and NOT the centre of the solar system … those were unbiblical assertions (in their opinion). It threatened their view of biblical authority. It threatened the foundations of their Christian faith.
(There still are churches that teach the earth is the centre of the universe and the solar system – while they may concede the earth is spherical, they argue that the sun revolves around the earth and that modern astronomy is all wrong – because the Bible, according to their interpretation, says so. Gerardus Bouw is the main proponent of this view.)

For some in the medieval Church, one's opinion about the shape of the Earth became a test of authentic Christian faith. If you couldn't believe the Bible that the Earth was flat and the centre of the universe, how could you believe it about anything!?!? Faith in Christ was not the issue … the shape of the Earth and solar system was! Many people believe folks like Copernicus and Galileo were accused of being heretical for challenging this conviction; in fact it was a bit more complicated – and political – than just that. You can read more about Galileo's conflict with the church here.

The problem was … what?

- The problem was one of poor biblical exegesis and hermeneutics (interpreting what the Bible actually says and what it means today). The Bible doesn't actually say the Earth is flat in a definitive way (does it?). Biblical writers described things as they appeared to be – the sun does appear to rise. The winds appear to come from the "four corners" of the earth, but that is a figure of speech, isn't it? They were describing things as they saw them; they were not trying to make scientific observations.

This is a sobering reminder that, when it comes to issues of natural science and the Bible, we need to be careful about our exegesis and hermeneutics. Otherwise we may find ourselves using our interpretation of issues in science and the Bible (the age of the Earth?) as a test of orthodoxy (does the Bible actually say how old the Earth is? Not really …). We may find ourselves ignoring the real issues (faith in Christ). And we may find ourselves in untenable scientific territory when clear evidence contradicts what we interpret the Bible to be saying. If we try to deduce scientific knowledge from the Bible, it may well be our error, as interpreters, to think biblical writers are trying to give us modern scientific explanations when they are, in fact, simply telling things as they see them, using figures of speech, and are limited by the "scientific" understandings of their age.

II. How Big is the Earth?

In about 247 BC, Eratosthenes (the librarian at Alexandria, Egypt) observed that at noon the sun was directly overhead (no shadows at all) at the city of Syene, Egypt. Meanwhile, in Alexandria, the sun came down at an angle, casting a shadow. By measuring the angle of the sun and knowing the distance between the two cities, he calculated the circumference of the earth to be about 46,250 km. Modern measurements put the circumference at 40,008 km. Not too bad!

A. Does the Earth Move?

Yes! The Earth spins like a top on its axis; this is called rotation. It takes about 24 hours for one complete rotation (one day!). At the equator, a point on the earth's surface is moving at about 1700 km/h. At our latitude (around 46-50° N), we're moving at roughly 1100 km/h (the figure of "650" sometimes quoted is in miles per hour). At the poles you do not move at all!
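Both calculations above are easy to verify yourself. The short Python sketch below redoes Eratosthenes' estimate and the rotational-speed figures; the 7.2° shadow angle and 925 km Alexandria-Syene distance are the commonly cited reconstructions of his inputs, chosen here because they reproduce the 46,250 km result quoted above.

```python
import math

# 1) Eratosthenes' method: at noon the sun was overhead at Syene but cast a
#    shadow at Alexandria. The shadow angle is the fraction of a full circle
#    separating the two cities, so:
#    circumference = (360 / shadow angle) x distance between the cities.
shadow_angle_deg = 7.2          # commonly cited reconstruction of his measurement
alexandria_to_syene_km = 925    # ditto for the distance
circumference_km = (360.0 / shadow_angle_deg) * alexandria_to_syene_km
print(f"Eratosthenes' estimate: {circumference_km:,.0f} km")  # 46,250 km

# 2) Rotational speed at a latitude: a parallel of latitude is a circle
#    shrunk by cos(latitude), and the Earth completes one turn in ~24 hours.
equatorial_circumference_km = 40_008  # modern figure quoted above

def rotation_speed_kmh(latitude_deg: float) -> float:
    parallel_length_km = equatorial_circumference_km * math.cos(math.radians(latitude_deg))
    return parallel_length_km / 24.0

print(f"Equator: {rotation_speed_kmh(0):7.0f} km/h")   # ~1,700 km/h
print(f"50 N   : {rotation_speed_kmh(50):7.0f} km/h")  # ~1,100 km/h
print(f"Pole   : {rotation_speed_kmh(90):7.0f} km/h")  # 0 km/h
```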
The earth’s rotation is responsible for:
- our routine of day and night
- tides (this course)
- the curved flow paths of air and water (the Coriolis Effect, which we discuss in the other course)

The Earth also revolves around the sun. It takes approximately 365¼ days for one full revolution around the sun. This, combined with the fact that the earth is tilted on its axis, is responsible for our seasons (also in the other course).

B. Locating Points On the Earth

In order to locate places on the earth, geographers from the days of the ancient Greeks onwards created grid systems. Grid systems on spheres are related to these key terms and concepts:

1. Longitude and Latitude

a. Latitude is the angular distance, measured in degrees, between any location and the Equator (0°). It ranges from 0° to 90° either North or South of the Equator. 90° N is the north pole; 90° S is the south pole. (4CE Figure 1.13, p. 20 (3CE Figure 1.13, p. 18) – Parallels of Latitude)

Degrees are further subdivided into 60 minutes (′); minutes are subdivided into 60 seconds (″). For example:
- Toronto, ON, is located at 43° 32′ N
- Moncton, NB, is located at 46° 7′ N
- Vancouver, BC, is located at 49° 28′ N
- Regina, SK, is located at 50° 26′ N
A specific neighbourhood within any of these cities could be located using minutes and seconds as well.

Latitude can be determined with reference to the sun or the stars. You can measure the angle of the sun above the horizon at noon and calculate how far south or north you must be (the sun is higher in the sky at noon the further south you go). Or you can measure the angle of the North Star (Polaris), which always stays within 1 degree of the celestial north pole. If a navigator measures the angle to Polaris and finds it to be 10 degrees above the horizon, then he is on a circle at about 10 degrees of geographic latitude. By using a sextant (which measures the angle the sun or North Star is above the horizon), you can figure out how far you are from the Equator.

b. Longitude is the angular distance, measured in degrees (°), between a point and the Prime Meridian (0°). It ranges from 0° to 180° either East or West (whichever is shortest). (4CE Figure 1.16, p. 22 (3CE Figure 1.15, p. 20) – Meridians of Longitude)

Again, degrees are subdivided into minutes and seconds. For example:
- Moncton, NB, is located at 64° 41′ W
- Toronto, ON, is located at 79° 23′ W
- Regina, SK, is located at 104° 40′ W
- Vancouver, BC, is located at 123° 12′ W

Longitude is determined by time. If you know what time it is in Greenwich, England, and what time it is in your present location, you can determine your longitude. In the old days, ships used to carry two clocks – one standardized to Greenwich time, one set by the time at the current location (you can determine your current local time by setting your clock to 12:00 noon when the sun is highest in the sky). Thus sailors could determine their longitude by comparing their local time with Greenwich Mean Time. (Longitude is linked to time zones – we all know that when it is noon where we are, it is a different time at a place west or east of us.)
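Here is a minimal sketch of both pre-GPS techniques in code. The numbers are illustrative only (they roughly correspond to Moncton’s position; they are not a worked example from the textbook):

```python
def latitude_from_polaris(polaris_altitude_deg):
    # Polaris sits almost directly over the North Pole, so its angle
    # above the horizon is (very nearly) your latitude.
    return polaris_altitude_deg

def longitude_from_time(greenwich_hours, local_solar_hours):
    # The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour.
    # Positive result = degrees East of Greenwich; negative = degrees West.
    return (local_solar_hours - greenwich_hours) * 15.0

print(latitude_from_polaris(46.1))              # ~46 degrees N
# Local solar noon arrives while the Greenwich clock reads 16:19:
print(longitude_from_time(16 + 19 / 60, 12.0))  # -64.75, i.e. about 64 45' W
```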
A concept related to latitude and longitude is that of great and small circles. See 4CE Figure 1.17, p. 23 (3CE Figure 1.16, p. 22) – Great Circles and Small Circles.

a. “Great Circles” always cut the earth exactly in half; they must pass directly through the centre of the earth:
- for navigation (either sea or air), the shortest distance between two points always follows a great circle path
- the only line of latitude that is a great circle is the equator
- all lines of longitude are one half of a great circle

b. “Small Circles” cut the earth into different sized pieces; they never pass through the exact centre:
- all lines of latitude, except the equator, are small circles
- no lines of longitude are small circles

Also related to latitude, longitude, great circles, and small circles are meridians and parallels.

a. “Meridians” are halves of great circles, whose ends all meet at either the North or South Poles:
- meridians are the same as lines of longitude
- all run in a true N-S direction
- they are furthest apart at the Equator and converge at the Poles
- an infinite number may be drawn; a meridian exists for any point on the globe

b. “Parallels” are entire small circles of latitude parallel to the equator:
- parallels are the same as lines of latitude
- they are all parallel (they don’t diverge or converge)
- they are all true E-W lines
- they all intersect meridians at right angles
- they are all small circles, except for the equator (the equator is a great circle)
- an infinite number may be drawn; a parallel exists for any point on the globe

So … latitude, parallels, and small circles are very closely related! (Remember, however, that the equator is a line of latitude, and a parallel, and a great circle! All other lines of latitude are parallels, but also small circles.) And … longitude, great circles, and meridians are very closely related! (Remember, however, that meridians and lines of longitude are halves of great circles.)

GPS, or Global Positioning System, is a technology that helps you locate a spot precisely, in terms of longitude, latitude, and elevation above sea level. A series of satellites provides a triangulation network to help you locate any site exactly. Most of us are very familiar with this in navigation systems we use in cars or carry with us. Believe it or not, this technology has only been available since 1994 (initially only for military use); civilian use was only permitted in the late 1990s.
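Latitude, longitude, and great circles come together in one standard calculation: the shortest path between two GPS coordinates is a great-circle arc, whose length is given by the well-known haversine formula. A quick sketch (the coordinates below are approximate, chosen to match the city examples above):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # Haversine formula: the length of the great-circle arc joining two
    # latitude/longitude points on a sphere of the given mean radius.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Vancouver (about 49.28 N, 123.12 W) to Toronto (about 43.65 N, 79.38 W):
print(great_circle_km(49.28, -123.12, 43.65, -79.38))  # ~3360 km
```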
III. Maps and Map Projections

The ideal map or “model” of the Earth is a globe. However, small globes can include relatively little detail, and detailed globes would have to be too large to be conveniently stored! This creates a problem! Unlike a cone or a cylinder, you cannot simply “unroll” a sphere, like the Earth, onto a flat piece of paper! Try it! You cannot lay out a spherical surface in any easy way. Try cutting up a tennis ball … or the skin of a baseball … or an orange peel. Try laying it out, flat, in a neat way. It doesn’t work. At least not in a way that would work well for a map! To use mathematical terms, a sphere is not a “developable” surface. (4CE Figure 1.22, p. 28 (3CE Figure 1.24, p. 28) – From Globe to Flat Map)

Consequently, mapmakers (“cartographers”) have developed several “map projections”: mathematical attempts to put the spherical surface of the Earth onto flat pieces of paper. Note that, because you cannot perfectly lay out a sphere on flat paper, no map projection is perfect. They are all distortions of reality! But they are attempts to provide useful models of the Earth’s surface. These are the most common map projections (see 4CE Figure 1.23, p. 29 (3CE Figure 1.25, p. 29) – Classes of Map Projections):

- Cylindrical Projections try to picture the earth as if it were a tube or cylinder, rolled out (because you can actually unroll a cylinder … try it with the paper wrapper around your next can of beans). Of course the Earth is actually a sphere, so there will be distortions! The most familiar cylindrical projection is the Mercator Projection (4CE Figure 1.23a, p. 29 (3CE Figure 1.25a, p. 29)). On this projection:
– meridians do not converge as they really do (at 60° N & S, distances are exaggerated 2X; at 80° N & S, distances are exaggerated 6X; thus the Canadian Arctic and Greenland appear to be 6X as large as they really are! They really aren’t that big!). What the map makers have done is ADDED hundreds of square kilometres of land that don’t really exist, in the Arctic and Antarctic, to fill in the gaps.
– Mercator maps cannot show the actual North or South Poles (infinite exaggeration … notice how huge Antarctica seems to be on most maps … it’s not really that big!)
– the relative size, area, and shape of different locations are distorted (northern countries like Canada are exaggerated in size … equatorial countries appear smaller than they are). This is interesting: we are used to Canada and Russia appearing SO huge! They are big, but NOT actually as big as they appear!
– these maps are excellent for navigational directions; a straight line drawn anywhere on the map is a line of constant compass bearing (called a rhumb line) (4CE Figure 1.24, p. 29 (3CE Figure 1.26, p. 30)). So, as a pilot, if you draw a straight line from Vancouver to London, it will give you the “lazy” route … you can fly along that line and never turn! Note, however, that this is NOT the shortest distance between Vancouver and London!
– great circles (the shortest distances between places), however, appear as arcs. These are terrible maps for determining the shortest distances between places.

- Planar or Azimuthal Projections try to simply lay out the earth on a flat plane. The most interesting is the Gnomonic Projection, because great circles (the shortest distances between points) are straight lines. Thus planar projections are great for plotting air or sailing routes. However, rhumb lines (lines of constant compass bearing) appear as arcs (4CE Figure 1.23b, p. 29 (3CE Figure 1.25b, p. 29)). So, when Air Canada plans its shortest route from Vancouver to London, they will use a gnomonic projection. It will give the shortest distance. It is NOT the easiest route to fly, however, because the pilot will have to constantly adjust his compass direction. Most maps in the airlines’ magazines are variations of Mercator projections … so in the in-flight magazine, flight routes usually appear as arcs.

- Conic Projections try to picture the earth as a cone. By combining a series of conic sections to create a “polyconic” projection, distortion is minimized. This is the best projection for showing limited portions of the Earth as accurately (with the least amount of distortion) as possible. However, global conic projections – when you try to use them for a large area, like the whole Earth – look very odd! (4CE Figure 1.23c, p. 29 (3CE Figure 1.25c, p. 29)) These are great projections for highway maps of Alberta, even of Canada or the U.S.
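The Mercator exaggeration noted above is easy to quantify: the projection stretches the local scale by 1/cos(latitude), which is exactly why distances double at 60° N & S and the poles blow up to infinity. A minimal sketch of the standard spherical Mercator formulas:

```python
import math

def mercator_xy(lat_deg, lon_deg, radius=1.0):
    # Mercator projection of a latitude/longitude point onto a flat map.
    # y grows without bound as latitude approaches 90 degrees, which is
    # why Mercator maps cannot show the actual poles.
    x = radius * math.radians(lon_deg)
    y = radius * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

def mercator_stretch(lat_deg):
    # Local scale exaggeration on a Mercator map: 1 / cos(latitude).
    return 1.0 / math.cos(math.radians(lat_deg))

print(mercator_stretch(60))  # 2.0  -- distances exaggerated 2X at 60 N & S
print(mercator_stretch(80))  # ~5.8 -- roughly the 6X exaggeration noted above
```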
Maps are fascinating things for all sorts of political reasons, too! The projections we use influence how we see the world. In the Mercator Projection, for instance, northern areas are enlarged relative to equatorial areas. Canada and Russia appear huge! In fact, Canada is not that much bigger than Brazil!

There are other ways to view the world! For instance, most of our maps have “north” at the top. This is not necessary! Australian cartographers have argued we should put “south” at the top. Check out the results: Upside Down Map Page

- Note that Europe and North America are no longer the first places you notice … Africa and South America are. Interesting … Some scholars point out that by putting North America and Europe in the top centre of virtually every world map, we have subconsciously suggested that these are the most important regions of the world. It would be interesting to redraw maps with other continents front and centre. How might that change our perspective on the world?
- What happens when North America is “marginalized” (lower left), and Australia is front and centre?
- What happens to your perspective on the world when Africa is front and centre?

As Christians with a heart for poorer nations, our perspective on the world is worth some reflection … There are other ways of “seeing the world” than centred on Europe and North America. When we see Canada marginalized, how do we feel? Would you dare use an “Upside-Down Map” on your church’s “Mission Bulletin Board”? Why? Why not?

Aside: Map Making … or cartography (you don’t need to study this)

From Natural Resources Canada: “Early European explorers of this country’s vast geography made maps using a variety of instruments, from compasses to survey chains. Cartography, the science of mapping, is now high-tech.”

An interesting read: “The modern map is no longer an unwieldy printed publication we wrestle with on some blustery peak, but digital, data-rich, and dynamic. It is transforming the way we interact with the world around us. Thanks to ‘big data’, satellite navigation, GPS-enabled smartphones, social networking and 3D visualisation technology, maps are becoming almost unlimited in their functionality, and capable of incorporating real-time updates. ‘Advanced LED screen technology and smartphones equipped with projectors are going to transform the way we interact with maps,’ says Ian White, founder and chief executive of Urbanmapping.com, a San Francisco-based geoservices provider. For example, tourists will be able to plan their visits by using their phones to project a 3D map onto a wall that they’ll then be able to manipulate remotely with their hands, adding layers of information such as landmarks, restaurants, recommendations from friends, as well as transport links and times …” Continue reading here: BBC News – The maps transforming how we interact with the world

IV. Remote Sensing and Geographic Information Systems (GIS)

Remote sensing is a means of gathering information about an object (like the Earth) without direct physical contact. We “remote sense” all the time – with our eyes, our nose, cameras, binoculars, telescopes, etc. Active remote sensing involves sending something out – a probe to Mars, sonar beams, radar beams, drones with cameras, etc. – and monitoring the results.
Passive remote sensing involves observing an object without sending anything – taking a picture, recording seismic waves after an earthquake, monitoring solar radiation.

Geographic Information Systems (GIS) refer to computer-generated databases that combine many layers of data (often gathered by remote sensing) – both natural and human – for a particular location, so that their relationships and interconnectedness can be studied and modelled. GIS models help to show the potential environmental or other effects possible when changes are introduced into a location. See GIS.com – the Guide to Geographic Information Systems – for a complete introduction!

For example, a GIS model of the changes associated with a new highway overpass would show human changes like road alignment, traffic flow, zoning, land ownership, etc., and natural changes like drainage patterns, vegetation changes, wildlife habitat and migration, etc. GIS models are used extensively in environmental impact assessments associated with major development projects. They can also be used to evaluate natural hazards and alert people to past, present and future risks. In the U.S., for instance, you can enter the name of your community and generate a map that shows your flood hazard, earthquake activity, hurricane activity, hail storms, wind storms, or tornadoes … neatly layered GIS data! More and more GIS information is being posted on the web daily (it’s the cutting edge these days); check out your community’s GIS presence online. In many ways Google Earth (have you found yourself yet?) and Google Maps are simplified GIS products, layering maps, satellite images, etc.
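Real GIS software works with georeferenced raster and vector layers, but the core “layering” idea can be sketched in a few lines of toy code. The grids below are invented purely for illustration:

```python
# Two toy "layers" covering the same 3x3 area, one value per cell.
# Stacking layers lets us query relationships between them -- here,
# which inhabited cells fall inside a flood hazard zone.
flood_hazard = [[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]]  # 1 = cell lies in the flood zone

land_use = [["forest", "houses", "houses"],
            ["farm",   "farm",   "houses"],
            ["forest", "farm",   "farm"]]

at_risk = [(row, col)
           for row in range(3) for col in range(3)
           if flood_hazard[row][col] == 1 and land_use[row][col] == "houses"]

print(at_risk)  # [(0, 1), (0, 2), (1, 2)] -- housing inside the flood zone
```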
There is a great YouTube video on how digital mapping is changing our perspectives on the world, from Penn State University: http://www.youtube.com/watch?v=ZdQjc30YPOk&feature=player_embedded

Because Canada has so much land area – much of it uninhabited, but rich in resources – Canadians have been among the world leaders in remote sensing and GIS technologies! If you want a career tip … check out GIS. It is one of the booming employment sectors!

Worth reflecting on … Please watch these two short interviews with Dr. Jennifer Wiseman, head of NASA’s Space Telescope Programme. Dr. Wiseman is an amazing scientist and a strong Christian! You will be encouraged! Feel free to discuss the interviews on the course discussion site (see the syllabus for details …)

To Review … Check out the resources at www.masteringgeography.com

Do not worry about time zones … although this may be interesting to you, you don’t need to know it for this course!

This page is the intellectual property of the author, Bruce Martin, and is copyrighted by Bruce Martin. This page may be copied or printed only for educational purposes by students registered in courses taught by Dr. Bruce Martin. Any other use constitutes a criminal offence.

Scripture quotations marked (NLT) are taken from the Holy Bible, New Living Translation, copyright © 1996. Used by permission of Tyndale House Publishers, Inc., Wheaton, Illinois 60189. All rights reserved.

Mercator projection: “Mercator projection SW” by Strebe – Own work. Licensed under CC BY-SA 3.0 via Commons – https://commons.wikimedia.org/wiki/File:Mercator_projection_SW.jpg#/media/File:Mercator_projection_SW.jpg

Great circle: “Great circle hemispheres” by Jhbdel at en.wikipedia. Licensed under CC BY-SA 3.0 via Commons – https://commons.wikimedia.org/wiki/File:Great_circle_hemispheres.png#/media/File:Great_circle_hemispheres.png

Upside down map: http://www.mapworld.com.au/products/upside-down-world

Latitude-longitude: Illinois State University
In aging and many disease states, the energy production capacity of the body’s cells is diminished. The mitochondria are the structures within the cell responsible for generating energy from oxygen and nutrients. If their number is reduced or their function is impaired, free radicals are produced and damaging toxins accumulate in the cells. These toxins further damage the mitochondria and impair other aspects of cellular function. Many of the most common health problems, such as obesity, diabetes, and many problems associated with aging, arise from problems in cellular energy production.

As one group of researchers has put it, “[a]ging is associated with an overall loss of function at the level of the whole organism that has origins in cellular deterioration. Most cellular components, including mitochondria, require continuous recycling and regeneration throughout the lifespan.”1 Another has observed, “[m]itochondrial biogenesis [the creation of new mitochondria] is a key physiological process that is required for normal growth and development and for maintenance of ongoing cellular energy requirements during aging.”2 These observations link two key aspects of mitochondrial health: preventing and removing damaged mitochondria (mitophagy) and creating new mitochondria (mitogenesis).

Although the importance of the mitochondria as a central point of health has been accepted for decades, over the last few years the understanding of the mechanisms involved has changed significantly. Ten or twenty years ago, antioxidants and the free radical theory of aging largely dominated thinking. Today, the importance of mitochondrial biology in linking basic aspects of aging and the pathogenesis of age-related diseases remains strong, yet the emphasis has changed. The focus has moved to mitochondrial biogenesis and turnover, energy sensing, apoptosis, and senescence.

What Promotes Mitochondrial Biogenesis?

The body maintains a complex network of sensors and signaling functions to maintain stability despite a constantly changing environment and numerous challenges. Of special note is the concept of hormesis, meaning a state in which mild stress leads to compensation that improves the ability of the body to respond in the future to similar challenges. It turns out that many of the approaches that are associated with longevity and healthy aging promote hormesis. In terms of mitochondrial biogenesis, these include caloric restriction, certain nutrient restrictions or shortages, caloric restriction mimetics, and exercise.

Many of the mechanisms that activate mitochondrial biogenesis in the face of hormesis have been elucidated. Keeping in mind that there always must be a balance between the elimination of worn-out and defective mitochondria and the generation of new ones, the activators of both actions can overlap. For instance, low energy levels (caloric restriction) and increased reactive oxygen species/free radicals can promote the activity of special cellular control points. These include metabolic sensors such as AMPK (adenosine monophosphate-activated protein kinase) and the protein known as SIRT1 (sirtuin 1, i.e., silent mating type information regulation 2 homolog 1). Activated AMPK is an indicator that cellular energy is low and serves as a trigger to increase energy production. It inhibits insulin/IGF-1/mTOR signaling, all of which are anabolic and can lead not just to tissue production, such as muscle growth, but also to fat storage.
Along with SIRT1, AMPK activates the biogenesis of new mitochondria to enable the cell to generate more energy. At the same time, activated AMPK and SIRT1 increase the activity of a tumor suppressor that induces mitophagy. The balance of these dual activations replaces defective mitochondria with newly formed, functionally competent mitochondria.

A key to health and healthy aging is to regulate the catabolic processes via controlled amounts and types of stressors such that worn-out mitochondria are removed without overshooting the mark and reducing overall cellular and tissue functionality. The most successful way to maintain this balance is to follow the body’s own natural metabolic signals rather than to attempt to override the body’s checkpoints. AMPK and SIRT1 ultimately are energy/nutrient sensors or control points. Hence, rather than attempting to manipulate these directly, it likely is safer and ultimately more effective to address the factors in the cell that these sensors sense.

The recent attention paid, in the study of aging, to the role of NAD+ (the oxidized form of nicotinamide adenine dinucleotide) is a good example of this principle. Directions coming from the nucleus of the cell that help to regulate the normal production of NAD+, and the ratio between distinct pools found in the cytoplasm and in the mitochondria, decline with age. The changes in the NAD+ from the nucleus lead to a disruption on the mitochondrial side. In terms of energy production, it is a bit like losing a link or two in the timing chain on your car engine, with a resultant reduction in engine efficiency.

To date, attempts to increase NAD+ in cells via supplementation with precursors have not proven particularly successful. Major benefits have been demonstrated in animal models only in the already seriously metabolically impaired or the relatively old. Recent research on oral supplementation has led to at least one extremely difficult article which, at least in this author’s opinion, delivers more smoke than heat.4,5 There is, however, an argument to the effect that supplementing together both nicotinamide riboside (a NAD+ precursor) and a sirtuin activator, such as pterostilbene, may prove to be more successful.

It turns out that there are key points in normal cellular energy generation processes that strongly influence the NAD+ pools available for the cell to draw upon and the rate at which NAD+ can be replaced in these pools. Aging has been shown to promote the decline of nuclear and mitochondrial NAD+ levels and to increase the risk of cancer along with components of the metabolic syndrome. It is significant that the risks of these conditions can be reduced in tandem. Three places to start are 1) the pyruvate dehydrogenase complex, 2) the tricarboxylic acid (TCA) cycle, also known as the Krebs cycle, and 3) the malate shuttle. A fourth junction is Complex I of the electron transport system, again in the mitochondria.6 Manipulation of steps (1) and (2) already is being used in cancer treatment.7 Readily available dietary supplements can influence all four of these metabolic junctions.

Supplements for Promoting Mitochondrial Biogenesis

Medicine has started to pay a great deal of attention to effecting mitochondrial biogenesis through not just drugs, but also dietary supplements. Those interested should go online and look up “Mitochondrial Biogenesis: Pharmacological Approaches” in Current Pharmaceutical Design, 2014, Vol. 20, No. 35.
Quite a few options are mentioned, including well-known compounds, such as R-lipoic acid (including with L-carnitine), quercetin and resveratrol, along with still-obscure supplements, including various triterpenoids and the Indian herb Bacopa monnieri.

Pomegranate, French White Oak and Walnuts

The pomegranate, with its distinctive scarlet rind (pericarp) and vibrantly colored seed cases (arils), is one of the oldest cultivated fruits in the world. This exotic fruit features prominently in religious texts and mythological tales and has been revered through the ages for its medicinal properties. An image of a pomegranate even can be found on the shield of the British Royal College of Medicine. Numerous studies have demonstrated the benefits of the fruit for cardiovascular health, with other benefits suggested in areas ranging from arthritis to stability of cell replication to bone health.

Now a study in Nature Medicine (July 2016) has uncovered perhaps the most important benefit of all: the ability of pomegranate compounds (ellagitannins), transformed by gut bacteria, to protect the mitochondria of the muscles and perhaps other tissues against the ravages of aging. The mitochondria are the energy generators of the cells, and the weakening of this energy-generating function in an increasing percentage of mitochondria as we age is a primary source of physical decline over the years. Urolithin A, a byproduct of gut bacterial action on pomegranate compounds, allows the body to recycle defective mitochondria and thereby slow, or even reverse for a time, some of the major aspects of aging. The lifespan in a nematode model of aging was increased by more than 45 percent. Older mice in a rodent model of aging exhibited 42 percent better exercise endurance. Younger mice also realized several significant benefits.8

Beginning almost three decades ago, there were numerous speculations in the research world regarding the so-called “French Paradox”: the French consumed quite large amounts of saturated fat in the form of butter and cheese, yet consistently experienced much lower rates of cardiovascular disease than did Americans. Not only that, the French, especially in the southwest of the country, typically led longer lives, even in the areas noted for consuming large amounts of goose fat and pâté de foie gras, which is to say, not just where the Mediterranean diet based on olive oil prevails. One hypothesis put forth very early on was that it was the French consumption of red wine that protected them. It was thought that red wine components, including anthocyanidins, proanthocyanidins and resveratrol, were the protective compounds.

Not considered until recently is that French red wines traditionally have been aged in casks made from white oak (Quercus robur). White oak contains roburin A, a dimeric ellagitannin chemically related to punicalagin. Human data show relatively good absorption and conversion of roburins into substances including urolithin A and ellagic acid, as compared with ellagitannins in general, which evidence only poor absorption. Hence, the benefits of good red wine traditionally produced, and of good cognac (also aged in oak barrels), involve urolithin A. Notably, the benefits of roburins, most likely derived from the conversion to urolithin A, go beyond mitophagy to include ribosomes, the cell components that translate DNA instructions into specific proteins.

Other sources of ellagitannins have been shown to lead to the production of urolithin A by bacteria in the human gut.
Not surprisingly, sources of ellagitannins are foods long associated with good health and longevity, including not just pomegranate and oak-aged red wine, but also walnuts (and a smattering of other nuts), strawberries, raspberries, blackberries, cloudberries and even black tea in small amounts.

Exercise and Pyrroloquinoline Quinone (PQQ)

Peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α) is the master regulator of mitochondrial biogenesis.13 Exercise is perhaps the most significant activator of PGC-1α that most individuals can access. Exercise, furthermore, promotes mitochondrial biogenesis through a number of other pathways, especially endurance and interval training.14 There are non-exercise options. You can’t take PGC-1α orally because it is a large protein molecule which does not survive digestion. PQQ, however, is a small molecule that is available when ingested and that increases circulating PGC-1α. PQQ supplementation leads to more mitochondria and more energy.15

Fasting, Ketogenic Diets and Fasting-Mimicking Supplements

As already discussed, fasting promotes mitochondrial biogenesis by AMPK activation.16 AMPK senses the energy status of the cell and responds both to acute shortages, such as that induced by exercise, and to chronic shortages, such as from fasting. Probably due to an overall reduction in metabolic rate, chronic caloric restriction (as opposed to intermittent fasting) contributes to the health of mitochondria rather than biogenesis.17 The robustness of the AMPK response decreases with age.18 Ketogenic diets (very low carbohydrate diets) also promote increases in mitochondria.19 Few individuals are willing or able to follow ketogenic diets chronically, just as few individuals are willing to undergo routine fasts.

Fasting-mimicking supplements offer an alternative approach. The dietary supplement (-)-hydroxycitric acid (HCA) is the best researched of these compounds. (Keep in mind that there is a vast difference in the efficacy of commercially available forms.20) Researchers have proposed that HCA used properly can activate mitochondrial uncoupling proteins and related mechanisms.21 Furthermore, according to a study published in the journal Free Radical Research in 2014, HCA improves antioxidant status and mitochondrial function, plus reduces inflammation, in fat cells.22 Inflammation is linked to the metabolic syndrome at the cellular level by way of damage to the antioxidant enzyme system (e.g., superoxide dismutase, glutathione peroxidase, glutathione reductase) and mitochondria. This damage, in turn, propagates further production of pro-inflammatory mediators (e.g., TNF-α, MCP-1, IFN-γ, IL-10, IL-6, IL-1β). HCA protected fat cells from ER stress by improving the antioxidant status to reduce oxidative stress (i.e., reduce ROS) and improve the function of the mitochondria, short-circuiting an ER stress-inflammation loop in these cells. Reducing TNF-α is important in that doing so removes a major impediment to mitochondrial biogenesis.23

Other Supplements to Promote Mitochondrial Biogenesis

Scholarly reviews looking at natural compounds, such as those found in anti-aging diets, suggest yet other supplements to promote mitobiogenesis.
For instance, it turns out that hydroxytyrosol, the most potent and abundant antioxidant polyphenol in olives and virgin olive oil, is a potent activator of AMPK and an effective nutrient for stimulating mitochondrial biogenesis and function via what is known as the PGC-1α pathway.24 Another herb with anti-aging effects, this time by activating the malate shuttle mechanism mentioned above, is rock lotus (Shi Lian Hua). This herb has been described in detail in this magazine in the article “Uncovering the Longevity Secrets of the ROCK LOTUS.”25

It is possible to improve the functional capacity of the mitochondria through dietary practices, exercise and supplements. Indeed, a number of compounds have been identified by researchers as mitochondrial nutrients. These compounds work together to increase the efficiency of energy production, to reduce the generation of free radicals, and so forth. Likewise, these nutrients have been shown to improve the age-associated decline of memory, improve mitochondrial structure and function, inhibit the age-associated increase of oxidative damage, elevate the levels of antioxidants, and restore the activity of key enzymes. Perhaps best of all, the body can be encouraged both to remove damaged mitochondria (mitophagy) and to create new ones, which is to say, mitochondrial biogenesis.

References
1. López-Lluch G, Irusta PM, Navas P, de Cabo R. Mitochondrial biogenesis and healthy aging. Exp Gerontol. 2008 Sep;43(9):813–9.
2. Stefano GB, Kim C, Mantione K, Casares F, Kream RM. Targeting mitochondrial biogenesis for promoting health. Med Sci Monit. 2012 Mar;18(3):SC1-
3. Gonzalez-Freire M, de Cabo R, Bernier M, Sollott SJ, Fabbri E, Navas P, Ferrucci L. Reconsidering the role of mitochondria in aging. J Gerontol A Biol Sci Med Sci. 2015 Nov;70(11):1334–42.
4. Trammell SA, Schmidt MS, Weidemann BJ, Redpath P, Jaksch F, Dellinger RW, Li Z, Abel ED, Migaud ME, Brenner C. Nicotinamide riboside is uniquely and orally bioavailable in mice and humans. Nat Commun. 2016 Oct 10;7:12948.
5. Mitteldorf J. Nicotinamide riboside – where’s the beef? http://joshmitteldorf.scienceblog.com/2014/11/17/nicotinamide-riboside-wheres-thebeef/.
6. Yang Y, Sauve AA. NAD+ metabolism: bioenergetics, signaling and manipulation for therapy. Biochim Biophys Acta. 2016 Dec;1864(12):1787–1800.
7. Schwartz L, Buhler L, Icard P, Lincet H, Steyaert JM. Metabolic treatment of cancer: intermediate results of a prospective case series. Anticancer Res. 2014 Feb;34(2):973–80.
8. Ryu D, Mouchiroud L, Andreux PA, Katsyuba E, Moullan N, Nicolet-Dit-Félix AA, Williams EG, Jha P, Lo Sasso G, Huzard D, Aebischer P, Sandi C, Rinsch C, Auwerx J. Urolithin A induces mitophagy and prolongs lifespan in C. elegans and increases muscle function in rodents. Nat Med. 2016 Aug;22(8):879–88.
9. Pellegrini L, Belcaro G, Dugall M, Corsi M, Luzzi R, Hosoi M. Supplementary management of functional, temporary alcoholic hepatic damage with Robuvit® (French oak wood extract). Minerva Gastroenterol Dietol. 2016 Sep;62(3):245–52.
10. Vinciguerra MG, Belcaro G, Cacchio M. Robuvit® and endurance in triathlon: improvements in training performance, recovery and oxidative stress. Minerva Cardioangiol. 2015 Oct;63(5):403–9.
11. Országhová Z, Waczulíková I, Burki C, Rohdewald P, Ďuračková Z. An effect of oak-wood extract (Robuvit®) on energy state of healthy adults – a pilot study. Phytother Res. 2015 Aug;29(8):1219–24.
12. Natella F, Leoni G, Maldini M, Natarelli L, Comitato R, Schonlau F, Virgili F, Canali R. Absorption, metabolism, and effects at transcriptome level of a standardized French oak wood extract, Robuvit, in healthy volunteers: pilot study. J Agric Food Chem. 2014 Jan 15;62(2):443–53.
13. Ventura-Clapier R, Garnier A, Veksler V. Transcriptional control of mitochondrial biogenesis: the central role of PGC-1alpha. Cardiovasc Res. 2008 Jul 15;79(2):208–17.
14. Wright DC, Han DH, Garcia-Roves PM, Geiger PC, Jones TE, Holloszy JO. Exercise-induced mitochondrial biogenesis begins before the increase in muscle PGC-1alpha expression. J Biol Chem. 2007 Jan 5;282(1):194–9.
15. Bauerly K, Harris C, Chowanadisai W, Graham J, Havel PJ, Tchaparian E, Satre M, Karliner JS, Rucker RB. Altering pyrroloquinoline quinone nutritional status modulates mitochondrial, lipid, and energy metabolism in rats. PLoS One. 2011;6(7):e21779.
16. Zong H, Ren JM, Young LH, Pypaert M, Mu J, Birnbaum MJ, Shulman GI. AMP kinase is required for mitochondrial biogenesis in skeletal muscle in response to chronic energy deprivation. Proc Natl Acad Sci U S A. 2002 Dec 10;99(25):15983–7.
17. Lee CM, Aspnes LE, Chung SS, Weindruch R, Aiken JM. Influences of caloric restriction on age-associated skeletal muscle fiber characteristics and mitochondrial changes in rats and mice. Ann N Y Acad Sci. 1998 Nov 20;854:182–91.
18. Jornayvaz FR, Shulman GI. Regulation of mitochondrial biogenesis. Essays Biochem. 2010;47:69–84.
19. Bough KJ, Rho JM. Anticonvulsant mechanisms of the ketogenic diet. Epilepsia. 2007 Jan;48(1):43–58.
20. Louter-van de Haar J, Wielinga PY, Scheurink AJ, Nieuwenhuizen AG. Comparison of the effects of three different (-)-hydroxycitric acid preparations on food intake in rats. Nutr Metab (Lond). 2005 Sep 13;2:23.
21. McCarty MF. High mitochondrial redox potential may promote induction and activation of UCP2 in hepatocytes during hepatothermic therapy. Med Hypotheses. 2005;64(6):1216–9.
22. Nisha VM, Priyanka A, Anusree SS, Raghu KG. (-)-Hydroxycitric acid attenuates endoplasmic reticulum stress-mediated alterations in 3T3-L1 adipocytes by protecting mitochondria and downregulating inflammatory markers. Free Radic Res. 2014 Nov;48(11):1386–96.
23. Valerio A, Cardile A, Cozzi V, Bracale R, Tedesco L, Pisconti A, Palomba L, Cantoni O, Clementi E, Moncada S, Carruba MO, Nisoli E. TNFalpha downregulates eNOS expression and mitochondrial biogenesis in fat and muscle of obese rodents. J Clin Invest. 2006 Oct;116(10):2791–8.
24. Liu J, Shen W, Zhao B, Wang Y, Wertz K, Weber P, Zhang P. Targeting mitochondrial biogenesis for preventing and treating insulin resistance in diabetes and obesity: hope from natural mitochondrial nutrients. Adv Drug Deliv Rev. 2009 Nov 30;61(14):1343–52.
F.W. De Klerk, South African Politician

Frederik Willem de Klerk was born March 18, 1936, in Johannesburg, South Africa. He was educated at Potchefstroom University, where he received a law degree, and began practicing law in Vereeniging. He became active in the National Party and, in 1972, entered the South African Parliament. He served in the cabinets of B.J. Vorster and P.W. Botha. He established a reputation as an ultra-conservative, but when he came to power he began dismantling apartheid: he lifted the ban on the African National Congress and, days later, released Nelson Mandela from jail. Together with Nelson Mandela, De Klerk received the Nobel Prize for Peace in 1993.

Why FW de Klerk let Nelson Mandela out of prison

After 27 years in captivity, Nelson Mandela did not want to be set free straight away. Two days before his release, the world’s most famous political prisoner was taken to see President FW de Klerk in his Cape Town office. The president got a surprise. “I told him he would be flown to Johannesburg and released there on 11 February 1990. Mr Mandela’s reaction was not at all as I had expected,” said De Klerk. “He said: ‘No, it is too soon, we need more time for preparation.’ That is when I realised that long hours of negotiation lay ahead with this man.”

Twenty years after the event, sitting in the study of his Cape Town home, Frederik Willem de Klerk, now 73, still has the headmasterly style and deliberate speech that the watching world came to know as he played a crucial role in dismantling apartheid. But the winner of the 1993 Nobel peace prize still recalls the enormous leap of faith that was required to negotiate the end of white minority rule with what he describes as the “fundamentally socialistic” African National Congress of the time.

Just after 4pm on the date appointed by De Klerk, Mandela, then 71, walked free, holding the hand of his wife, Winnie. The prisoner had lost his argument for a later release date but had persuaded De Klerk to allow him to leave directly from Victor Verster prison, in Paarl, near Cape Town. Mandela held up his fist in an ANC salute. In an instant he switched from being a symbol of the oppressed to the global symbol of courage and freedom that he remains today.

Mandela’s release did not signal the end of apartheid. In fact, the white-ruled pariah state was entering the most dangerous chapter in its history since the introduction of racial separateness in 1948. Four hours after leaving prison, Mandela arrived in Cape Town to address thousands of people gathered outside city hall. The impatient crowd had clashed with police and bullets had been fired. But Mandela did not bring a message of appeasement. “The factors which necessitated armed struggle still exist today,” he told the cheering onlookers. Mandela called on the international community to maintain its sanctions. “I have cherished the ideal of a democratic and free society in which all persons live together in harmony and with equal opportunities. I hope to live to see the achievement of that ideal. But if need be, it is an ideal for which I am prepared to die,” he shouted. With hindsight, Mandela used the fiery address to take up a negotiating position and convince the black majority that he had not made a secret pact with the authorities.
De Klerk had his moment of truth nine days earlier, in an address to the all-white parliament that coined the phrase “a new South Africa”. “There were gasps in the house, yes,” said De Klerk, “but not at the news of Mr Mandela’s release. The gasps came when I announced the unbanning not only of the ANC but also the South African Communist party and of all affiliated organisations, which included the armed wing of the ANC, Umkhonto we Sizwe. There were gasps then and, from the far-right party, protests and boos.”

De Klerk speaks slowly and clearly – and charmlessly. He is a lawyer from a strict, Calvinist tradition in which displays of emotion are seen as a sign of weakness. His one quirk seems to be the incessant chewing of gum. He has lived in this modern house in Fresnaye for 18 months, having moved into Cape Town with his second wife, Elita, from his farm in Paarl. He points out that, from his garden, he has a view of Robben Island, where Mandela spent 18 years in prison. It is a fact. He does not reveal whether it leaves him hot or cold.

But radical change requires steely nerves. De Klerk had become president in September 1989, the son of a National party cabinet minister and the nephew of a prime minister. He grew up with Afrikaner fear in his DNA – the dread that after 400 years on the tip of Africa and the struggle against British colonial rule, his Huguenot descendants would be chased into the sea by the black majority. That fear contributed to policies that built his nation – forced removals to create racially segregated areas and blacks being deprived of their citizenship. It led to “passbooks”, introduced to restrict black people’s movements beyond those that were necessary to the economy, and separate beaches, buses, hospitals, schools, universities and lavatories for blacks, whites, mixed-race “coloureds” and Indians.

As he prepared his 2 February speech at his holiday home in Hermanus in the Western Cape, De Klerk claims he had no confidant. “My predecessor, PW Botha, had an inner circle and I did not like it. I preferred decisions to evolve out of cabinet discussions. That way we achieved real co-ownership of our policies.” He says his consultative style was a break with National party culture. But he also claims – in a line of argument that allows him to avoid condemning apartheid outright – that the system unravelled through a gradual process. Even today, he admits only that international sanctions against South Africa “from time to time kept us on our toes”.

In 1959 prime minister Hendrik Verwoerd’s government divided black South Africans into eight ethnic groups and allocated them “homelands” – nations within the nation. The move was a cornerstone of an Afrikaner nationalist dream to create a republic, but it led to international isolation. De Klerk was a vigorous supporter. “I wanted us to take a more adventurous approach to the nation state concept, but the project ultimately failed because the whites wanted to keep too much land for themselves.

“The third phase – which coincided with my entering cabinet but was not started by me – was a shift towards reform. It focused on making separate development more acceptable while still believing it was just. But by the early 1980s we had ended up in a dead-end street in which a minority would continue to hold the reins of power and blacks, outside the homelands, really did not have any meaningful political rights. We had become too economically inter-dependent. We had become an omelette that you could not unscramble.”
In 1986 the National party abandoned the concept of separate development. “We embraced the idea of a united South Africa with equal political rights for all, but with very effective protection of minorities. Then my predecessor lost his enthusiasm. When I took over, my task was to flesh out what was already a fairly clear vision, but we needed broad support. We needed negotiation.”

De Klerk moved quickly. In October 1989, a month after succeeding Botha, he released Mandela’s political mentor, Walter Sisulu, and seven other prominent Robben Island prisoners. De Klerk says: “When I first met Mandela we did not discuss anything of substance, we just felt each other out. He spent a long time expressing his admiration for the Boer generals and how ingenious they were during the Anglo-Boer war. We did not discuss the fundamental problems or our political philosophies at all.

“Later, during the negotiations, it became clear that there was a big divide. On the economic side, the ANC was fundamentally socialistic, the influence of the Communist party was pervasive and they wanted nationalisation. They also wanted to create an unelected government of national unity which would organise elections. We insisted on governing until a new constitution had been negotiated and adopted by parliament.”

De Klerk’s successive negotiated victories potentially saved South Africa from the post-colonial governance void suffered by many other countries on the continent. They also entrenched minority rights constitutionally and set the country on a capitalist path. “The government that came into power after the April 1994 elections was going to need a budget. It was drafted by our finance minister, Derek Keys, and he convinced them of the necessity to stay within the free-market principles that had been in force in South Africa for decades. The ANC has stuck to these principles and that is one of the great positives.”

He worries that the left wing of the governing alliance – which supported President Jacob Zuma’s offensive to oust Thabo Mbeki in 2008 – will win its current campaign for payback. De Klerk, who retired as deputy president in 1997, also believes South Africa is ripe for a political shake-up, maybe as soon as next year’s municipal elections. “You cannot say we are a healthy, dynamic democracy when one party wins almost two-thirds of the vote. We need a realignment in politics. I am convinced there will be further splits in the ANC because you cannot keep together people who believe in hardline socialism and others who have become convinced of free-market principles. The 2011 elections will be the opportunity for some much-needed shock therapy. I hope people at those elections will use their right to vote less with emotion and more through reason to express their concerns about the failure of service delivery.”

The foundation he runs in Cape Town officially exists to defend the constitution but places a strong focus on minority rights – those of Afrikaners and the Afrikaans-speaking “coloured” population. “The ANC has regressed into dividing South Africa again along the basis of race and class. We see an attitude in which for certain purposes all people of colour are black, but for other purposes black Africans have a more valid case in the field of, for example, affirmative action than do brown or Indian South Africans. The legacy of Mandela – reconciliation – urgently needs to be revived.” He says some whites still accuse him of having given the country away.
Asked what would have happened had he not made the 2 February speech, De Klerk has a ready answer. “To those people I say it is a false comparison to look at what was good in the old South Africa against what is bad today. If we had not changed in the manner we did, South Africa would be completely isolated. The majority of people in the world would be intent on overthrowing the government. Our economy would be non-existent – we would not be exporting a single case of wine and South African planes would not be allowed to land anywhere. Internally, we would have the equivalent of civil war.”

The End of Apartheid

Apartheid, the Afrikaans name given by white-ruled South Africa’s Nationalist Party in 1948 to the country’s harsh, institutionalized system of racial segregation, came to an end in the early 1990s in a series of steps that led to the formation of a democratic government in 1994. Years of violent internal protest, weakening white commitment, international economic and cultural sanctions, economic struggles, and the end of the Cold War brought down white minority rule in Pretoria. U.S. policy toward the regime underwent a gradual but complete transformation that played an important, if conflicted, role in Apartheid’s initial survival and eventual downfall.

Although many of the segregationist policies dated back to the early decades of the twentieth century, it was the election of the Nationalist Party in 1948 that marked the beginning of legalized racism’s harshest features, called Apartheid. The Cold War was then in its early stages. U.S. President Harry Truman’s foremost foreign policy goal was to limit Soviet expansion. Despite supporting a domestic civil rights agenda to further the rights of black people in the United States, the Truman Administration chose not to protest the anti-communist South African government’s system of Apartheid in an effort to maintain an ally against the Soviet Union in southern Africa. This set the stage for successive administrations to quietly support the Apartheid regime as a stalwart ally against the spread of communism.

Inside South Africa, riots, boycotts, and protests by black South Africans against white rule had occurred since the inception of independent white rule in 1910. Opposition intensified when the Nationalist Party, assuming power in 1948, effectively blocked all legal and non-violent means of political protest by non-whites. The African National Congress (ANC) and its offshoot, the Pan Africanist Congress (PAC), both of which envisioned a vastly different form of government based on majority rule, were outlawed in 1960 and many of their leaders imprisoned. The most famous prisoner was a leader of the ANC, Nelson Mandela, who had become a symbol of the anti-Apartheid struggle. While Mandela and many political prisoners remained incarcerated in South Africa, other anti-Apartheid leaders fled South Africa and set up headquarters in a succession of supportive, independent African countries, including Guinea, Tanzania, Zambia, and neighboring Mozambique, where they continued the fight to end Apartheid. It was not until the 1980s, however, that this turmoil effectively cost the South African state significant losses in revenue, security, and international reputation.

The international community had begun to take notice of the brutality of the Apartheid regime after white South African police opened fire on unarmed black protesters in the town of Sharpeville in 1960, killing 69 people and wounding 186 others.
The United Nations led the call for sanctions against the South African Government. Fearful of losing friends in Africa as de-colonization transformed the continent, powerful members of the Security Council, including Great Britain, France, and the United States, succeeded in watering down the proposals. However, by the late 1970s, grassroots movements in Europe and the United States succeeded in pressuring their governments into imposing economic and cultural sanctions on Pretoria. After the U.S. Congress passed the Comprehensive Anti-Apartheid Act in 1986, many large multinational companies withdrew from South Africa. By the late 1980s, the South African economy was struggling with the effects of the internal and external boycotts as well as the burden of its military commitment in occupying Namibia.

Defenders of the Apartheid regime, both inside and outside South Africa, had promoted it as a bulwark against communism. However, the end of the Cold War rendered this argument obsolete. South Africa had illegally occupied neighboring Namibia at the end of World War II, and since the mid-1970s, Pretoria had used it as a base to fight the communist party in Angola. The United States had even supported the South African Defense Force’s efforts in Angola. In the 1980s, hard-line anti-communists in Washington continued to promote relations with the Apartheid government despite economic sanctions levied by the U.S. Congress. However, the relaxation of Cold War tensions led to negotiations to settle the Cold War conflict in Angola. Pretoria’s economic struggles gave the Apartheid leaders strong incentive to participate. When South Africa reached a multilateral agreement in 1988 to end its occupation of Namibia in return for a Cuban withdrawal from Angola, even the most ardent anti-communists in the United States lost their justification for support of the Apartheid regime.

The effects of the internal unrest and international condemnation led to dramatic changes beginning in 1989. South African President P.W. Botha resigned after it became clear that he had lost the faith of the ruling National Party (NP) for his failure to bring order to the country. His successor, F.W. de Klerk, in a move that surprised observers, announced in his opening address to Parliament in February 1990 that he was lifting the ban on the ANC and other black liberation parties, allowing freedom of the press, and releasing political prisoners. The country waited in anticipation for the release of Nelson Mandela, who walked out of prison after 27 years on February 11, 1990.

The impact of Mandela’s release reverberated throughout South Africa and the world. After speaking to throngs of supporters in Cape Town, where he pledged to continue the struggle but advocated peaceful change, Mandela took his message to the international media. He embarked on a world tour, culminating in a visit to the United States, where he spoke before a joint session of Congress.

On This Day in History: F.W. de Klerk was Sworn in as President of South Africa

F.W. de Klerk was born in Johannesburg, South Africa, on March 18, 1936. After obtaining a law degree from Potchefstroom University, he began his political career in 1972, when he was elected to parliament as a member of the National Party. The National Party, which birthed apartheid, came into power in 1948 by promoting Afrikaner culture and the dominance of Afrikaans-speaking white South Africans.
The goal was to make Afrikaners superior not only to Black South Africans but also to English-speaking South Africans. The National Party was successful not only in stripping Black South Africans of rights but also in initiating South Africa’s departure from the British Commonwealth in 1961. The National Party continued to keep control for 33 years after South Africa became a republic, and eventually elected P.W. Botha, de Klerk’s predecessor, as South African Prime Minister in 1978 on the promise of upholding apartheid.

Although he legalized interracial marriages in the 1980s, the majority of Botha’s policies offered mere lip service to improving race relations. For example, in 1984, Botha helped form a new constitution, which allowed for three separate parliamentary chambers – one for White South Africans, one for Coloured South Africans, and one for Indian South Africans – while excluding Black South Africans entirely. The true purpose of this change was still to keep White South Africans in power, as their chamber had more seats than the other two combined. Additionally, he granted independence to some of the Black South Africans’ assigned “homelands” (reservations), with the goal of keeping Black and White South Africans separate, with White South Africans in control.

Opposition to the Nationalist Party and Botha continued to rise in intensity through the African National Congress’s (ANC) advocacy for the rights of Black South Africans. The ANC, founded in 1912, was banned in the 1960s by the National Party, and the struggles between the two groups became violent.

Eleven years after taking office, Botha became ill. After suffering a stroke, he named de Klerk the leader of the National Party while still retaining the presidency. Botha became more ill, difficult, and forgetful until his own cabinet and the National Party forced him to resign. F.W. de Klerk became acting president of South Africa on August 15, 1989, and was elected president for a five-year term on September 14, 1989. He was sworn into the presidency on September 20, 1989, marking the beginning of a new South Africa.

There was nothing in de Klerk’s background that would have suggested he would reform the country. He served under Botha in various high-ranking positions, and the National Party knew him as someone who sided with the verkramptes (“unenlightened” National Party members who opposed liberal changes, such as reforming apartheid), although he considered himself moderate. However, de Klerk had already decided that he would be the one to end apartheid. He knew that apartheid wouldn’t last forever, and dictatorships all over the world, such as the Soviet Union, were collapsing. He believed it would be best to end the system of apartheid as quickly as possible, like “cutting off the tail of a dog in one fell swoop.” After being elected president, he lifted the ban on protest marches and began to release political prisoners. He began negotiations with Black South African leaders, including the still-imprisoned Nelson Mandela, avoiding a possible civil war.

In an interview about his choice to end apartheid, F.W. de Klerk said: “For many years I supported the concept of separate states. I believed it could bring justice for everyone, including the blacks who would determine their own lives inside their own states. But by the early 1980s, I had concluded this would not work and was leading to injustice and that the system had to change.
I still believed, in 1990, that the independent states had a place, but in the end the ANC had put so much pressure on them that they didn't want to go on." F.W. de Klerk took time during his 1989 Christmas vacation to discern how to unify South Africa and end apartheid. He says of this time that he had "long come to the realisation that we were involved in a downward spiral of increasing violence and we could not hang on indefinitely. We were involved in an armed struggle where there would be no winners. The key decision I had to take now, for myself, was whether to make a paradigm shift." By the end of this vacation, he had decided he needed to make the shift. His speech on February 2, 1990, and the freeing of Nelson Mandela, would make it a reality. The speech preceding Mandela's release shocked the entire world. De Klerk unbanned the ANC and other similar parties, released all political prisoners, and promised a future of democracy with rights for all citizens, Black and White alike. Nine days after de Klerk's February 2 speech, Nelson Mandela was released after 27 years of imprisonment.

Why has history decided to judge F.W. de Klerk so lightly?

When one thinks of Frederik Willem de Klerk, the 7th state president of South Africa, one almost instantly thinks of the image of him and Nelson Mandela holding hands in the air. Our hearts are filled with joy as we recall how crucial this partnership was in building a reconciliatory, democratic South Africa. Students are taught in history classes that he was the National Party head who had the guts to stand up to the hard-liners in his party and point out the deeply flawed nature of the Apartheid government. Moreover, he emphasised the need to engage in dialogue with the ANC to build a democratic society. These are the exploits that won him the Nobel Peace Prize. There are a few nuances to this historical picture of De Klerk which I feel are ignored. Firstly, the fact that he was able to stand up to hard-liners within the National Party as its leader carries within it an obvious but overlooked truth: he himself was part of the National Party. As such, he was in favour of the principles for which this morally bankrupt political party stood. These included the notion that black South Africans were not South Africans in the first place, and that as a result there should be separate development. They also included the notion that, because people of colour were deemed inferior to the supposedly superior white race, they did not require the same standard of living as their white counterparts. The fact that he aligned himself with a party which held these views must make us question the rosy picture of this figure which we as South Africans have decided to paint. Before he became state president he was Minister of Education in the National Party government. During this tenure he was notorious for telling white students to spy on their teachers. He is quoted as saying that if these teachers were spreading progressive ideas or agendas (such as the view that the apartheid system was morally reprehensible), they should be reported to the relevant authorities. Does this sound like a figure worthy of such acclaim within our history books and of a Nobel Peace Prize? I will leave that question to you, the reader, to ponder. To me it has almost seemed as if we have applauded De Klerk simply for gaining a conscience and realising that Apartheid, a system he helped entrench, was morally bankrupt.
And to make matters worse, it seems De Klerk has not even done that. In an interview with the BBC in 2012, he is quoted as saying: "What I haven't apologised for is the original concept of seeking to bring justice to all South Africans through the concept of nation states." Statements such as these unequivocally show that De Klerk believed in the notion of separate development and held merely that it was poorly implemented in the South African context. Throughout the interview De Klerk defended the concept of "separate but equal" nation states. Later in the interview De Klerk repudiated the effects of apartheid, but not the concept. I believe that as South Africans we need to have an honest conversation with ourselves about how we remember our former president and how we choose to engage with his legacy. Because, judging by sentiments such as those shown in his BBC interview, he is not, in my view, a man who truly espouses the values of an inclusive, representative and unified democratic South Africa. Mikhail Petersen holds a Bachelor of Social Science degree in Politics and Economic History as well as an LLB from UCT. Mikhail is an intern in the Sustained Dialogue Programme at the Institute for Justice and Reconciliation, based in Cape Town.

F. W. de Klerk

F. W. de Klerk is a South African politician. He served as State President of South Africa from 15 August 1989 to 10 May 1994, and as Deputy President of South Africa from 10 May 1994 to 30 June 1996. He was awarded the Nobel Prize for Peace in 1993, together with Nelson Mandela. He was the seventh and last head of state of South Africa of the apartheid era. He is the son of Hendrina Cornelia (Coetzer) and the politician Johannes de Klerk. He is married to Elita Georgiadis. He has three children with his former wife, Marike Willemse. "The Peacemakers" were named Time magazine's Person of the Year for 1993; F. W. was one of the four people chosen to share that title, along with Yasser Arafat, Nelson Mandela, and Yitzhak Rabin. The surname de Klerk is derived from Le Clerc, Le Clercq, and de Clercq, and is of French Huguenot origin (meaning "clergyman" or "literate" in Old French). The surname Coetzer came from his ancestor Kutzer, whose family stemmed from Austria. Some research suggests that F. W. has Finnish and Italian ancestry; it is not clear whether these ancestries have been verified or documented. F. W. is a half-third cousin, once removed, of Namibian model Behati Prinsloo. F.W.'s maternal great-great-grandmother, Anna Sophia Erasmus, was also Behati's paternal great-great-great-grandmother. F. W.'s patrilineal ancestry can be traced back to his 10th great-grandfather, Étienne le Clercq, a French Huguenot. A few of F. W.'s remote ancestors were slaves from Africa, India, Indonesia, and Madagascar. His 10th great-grandmother, Krotoa (also known as Eva), was a Khoikhoi interpreter. F. W.'s paternal grandfather was Willem Johannes de Klerk (the son of Barend Jacobus de Klerk and Maria Jacoba Grobler). Willem was born in Burgersdorp, Drakensberg District, Eastern Cape. Barend was the son of Johannes Cornelis de Klerk and Martha Margaretha Schoeman. Maria was the daughter of Jacobus Johannes Grobler and Johanna Susanna Lasya Coetzee. F. W.'s paternal grandmother was Aletta Johanna "Lettie" van Rooy (the daughter of Johannes Cornelis van Rooy and Aletta Johanna Smit). F. W.'s grandmother Aletta was born in Burgersdorp, Drakensberg District, Eastern Cape.
F. W.'s great-grandfather Johannes was the son of Johannes Cornelis van Rooy and Anne Françoise Holsters. F. W.'s great-grandmother Aletta was the daughter of Jacobus Albertus Smit and Aletta Johanna Smit. Jacobus and F. W.'s great-great-grandmother Aletta were both born with the same surname. F. W.'s maternal grandfather was Frederik Willem Coetzer (the son of Jacob Erasmus Coetzer and Elizabeth Catharina Jacoba Johanna Buitendag). F. W.'s grandfather Frederik was born in Bloemfontein, Motheo, Free State. Jacob was the son of Jacob Coetzer and Anna Sophia Erasmus. Elizabeth was the daughter of Carel Hendrik Buitendag and Maria Magdalena de Beer. F. W.'s maternal grandmother was Anna Cecilia Fouchè (the daughter of Jacobus Paulus Fouché and Cornelia Hendrina Strydom). Anna was born in Rouxville, Xhariep, Free State. Jacobus was the son of Gustavus Wilhelmus Fouché and Johanna Swanepoel. Cornelia was the daughter of Adriaan Stephanus Strydom and Elizabeth Johanna Maria Charlotte Swanepoel. F. W.'s matrilineal ancestry can be traced back to his 5th great-grandmother, Jacoba Johanna Kruger. He has an estimated net worth of about $46 million, making him one of the richest politicians in South Africa. His honors include:
- Order of Mapungubwe, State President of the Republic of South Africa
- Co-Recipient of Nobel Peace Prize with Nelson Mandela
- Philadelphia Liberty Medal, President Bill Clinton, USA
- Prix de Courage International, France
- Co-Recipient of UNESCO Houphouet-Boigny Peace Prize with Nelson Mandela
- Honorary LLD, University of Potchefstroom
- Honorary DPhil, University of Stellenbosch
- Decoration for Meritorious Service, State President of SA
- Honorary LLD, Bar-Ilan University
- Honorary DPhil, National University

Person of the Year: A Photo History
[Photographs: De Klerk by William F. Campbell; Mandela by Selwyn Tait]

F.W. de Klerk and Nelson Mandela joined Yasser Arafat and Yitzhak Rabin as the 1993 Men of the Year. Mandela and De Klerk were international symbols of the end of apartheid. As a leader of the African National Congress and a participant in the struggle to overthrow apartheid, Mandela spent 27 years as a political prisoner. When De Klerk assumed the presidency of South Africa in September 1989, he began to change the system of apartheid and abolish discriminatory laws. On February 11, 1990, De Klerk released Mandela from prison. Four years later, South Africa held its first democratic elections and Mandela was the overwhelming winner. "The exact nature of what Mandela and De Klerk together have achieved may not be clear for many years," TIME wrote. "The nation they share has an explosive history of racial, ethnic and tribal violence. If the chain of events they have set in motion leads to the conclusion they both want, then the future will write of them, that these were leaders who seized their days and actually dared to lead."
(1/3/94)

The Death Toll of Apartheid

Verifiable statistics on the human cost of apartheid are scarce, and estimates vary. However, in his often-cited book A Crime Against Humanity, Max Coleman of the Human Rights Committee places the number of deaths due to political violence during the apartheid era as high as 21,000. The dead were almost exclusively Black, and most were killed during especially notorious bloodbaths such as the Sharpeville Massacre of 1960 and the Soweto Student Uprising of 1976–1977.
by Murray N. Rothbard

Leading the cultural struggle in America was H.L. Mencken, undoubtedly the single most influential intellectual of the 1920s; a notable individualist and libertarian, Mencken sailed into battle with characteristic verve and wit, denouncing the stodgy culture and the "Babbittry" of businessmen, and calling for unrestricted freedom of the individual. For Mencken, too, it was the trauma of World War I, and its domestic and foreign evils, that mobilized and intensified his concern for politics, a concern aggravated by the despotism of Prohibition, surely the greatest single act of tyranny ever imposed in America. Nowadays, when Prohibition is considered a "right-wing" movement, it is forgotten that every reform movement of the nineteenth century, every moralistic group trying to bring the "uplift" to America by force of law, included Prohibition as one of its cherished programs. To Mencken, the battle against Prohibition was merely a fight against the most conspicuous of the tyrannical and statist "reforms" being proposed against the American public. And so, Mencken's highly influential monthly The American Mercury, founded in 1924, opened its pages to writers of all parts of the Opposition, especially to attacks on American culture and mores, to assaults on censorship and the championing of civil liberties, and to revisionism on the war. Thus, the Mercury featured two prominent revisionists of World War I: Harry Elmer Barnes and Barnes's student, C. Hartley Grattan, whose delightful series in the magazine, "When Historians Cut Loose," acidly demolished the war propaganda of America's leading historians. Mencken's cultural scorn for the American "booboisie" was embodied in his famous "Americana" column, which simply reprinted news items on the idiocies of American life without editorial comment. The enormous scope of Mencken's interests, coupled with his scintillating wit and style (Mencken was labeled by Joseph Wood Krutch as "the greatest prose stylist of the twentieth century"), served to obscure for his generation of youthful followers and admirers the remarkable consistency of his thought. When, decades after his former prominence, Mencken collected the best of his old writings in A Mencken Chrestomathy (1949), the book was reviewed in the New Leader by the eminent literary critic Samuel Putnam. Putnam reacted in considerable surprise; remembering Mencken from his youth as merely a glib cynic, Putnam found to his admiring astonishment that H.L.M. had always been a "Tory anarchist," an apt summation for the intellectual leader of the 1920s. But H.L. Mencken was not the only editor leading the new upsurge of individualistic opposition during the 1920s. From a similar though more moderate stance, the Nation of Mencken's friend Oswald Garrison Villard continued to serve as an outstanding voice for peace, revisionism on World War I, and opposition to the imperialist status quo imposed at Versailles. Villard, at the end of the war, acknowledged that the war had pushed him far to the left, not in the sense of adopting socialism, but in being thoroughly "against the present political order."
Denounced by conservatives as pacifist, pro-German, and "Bolshevist," Villard found himself forced into a political and journalistic alliance with socialists and progressives who shared his hostility to the existing American and world order.1 From a still more radical and individualist perspective, Mencken's friend and fellow "Tory anarchist" Albert Jay Nock cofounded and coedited, along with Francis Neilson, the new weekly Freeman from 1920 to 1924. The Freeman, too, opened its pages to all left-oppositionists to the political order. With the laissez-faire individualist Nock as principal editor, the Freeman was a center of radical thought and expression among oppositionist intellectuals. Rebuffing the Nation's welcome to the new Freeman as a fellow liberal weekly, Nock declared that he was not a liberal but a radical. "We can not help remembering," wrote Nock bitterly, "that this was a liberal's war, a liberal's peace, and that the present state of things is the consummation of a fairly long, fairly extensive, and extremely costly experiment with liberalism in political power."2 To Nock, radicalism meant that the State was to be considered as an antisocial institution rather than as the typically liberal instrument of social reform. And Nock, like Mencken, gladly opened the pages of his journal to all manner of radical, anti-Establishment opinion, including Van Wyck Brooks, Bertrand Russell, Louis Untermeyer, Lewis Mumford, John Dos Passos, William C. Bullitt, and Charles A. Beard. In particular, while an individualist and libertarian, Nock welcomed the Soviet revolution as a successful overthrow of a frozen and reactionary State apparatus. Above all, Nock, in opposing the postwar settlement, denounced the American and Allied intervention in the [Russian] Civil War. Nock and Neilson saw clearly that the American intervention was setting the stage for a continuing and permanent imposition of American might throughout the world. After the folding of the Freeman in 1924, Nock continued to be prominent as a distinguished essayist in the leading magazines, including his famous "Anarchist's Progress."3 Most of this loose coalition of individualistic radicals was totally disillusioned with the political process, but to the extent that they distinguished between existing parties, the Republican Party was clearly the major enemy. Eternal Hamiltonian champions of Big Government and intimate government "partnership" with Big Business through tariffs, subsidies, and contracts, long-time brandishers of the Imperial big stick, the Republicans had capped their antilibertarian sins by being the party most dedicated to the tyranny of Prohibition, an evil that particularly enraged H.L. Mencken. Much of the opposition (e.g., Mencken, Villard) supported the short-lived LaFollette Progressive movement of 1924, and the Progressive Senator William E. Borah (R-Idaho) was an opposition hero in leading the fight against the war and the League of Nations, and in advocating recognition of Soviet Russia. But the nearest political home was the conservative Bourbon, non-Wilsonian or "Cleveland" wing of the Democratic Party, a wing that at least tended to be "wet," was opposed to war and foreign intervention, and favored free trade and strictly minimal government. 
Mencken, the most politically minded of the group, felt closest in politics to Governor Albert Ritchie, the states-rights Democrat from Maryland, and to Senator James Reed, Democrat of Missouri, a man staunchly "isolationist" and anti-intervention in foreign affairs and pro-laissez-faire at home. It was this conservative wing of the Democratic Party, headed by Charles Michelson, Jouett Shouse, and John J. Raskob, which launched a determined attack on Herbert Hoover in the late 1920s for his adherence to Prohibition and to Big Government generally. It was this wing that would later give rise to the much-maligned Liberty League. To Mencken and to Nock, in fact, Herbert Hoover, the pro-war Wilsonian and interventionist, the Food Czar of the war, the champion of Big Government, of high tariffs and business cartels, the pious moralist and apologist for Prohibition, embodied everything they abhorred in American political life. They were clearly leaders of the individualist opposition to Hoover's conservative statism. Since they were, in their very different styles, the leaders of libertarian thought in America during the 1920s, Mencken and Nock deserve a little closer scrutiny. The essence of Mencken's remarkably consistent "Tory anarchism" was embodied in the discussion of government that he was later to select for his Chrestomathy: All government, in its essence, is a conspiracy against the superior man: its one permanent object is to oppress him and cripple him. If it be aristocratic in organization, then it seeks to protect the man who is superior only in law against the man who is superior in fact; if it be democratic, then it seeks to protect the man who is inferior in every way against both. One of its primary functions is to regiment men by force, to make them as much alike as possible . . . to search out and combat originality among them. All it can see in an original idea is potential change, and hence an invasion of its prerogatives. The most dangerous man, to any government, is the man who is able to think things out for himself, without regard to the prevailing superstitions and taboos. Almost inevitably he comes to the conclusion that the government he lives under is dishonest, insane and intolerable, and so, if he is romantic, he tries to change it. And even if he is not romantic personally [as Mencken clearly was not] he is very apt to spread discontent among those who are. . . . The ideal government of all reflective men, from Aristotle onward, is one which lets the individual alone, one which barely escapes being no government at all. This ideal, I believe, will be realized in the world twenty or thirty centuries after I have . . . taken up my public duties in Hell.4 Again, Mencken on the State as inherent exploitation: The average man, whatever his errors otherwise, at least sees clearly that government is something lying outside him and outside the generality of his fellow men, that it is a separate, independent and often hostile power, only partly under his control and capable of doing him great harm. . . . Is it a fact of no significance that robbing the government is everywhere regarded as a crime of less magnitude than robbing an individual, or even a corporation? . . . What lies behind all this, I believe, is a deep sense of the fundamental antagonism between the government and the people it governs.
It is apprehended, not as a committee of citizens chosen to carry on the communal business of the whole population, but as a separate and autonomous corporation, mainly devoted to exploiting the population for the benefit of its own members. Robbing it is thus an act almost devoid of infamy. . . . When a private citizen is robbed, a worthy man is deprived of the fruits of his industry and thrift; when the government is robbed, the worst that happens is that certain rogues and loafers have less money to play with than they had before. The notion that they have earned that money is never entertained; to most sensible men it would seem ludicrous. They are simply rascals who, by accidents of law, have a somewhat dubious right to a share in the earnings of their fellow men. When that share is diminished by private enterprise the business is, on the whole, far more laudable than not. The intelligent man, when he pays taxes, certainly does not believe that he is making a prudent and productive investment of his money; on the contrary, he feels that he is being mulcted in an excessive amount for services that, in the main, are downright inimical to him. . . . He sees in even the most essential of them an agency for making it easier for the exploiters constituting the government to rob him. In these exploiters themselves he has no confidence whatever. He sees them as purely predatory and useless. . . . They constitute a power that stands over him constantly, ever alert for new chances to squeeze him. If they could do so safely, they would strip him to his hide. If they leave him anything at all, it is simply prudentially, as a farmer leaves a hen some of her eggs. This gang is well-nigh immune to punishment. . . . Since the first days of the Republic, less than a dozen of its members have been impeached, and only a few obscure under-strappers have been put into prison. The number of men sitting at Atlanta and Leavenworth for revolting against the extortions of government is always ten times as great as the number of government officials condemned for oppressing the taxpayers to their own gain. Government, today, has grown too strong to be safe. There are no longer any citizens in the world; there are only subjects. They work day in and day out for their masters; they are bound to die for their masters at call. . . . On some bright tomorrow, a geological epoch or two hence, they will come to the end of their endurance.5 In letters to his friends, Mencken reiterated his emphasis on individual liberty. At one time he wrote that he believed in absolute human liberty "up to the limit of the unbearable, and even beyond." To his old friend Hamilton Owens he declared: I believe in only one thing and that thing is human liberty. If ever a man is to achieve anything like dignity, it can happen only if superior men are given absolute freedom to think what they want to think and say what they want to say . . . [and] the superior man can be sure of freedom only if it is given to all men.6 And in a privately written "Addendum on Aims," Mencken declared: I am an extreme libertarian, and believe in absolute free speech. . . . I am against jailing men for their opinions, or, for that matter, for anything else.7 Part of Mencken's antipathy to reform stemmed from his oft-reiterated belief that "all government is evil, and that trying to improve it is largely a waste of time."
Mencken stressed this theme in the noble and moving peroration to his Credo, written for a "What I Believe" series in a leading magazine: I believe that all government is evil, in that all government must necessarily make war upon liberty, and that the democratic form is as bad as any of the other forms. . . . I believe in complete freedom of thought and speech alike for the humblest man and the mightiest, and in the utmost freedom of conduct that is consistent with living in organized society. I believe in the capacity of man to conquer his world, and to find out what it is made of, and how it is run. I believe in the reality of progress. . . . But the whole thing, after all, may be put very simply. I believe that it is better to tell the truth than to lie. I believe that it is better to be free than to be a slave. And I believe that it is better to know than to be ignorant.8 Insofar as he was interested in economic matters, Mencken, as a corollary to his libertarian views, was a staunch believer in capitalism. He praised Sir Ernest Benn's paean to a free-market economy, and declared that to capitalism "we owe . . . almost everything that passes under the general name of civilization today." He agreed with Benn that "nothing government does is ever done as cheaply and efficiently as the same thing might be done by private enterprise."9 But, in keeping with his individualism and libertarianism, Mencken's devotion to capitalism was to the free market, and not to the monopoly statism that he saw ruling America in the 1920s. Hence he was as willing as any socialist to point the finger at the responsibility of Big Business for the growth of statism. Thus, in analyzing the 1924 presidential election, Mencken wrote: Big Business, it appears, is in favor of him [Coolidge]. . . . The fact should be sufficient to make the judicious regard him somewhat suspiciously. For Big Business, in America . . . is frankly on the make, day in and day out. . . . Big Business was in favor of Prohibition, believing that a sober workman would make a better slave than one with a few drinks in him. It was in favor of all the gross robberies and extortions that went on during the war, and profited by all of them. It was in favor of all the crude throttling of free speech that was then undertaken in the name of patriotism, and is still in favor of it.10 As for John W. Davis, the Democratic candidate, Mencken noted that he was said to be a good lawyer, not, for Mencken, a favorable recommendation, since lawyers are responsible for nine-tenths of the useless and vicious laws that now clutter the statute-books, and for all the evils that go with the vain attempt to enforce them. Every Federal judge is a lawyer. So are most Congressmen. Every invasion of the plain rights of the citizen has a lawyer behind it. If all lawyers were hanged tomorrow . . . we'd all be freer and safer, and our taxes would be reduced by almost a half. And what is more, Dr. Davis is a lawyer whose life has been devoted to protecting the great enterprises of Big Business. He used to work for J. Pierpont Morgan, and he has himself said that he is proud of the fact. Mr. Morgan is an international banker, engaged in squeezing nations that are hard up and in trouble. His operations are safeguarded for him by the manpower of the United States. He was one of the principal beneficiaries of the late war, and made millions out of it.
The Government hospitals are now full of one-legged soldiers who gallantly protected his investments then, and the public schools are full of boys who will protect his investments tomorrow.11 In fact, the following brief analysis of the postwar settlement combines Mencken's assessment of the determining influence of Big Business with the bitterness of all the individualists at the war and its aftermath: When he was in the Senate Dr. Harding was known as a Standard Oil Senator, and Standard Oil, as everyone knows, was strongly against our going into the League of Nations, chiefly because England would run the league and be in a position to keep Americans out of the new oil fields in the Near East. The Morgans and their pawnbroker allies, of course, were equally strong for going in, since getting Uncle Sam under the English hoof would materially protect their English and other foreign investments. Thus the issue joined, and on the Tuesday following the first Monday of November 1920, the Morgans, after six years of superb Geschäft under the Anglomaniacal Woodrow, got a bad beating.12 But as a result, Mencken went on, the Morgans decided to come to terms with the foe, and therefore, at the Lausanne Conference of 1922–23, "the English agreed to let the Standard Oil crowd in on the oil fields of the Levant," and J.P. Morgan visited Harding at the White House, after which "Dr. Harding began to hear a voice from the burning bush counseling him to disregard the prejudice of the voters who elected him and to edge the U.S. into a Grand International Court of Justice."13 While scarcely as well known as Mencken, Albert Nock more than any other person supplied twentieth-century libertarianism with a positive, systematic theory. In a series of essays in the 1923 Freeman on "The State," Nock built upon Herbert Spencer and the great German sociologist and follower of Henry George, Franz Oppenheimer, whose brilliant little classic, The State,14 had just been reprinted. Oppenheimer had pointed out that man tries to acquire wealth in the easiest possible way, and that there were two mutually exclusive paths to obtaining wealth. One was the peaceful path of producing something and voluntarily exchanging that product for the product of someone else; this path of production and voluntary exchange Oppenheimer called the "economic means." The other road to wealth was coercive expropriation: the seizure of the product of another by the use of violence. This Oppenheimer termed the "political means." And from his historical inquiry into the genesis of States, Oppenheimer defined the State as the "organization of the political means." Hence, Nock concluded, the State itself was evil, and was always the highroad by which varying groups could seize State power and use it to become an exploiting, or ruling, class at the expense of the remainder of the ruled or subject population. Nock therefore defined the State as that institution which "claims and exercises the monopoly of crime" over a territorial area; "it forbids private murder, but itself organizes murder on a colossal scale. It punishes private theft, but itself lays unscrupulous hands on anything it wants."15 In his magnum opus, Our Enemy, the State, Nock expanded on his theory and applied it to American history, in particular the formation of the American Constitution.
In contrast to the traditional conservative worshippers of the Constitution, Nock applied Charles A. Beard's thesis to the history of America, seeing it as a succession of class rule by various groups of privileged businessmen, and the Constitution as a strong national government brought into being in order to create and extend such privilege. The Constitution, wrote Nock, enabled an ever-closer centralization of control over the political means. For instance . . . many an industrialist could see the great primary advantage of being able to extend his exploiting opportunities over a nationwide free-trade area walled in by a general tariff. . . . Any speculator in depreciated public securities would be strongly for a system that could offer him the use of the political means to bring back their face value. Any shipowner or foreign trader would be quick to see that his bread was buttered on the side of a national State which, if properly approached, might lend him the use of the political means by way of a subsidy, or would be able to back up some profitable but dubious freebooting enterprise with "diplomatic representations" or with reprisals. Nock concluded that those economic interests, in opposition to the mass of the nation's farmers, "planned and executed a coup d'état, simply tossing the Articles of Confederation into the wastebasket."16 While the Nock-Oppenheimer class analysis superficially resembles that of Marx, and a Nockian would, like Lenin, look at all State action whatever in terms of "Who? Whom?" (Who is benefiting at the expense of Whom?), it is important to recognize the crucial differences. For while Nock and Marx would agree on the Oriental Despotic and feudal periods' ruling classes in privilege over the ruled, they would differ on the analysis of businessmen on the free market. For to Nock, antagonistic classes, the rulers and the ruled, can only be created by accession to State privilege; it is the use of the State instrument that brings these antagonistic classes into being. While Marx would agree on pre-capitalistic eras, he of course also concluded that businessmen and workers were in class antagonism to each other even in a free-market economy, with employers exploiting workers. To the Nockian, businessmen and workers are in harmony, as is everyone else in the free market and free society, and it is only through State intervention that antagonistic classes are created.17 Thus, to Nock the two basic classes at any time are those running the State and those being run by it: as the Populist leader Sockless Jerry Simpson once put it, "the robbers and the robbed." Nock therefore coined the concepts "State power" and "social power." "Social power" was the power over nature exerted by free men in voluntary economic and social relationships; social power was the progress of civilization, its learning, its technology, its structure of capital investment. "State power" was the coercive and parasitic expropriation of social power for the benefit of the rulers: the use of the "political means" to wealth. The history of man, then, could be seen as an eternal race between social power and State power, with society creating and developing new wealth, later to be seized, controlled, and exploited by the State. No more than Mencken was Nock happy about the role of big business in the twentieth century's onrush toward statism. We have already seen his caustic Beardian view toward the adoption of the Constitution.
When the New Deal arrived, Nock could only snort in disdain at the mock wails about collectivism raised in various business circles: It is one of the few amusing things in our rather stodgy world that those who today are behaving most tremendously about collectivism and the Red menace are the very ones who have cajoled, bribed, flattered and bedeviled the State into taking each and every one of the successive steps that lead straight to collectivism. . . . Who hectored the State into the shipping business, and plumped for setting up the Shipping Board? Who pestered the State into setting up the Interstate Commerce Commission and the Federal Farm Board? Who got the State to go into the transportation business on our inland waterways? Who is always urging the State to "regulate" and "supervise" this, that, and the other routine process of financial, industrial, and commercial enterprise? Who took off his coat, rolled up his sleeves, and sweat blood hour after hour over helping the State construct the codes of the late-lamented National Recovery Act? None but the same Peter Schlemihl who is now half out of his mind about the approaching spectre of collectivism.18 Or, as Nock summed it up, The simple truth is that our businessmen do not want a government that will let business alone. They want a government they can use. Offer them one made on Spencer's model, and they would see the country blow up before they would accept it.19 - Villard to Hutchins Hapgood, May 19, 1919. Michael Wreszin, Oswald Garrison Villard (Bloomington: Indiana University Press, 1965), pp. 75 and 125–30. - Albert Jay Nock, "Our Duty Towards Europe," The Freeman 7 (August 8, 1923): 508; quoted in Robert M. Crunden, The Mind and Art of Albert Jay Nock (Chicago: Henry Regnery, 1964), p. 77. - Albert Jay Nock, On Doing the Right Thing, and Other Essays (New York: Harper and Row, 1928). - From the Smart Set, December 1919. H.L. Mencken, A Mencken Chrestomathy (New York: Knopf, 1949), pp. 145–46. See also Murray N. Rothbard, "H.L. Mencken: The Joyous Libertarian," New Individualist Review 2, no. 2 (Summer, 1962): 15–27. - From the American Mercury, February 1925. Mencken, Chrestomathy, pp. 146–48. - Guy Forgue, ed., Letters of H.L. Mencken (New York: Knopf, 1961), pp. xiii, 189. - H.L. Mencken, "What I Believe," The Forum 84 (September 1930): 139. - H.L. Mencken, "Babbitt as Philosopher" (review of Henry Ford, Today and Tomorrow, and Ernest J.P. Benn, The Confessions of a Capitalist), The American Mercury 9 (September 1926): 126–27. Also see Mencken, "Capitalism," Baltimore Evening Sun, January 14, 1935, reprinted in Chrestomathy, p. 294. - H.L. Mencken, "Breathing Space," Baltimore Evening Sun, August 4, 1924; reprinted in H.L. Mencken, A Carnival of Buncombe (Baltimore: Johns Hopkins Press, 1956), pp. 83–84. - H.L. Mencken, "Next Year's Struggle," Baltimore Evening Sun, June 11, 1923; reprinted in Mencken, A Carnival of Buncombe, pp. 56–57. - Albert Jay Nock, Our Enemy, the State (1922; New York: William Morrow, 1935), pp. 162ff. - This idea of classes as being created by States was the pre-Marxian idea of classes; two of its earliest theorists were the French individualist and libertarian thinkers of the post-Napoleonic Restoration period, Charles Comte and Charles Dunoyer. 
For several years after the Restoration, Comte and Dunoyer were the mentors of Count Saint-Simon, who adopted their class analysis; the later Saint-Simonians then modified it to include businessmen as being class-exploiters of workers, and the latter was adopted by Marx. I am indebted to Professor Leonard Liggio's researches on Comte and Dunoyer. As far as I know, the only discussion of them in English, and that inadequate, is Elie Halevy, The Era of Tyrannies (Garden City, N.Y.: Doubleday and Co., 1965), pp. 21–60. Gabriel Kolko's critique of Marx's theory of the State is done from a quite similar perspective. Gabriel Kolko, The Triumph of Conservatism (Glencoe, Ill.: The Free Press, 1963), pp. 287ff. - Albert Jay Nock, "Imposter-Terms," Atlantic Monthly (February 1936): 161–69. - Nock to Ellen Winsor, August 22, 1938. F.W. Garrison, ed., Letters from Albert Jay Nock (Caldwell, Id.: Caxton Printers, 1949), p. 105.
In this paper, principal attention is paid to the various aspects of ethnic identity reflected in the world view and in the literary and journalistic prose of Chingiz Aytmatov. The relevance of the study is determined by the fact that identity issues have become a global problem of our day, puzzling not just individual persons but whole social groups, communities and states. In modern scholarship (psychology, ethnic education studies, cultural studies, philosophy, history, ethnography, sociology, local history), the issues of ethnic identity, inter-cultural engagement, the mutual enrichment of cultures and tolerance in communication receive much attention and are, quite reasonably, widely studied. During recent decades, more and more specialists in literature studies, folklore studies and linguistics have turned their attention to the question of ethnic identity. Although many Russian and foreign researchers have undertaken deep and multi-aspect studies of Chingiz Aytmatov's oeuvre, the reflection of ethnic identity in his literary and journalistic works remains understudied. This constitutes the novelty of the present work. The study is based on several research methods: the systemic (complex) method considers the object as a system, a holistic set of interrelated elements; the analytic method is used to analyze literary and journalistic genres in the writer's works; the comparative method is used to compare common literary phenomena with the creative output of some other contemporary authors. Thus, the purpose of this paper is a detailed and complex study of the aspects of ethnic identity reflected in the works of Chingiz Aytmatov, where this aspect finds vivid actualization. In the modern world, cultural identities (ethnic, national, religious, civilizational) take a central position, while "unions, antagonisms and state policy are formed with considerations for cultural proximity and cultural distinctiveness," as Huntington states in his most famous book (Huntington, 2004). Scholars analyze ethnic identity as a component of personal social identity, the understanding of one's belonging to a certain ethnic community. This community, in its own turn, is defined by the parents' belonging, place of birth, language and culture (Smith, 1986). For most people, the social identity that allows them to obtain a feeling of belonging is related to their place of birth (Giddens, 1991). The latter (on condition of continuing habitation) has always served as a determinative factor in the formation of the cultural, historical, social and territorial communality of people and has participated in the definition of their subethnic identity. In addition, before today's global migration processes, one's place of birth was strongly tied to the concept of Motherland. Researchers are especially interested in constructing such a model on the creative output of ethnic authors, where the specifics of artistic perception have been largely determined by the self-identification process (Calhoun, 1997/1998). Chingiz Aytmatov was a representative of a monoethnic identity: he was born in a family where the parents were of different ethnicities but of the same racial and linguistic group. That is, by place of birth, language, upbringing and culture, the writer undoubtedly identified with his people, and his us-identity was sharply reflected in both his literary and journalistic output. However, these aspects of his literary and journalistic activity have stayed beyond the interest of researchers.
Aytmatov's us-identity is especially vivid in one of his speeches: "We live in the mountains and between the mountains in valleys... This is our fatherland, Ala-Too... It means motherland in Kyrgyz; it means fate: the fate of my Motherland! ... Who is he whose soul does not fill with filial affection and gratitude to the native land that gave birth to his nation?" Without a doubt, the native language takes a prominent place in the formation of ethnic identity. In his many public and printed addresses, Chingiz Aytmatov repeatedly emphasized the importance of the native language in the formation of personality. Speaking from personal experience, he said: "Only native speech, acquired and mastered during childhood, may ... awaken the first origins of national pride in a person... Childhood is the period when true mastery of one's native speech takes shape and when a feeling of belonging to a certain culture arises." Developing his thoughts on the role of native speech, he proposes an idea that is interesting from the standpoint of the psychology of textual creation: "When I am creating my works in Kyrgyz, I again feel uniqueness in my self-expression." Raising concerns over the fate of small ethnicities, the great humanitarian wrote: "Being the most essential element of national culture, language is at the same time a means of its development" (Aytmatov, 1988, p. 220). Besides the topic of the native language, Aytmatov's world view shows clear evidence of the problem of human identity, that of belonging to a family, kin, tribe and people. In psychology, the need to belong to one's people is among the basic human needs, akin to the need for love and friendship; in Maslow's five-level hierarchy of needs, it occupies the third level, after physiological needs and safety. This need, in its own turn, may be satisfied when there are conditions for the formation of ethnic self-identity, which is usually impossible without knowledge of the language, culture and history of one's people. The importance of knowing the answers to questions such as "Who am I?" and "Where do I come from?" was recognized by many after reading Aytmatov's novel "The Day Lasts More Than a Hundred Years". The work became a note of warning to those who have stopped feeling that they belong to their people in our new global world. In one of his articles, Aytmatov wrote with pride and admiration of his people and their epic tradition: "The long existence of a nomadic people... together with high poetic talent, led to the rise and flowering of the narrative epic genre... If someone asked me what great persons I know from my people, I would name Sayakbay Karalaev first." Here we see the importance of ethnic belonging for the writer: "Being proud of the history and creativity of one's people is intrinsic to everyone." Since a person comes to understand their ethnic identity most clearly in the process of inter-ethnic communication in a multicultural environment, Aytmatov expresses a natural perception of one's ethnicity in the following phrases: "Connection to one's soil and people... feeds the culture with living, fruitful juices, helps it come out into the global human expanse, for there is much in common in the life of various peoples and their world views."
The Bashkir writer Amir Aminev rightfully notes the role and importance of Chingiz Aytmatov's works in world literature: "The main feat accomplished by the great Chingiz Aytmatov was using the antiquity, world view, history and culture of a people, and the wisdom collected through centuries, to address the future in order to anticipate the issues that await our global civilization. Unfortunately, we are still unable to appraise in full both the creations of the writer and the depth of the issues he has been touching upon." The literary character, the protagonist, the human person is seen by Chingiz Aytmatov from the point of view of belonging to a certain group, be it social, ethnic or cultural. Chingiz Aytmatov created a series of vivid female characters reflecting the mentality of eastern women. Each character has its own individuality and emotional content, while simultaneously being connected with a common philosophy, links and concepts, and bearing a significant ideological and aesthetic load. At the very beginning of the novella "Jamila", the character of the Elder Mother is introduced. This image embodies a philosophy of wisdom. The Elder Mother is a symbol of the patience and commitment of the eastern woman, who is devoted to traditional family values and to protecting the family. Throughout the text, Aytmatov provides characteristics of the Elder Mother through the eyes of Seit, an adolescent boy: "The agreement and prosperity of our house and big family fully depend on my mother. She is the absolute mistress of both courtyards, a guardian of the hearth. She was very young when she came into the family of our nomad grandfathers, and after that she always venerated their memory, managing the families following the laws of justice. In our ail, she was considered the most honorable, conscientious and experienced housewife. Mother was in charge of everything in the house. Truth be told, the inhabitants of the ail did not hold father for the head of the family. On more than one occasion one could hear people saying something like: 'Oh, don't go to ... (a respectful name for artisans in our part of the world); he knows nothing but his axe. Elder Mother is the head there; go to her, it would be better.'" The image of the Elder Mother is that of the guardian of the hearth. In her village, as in her family, she was admired and respected as a wise and conscientious person. According to the Elder Mother, a woman's goals should be faithfulness to God and husband, having plenty to live upon, and giving birth to children. Speaking to Jamila, she said: "Praise Allah, my daughter, for you came into a strong and blessed house. This is your happiness. Woman's happiness is in giving birth to children, so that there is plenty to live upon in the house. Happiness lives only with those who keep their honor and conscience. Remember this, and keep your honor." The words of the Elder Mother reflect solid eastern family values. She educates her young daughter-in-law Jamila in the traditions whereby a woman in an eastern, Turkic family is held to be the guardian of the hearth and is responsible for strong familial ties. Even when the head of the family is away from home, his wife stays behind as house manager. This situation is typical of many Turkic peoples, and here Chingiz Aytmatov gives the Elder Mother a central position in the text. The character of the Younger Mother is painted by Aytmatov with particular warmth and love: "My Younger Mother, a kind, easy-going, humble woman, did not lag behind young people in work, be it digging irrigation ditches or watering; in other words, she held her grub hoe tight."
The Younger Mother in Aytmatov's work reflects the type of eastern woman with a humble character. She was widowed and sent her sons off to war, yet she never laments her fate and continues to keep her household without complaint. When describing Jamila, a representative of the younger generation, the author clearly sets out the typical characteristics of an eastern woman: "Jamila was pretty. Slim, handsome, with coarse straight hair braided into two tight plaits, she was artful in tying her white head scarf in such a way that it crossed her forehead a bit slantwise, setting off her smooth swarthy skin. When Jamila laughed, her bluish-black almond eyes filled with young ardor, and when she started singing salty ail songs, her eyes showed a girlish shine." Thus, the Elder and Younger Mothers and Jamila in this novella are vivid national characters reflecting the author's concept of the eastern woman as guardian of the hearth. While supporting the development of ethnic culture and the native language, the great humanitarian writer remained an internationalist: "It is very realistic to preserve the existing languages of small peoples... both by internal linguistic self-development and by direct and indirect enrichment from the cultures of the more advanced languages of the world. Integration of national cultures does not lead to depersonalization and loss of uniqueness, but to their enrichment, development and growth, the actualization of the potential that is present in each and every people and is drawn from the best national traditions." It should be noted that Chingiz Aytmatov exerted a great influence over the Bashkir literary and cultural environment through theater (Alibaev et al., 2016). Professor Baimov, a prominent specialist in Bashkir literature and winner of the Salavat Yulayev Prize, who undertook a comparative typological analysis of the works of Karim (1986) and Aytmatov (1988) and showed similar motifs in their work, notes: "Specific nature and conventionality are two mandatory components of arts. The peculiar use of symbolism in Aytmatov's Realist prose, especially in his novellas Mother's Field and The White Ship, found admirers in his home country and abroad" (p. 117). Analysis of the parallelism in the works of these two great authors of the 20th century and personal friends, Chingiz Aytmatov and Mustai Karim, shows some general similarities. The distinctive and original oeuvres of Aytmatov and Karim were always close, sharing common motifs and problems that concerned the two great humanists (Gareeva & Mustafina, 2019). Thus, the Elder Mother Olo Iney, the Wise Woman (Kendek Iney), in Karim's work and Tolganay in Aytmatov's are both typical ethnic characters, close in the way they see their respective world systems. Both are pillars, trees of life, supporters of children, young mothers, those in love, the aggrieved, the dispossessed, the unfortunate. The characters of the Younger Mothers in Aytmatov's and Karim's works express the same general concept of a humble eastern woman. For example, the protagonist's Younger Mother, living surrounded by prosperity and love, always respected the Elder Mother; as if she held herself guilty before her and felt the latter's internal pain, she always deferred to her as if afraid of injuring her soul. Just like the Younger Mother in Aytmatov's work, she is a submissive person of humble nature, as is proper among Turkic peoples, showing respect to elders and obliged to live in agreement with her husband's first wife.
Purpose of the Study

The purpose of this article is:
- to define the ethnic world view and the specifics of its reflection in Aytmatov's views and his literary and journalistic output;
- to identify the features of the ethnic identity model created by Aytmatov by studying the concept of its reflection in a literary work;
- to conduct a comparative analysis of the ethnic identity model in the works of Aytmatov and Karim in order to reveal common and distinctive traits in the reflection of cultural and ethnic identity;
- to identify the elements of poetics and aesthetics typical of the given author, which may be seen as means of reflecting the ethnic identity model in a literary work.

The methods of this research are represented by a set of scholarly ideas that allow a conceptual synthesis of theoretical propositions and bring to light the problem of the formation of the ethnic identity model in the works of Aytmatov; among them are:
- the comparative method, employed to identify common and distinctive features in the reflection of cultural and ethnic identity in the works of two writers belonging to two different literary traditions;
- philosophical, ethnic psychological and ethnic sociological concepts of identity;
- philosophical and psychological concepts of humanism;
- the idea of multiculturalism, as well as the related discourse of cultural diversity;
- the idea of internationalism.

The research results are as follows. From studying the reflections of ethnic identity in the views and prose of Chingiz Aytmatov, the authors obtained the following results:
- In his many public speeches and printed addresses, Chingiz Aytmatov repeatedly emphasized the importance of the native language in the formation of personality.
- Besides the topic of the native language, Aytmatov's world view shows clear evidence of the problem of human identity, that of belonging to a family, kin, tribe and people. The literary character, the protagonist, the human person is seen by Chingiz Aytmatov from the point of view of belonging to a certain group, be it social, ethnic or cultural. While supporting the development of ethnic culture and the native language, the great humanitarian writer remained an internationalist.

Identity issues have become a global problem of today, puzzling not only individual people but social groups, communities and states. Researchers are especially interested in constructing such a model in the creative output of ethnic authors, where the specifics of artistic perception are largely determined by the self-identification process (Khazretali et al., 2018). Chingiz Aytmatov was a representative of a monoethnic identity: he was born in a family where the parents were of different ethnicities but of the same racial and linguistic group. That is, by place of birth, language, upbringing and culture, the writer undoubtedly identified with his people, and his us-identity was sharply reflected in both his literary and journalistic output. Examples from his articles and addresses written in the last century reflect the us-identity of the writer himself and his understanding of ethnic identity as a whole. Without a doubt, the native language takes a prominent place in the formation of ethnic identity (Mazhitayeva et al., 2016). In his many public speeches and printed addresses, Chingiz Aytmatov repeatedly emphasized the importance of the native language in the formation of personality. Aytmatov's us-identity is vivid in one of his addresses, where the writer demonstrates his filial love and gratitude to the native soil that gave rise to his people.
The writer's view of ethnic identity, embodied in his literary creations, became a note of warning to those who have stopped feeling that they belong to their people in our new global world. In his journalistic works, Aytmatov notes that a person clearly perceives their ethnic belonging in a multicultural space and in inter-ethnic communication, while at the same time the same multicultural environment helps them enrich their own culture and get to know those of others. While advocating for ethnic culture and his native language and admiring the nature of his motherland, the great humanist writer remained an internationalist.

Alibaev, Z., Galina, G., Gareeva, G., & Nabiullina, G. (2016). Genre-Stylistic Peculiarities of Bashkir Prose. The Social Sciences, 11, 6267.
Aytmatov, C. G. (1988). Articles, Addresses, Dialogs, Interviews. Publishing House of Novosty Press Agency.
Calhoun, C. (1997/1998). Nationalism and the Contradictions of Modernity. Berkeley Journal of Sociology, 42(1), 1.
Gareeva, G. N., & Mustafina, R. D. (2019). Artistic Peculiarities of Mustai Karim's Works. The European Proceedings of Social & Behavioural Sciences EPSBS Conference: SCTCGM 2018 – Social and Cultural Transformations in the Context of Modern Globalism, Vol. LVIII (pp. 2006–2014).
Giddens, A. (1991). Modernity and Self-Identity: Self and Society in the Late Modern Age. Stanford, Calif.: Stanford University Press.
Huntington, S. (2004). Who Are We? The Challenges to America's National Identity. AST Transit book.
Karim, M. (1986). Long, Long Childhood. Mid-Urals Book Publishing.
Khazretali, T., Amantai, Y., Girithlioglu, M., Orazkhan, N., & Berkimbaev, K. (2018). Kazakh-Turkish Cultural Relationship of the 20th Century: Through a Scientific Biography and the Works of Shakarim Kudaiberdyuly. Astra Salvensis, VI(11), 210.
Mazhitayeva, S., Kadina, Z., Aitbaeva, B., Zhunusova, M., & Sateeva, B. (2016). Appearance of Semiotics in Kazakh Mentality. Man in India, 96, 1011–1020.
Smith, A. D. (1986). The Ethnic Origins of Nations. Oxford: Blackwell.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
About this article: published 29 November 2021. Keywords: cultural development, technological development, socio-political transformations, globalization.
Cite this article as: Alieva, S. A., Mustafina, R. D., Gareeva, G. N., Galliamov, A. A., & Akhmadrakhimova, O. V. (2021). Issues of Ethnic Identity in Worldview and Creative Output of Chingiz Aytmatov. In D. K. Bataev, S. A. Gapurov, A. D. Osmaev, V. K. Akaev, L. M. Idigova, M. R. Ovhadov, A. R. Salgiriev, & M. M. Betilmerzaeva (Eds.), Social and Cultural Transformations in the Context of Modern Globalism, vol 117. European Proceedings of Social and Behavioural Sciences (pp. 1117–1123). European Publisher. https://doi.org/10.15405/epsbs.2021.11.149
Words beginning with "A"

Absolute Humidity
In a system of moist air, the ratio of the mass of water vapor present to the volume occupied by the mixture; that is, the density of the water vapor component. Absolute humidity is normally expressed in grams of water vapor per cubic meter of air (e.g., 25 g/m3), and is computed as shown in the short sketch below.
absolute humidity = mass of water vapor / volume of air

Absorption
The process in which radiant energy is retained by a substance. A further process always results from absorption: the irreversible conversion of the absorbed radiation into some other form of energy within, and according to the nature of, the absorbing medium. The absorbing medium itself may emit radiation, but only after an energy conversion has occurred.

Acid Rain
Acids form when certain atmospheric gases (primarily carbon dioxide, sulfur dioxide, and nitrogen oxides) come in contact with water in the atmosphere or on the ground and are chemically converted to acidic substances. Oxidants play a major role in several of these acid-forming processes. Carbon dioxide dissolved in rain is converted to a weak acid (carbonic acid). Other gases, primarily oxides of sulfur and nitrogen, are converted to strong acids (sulfuric and nitric acids). Although rain is naturally slightly acidic because of carbon dioxide, natural emissions of sulfur and nitrogen oxides, and certain organic acids, human activities can make it much more acidic. Occasional pH readings of well below 2.4 (the acidity of vinegar) have been reported in industrialized areas. The principal natural phenomena that contribute acid-producing gases to the atmosphere are emissions from volcanoes and from biological processes that occur on the land, in wetlands, and in the oceans. The effects of acidic deposits have been detected in glacial ice thousands of years old in remote parts of the globe. Principal human sources are industrial and power-generating plants and transportation vehicles. The gases may be carried hundreds of miles in the atmosphere before they are converted to acids and deposited. Since the industrial revolution, emissions of sulfur and nitrogen oxides to the atmosphere have increased. Industrial and energy-generating facilities that burn fossil fuels, primarily coal, are the principal sources of increased sulfur oxides. These sources, plus the transportation sector, are the major originators of increased nitrogen oxides. The problem of acid rain not only has increased with population and industrial growth, it has become more widespread. The use of tall smokestacks to reduce local pollution has contributed to the spread of acid rain by releasing gases into regional atmospheric circulation. The same remote glaciers that provide evidence of natural variability in acidic deposition show, in their more recently formed layers, the increased deposition caused by human activity during the past half century.

Acquisition of Signal (AOS)
The time you begin receiving a signal from a spacecraft. For polar-orbiting satellites, radio reception of the APT signal can begin only when the polar-orbiting satellite is above the horizon of a particular location. This is determined by both the satellite and its particular path during orbit across the reception range of a ground station.

Active System (Active Sensor)
A remote-sensing system that transmits its own radiation to detect an object or area for observation and receives the reflected or transmitted radiation. Radar is an example of an active system. Compare with passive system.
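The absolute humidity formula defined above is simple enough to express directly in code. The following is an illustrative sketch only; the function name and sample values are our own, not part of the glossary.

```python
# Illustrative only: absolute humidity as defined above, i.e. the mass
# of water vapor divided by the volume of the moist-air mixture.

def absolute_humidity(mass_vapor_g: float, volume_m3: float) -> float:
    """Return absolute humidity in grams of water vapor per cubic meter."""
    return mass_vapor_g / volume_m3

# Example: 50 g of water vapor in a 2 m^3 parcel of air -> 25 g/m^3,
# the figure quoted in the definition above.
print(absolute_humidity(50.0, 2.0))  # 25.0
```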
Analog to Digital (A/D)
Used to refer to the conversion of analog data to its digital equivalent.

Advanced Very High Resolution Radiometer (AVHRR)
A five-channel scanning instrument that quantitatively measures electromagnetic radiation, flown on NOAA environmental satellites. AVHRR remotely determines cloud cover and surface temperature. Visible and infrared detectors observe vegetation, clouds, lakes, shorelines, snow, and ice. TIROS Automatic Picture Transmissions (APT) are derived from this instrument. See Automatic Picture Transmission (APT).

Aerosol
Particles of liquid or solid dispersed as a suspension in gas.

Afforestation
The act or process of establishing a forest, especially on land not previously forested.

AIR
Airborne Imaging Radar.

Air Mass
Large body of air, often hundreds or thousands of miles across, containing air of a similar temperature and humidity. Sometimes the differences between air masses are hardly noticeable, but if colliding air masses have very different temperatures and humidity values, storms can erupt. See front.

Air Pollution
The existence in the air of substances in concentrations that are determined to be unacceptable to human health and the environment. Contaminants in the air we breathe come mainly from manufacturing industries, electric power plants, and exhaust from automobiles and buses. Primary air pollutants include:
- sulfur dioxide
- carbon monoxide
- nitrogen dioxide
- ground-level ozone
- carbon particles

Air Pressure
The weight of the atmosphere over a particular point, also called barometric pressure. Average air exerts a force of approximately 14.7 pounds on every square inch (101,325 newtons on every square meter) at sea level.

AKA
Also Known As.

Albedo
The ratio of the outgoing solar radiation reflected by an object to the incoming solar radiation incident upon it.

Algorithm
A mathematical relation between an observed quantity and a variable used in a step-by-step mathematical process to calculate a quantity. In the context of remote sensing, algorithms generally specify how to determine higher-level data products from lower-level source data. For example, algorithms prescribe how atmospheric temperature and moisture profiles are determined from a set of radiation observations originally sensed by satellite sounding instruments.

Alkali
Substance capable of neutralizing acid, with a pH greater than 7.0. See pH.

Altimeter
An active instrument (see "active system") used to measure the altitude of an object above a fixed level. For example, a laser altimeter can measure height from a spacecraft to an ice sheet. That measurement, coupled with radial orbit knowledge, will enable determination of the topography.

Altitude
Height above the Earth's surface.

Ampere
Standard unit to measure the strength of an electric current. One amp is the amount of current produced by an electromotive force of one volt acting through the resistance of one ohm. The ampere is one-tenth of the theoretical electromagnetic unit of current. Named for the French physicist Andre Marie Ampere. See ohm.

Amplitude
The magnitude of the displacement of a wave from a mean value. For a simple harmonic wave, it is the maximum displacement from the mean. For more complex wave motion, amplitude is usually taken as one-half of the mean distance (or difference) between maxima and minima.

Amplitude Modulation (AM)
One of three ways to modify a sine wave signal in order to make it "carry" information. The strength (amplitude) of a signal varies (modulates) to correspond to the transmitted information. As applied to APT, an audible tone of 2400 Hz is amplitude modulated, with the maximum signal corresponding to light areas of a photograph, the minimum levels to black, and the intermediate strengths to various shades of gray. See grayscale.
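Since the entry above describes APT's brightness-to-loudness mapping in words, here is a minimal sketch of the same idea in Python. It is illustrative only: the sample rate, samples-per-pixel value, and function name are our own choices, not part of the APT specification.

```python
import numpy as np

# Illustrative sketch of the amplitude modulation scheme described above:
# a 2400 Hz carrier whose strength follows pixel brightness.
SAMPLE_RATE = 11_025   # samples per second (an assumed value)
CARRIER_HZ = 2_400     # the audible subcarrier named in the entry

def modulate_line(pixels, samples_per_pixel=20):
    """Map 0-255 pixel brightness onto carrier amplitude (0 = black, 255 = white)."""
    envelope = np.repeat(np.asarray(pixels, dtype=float) / 255.0, samples_per_pixel)
    t = np.arange(envelope.size) / SAMPLE_RATE
    return envelope * np.sin(2 * np.pi * CARRIER_HZ * t)

signal = modulate_line([0, 64, 128, 192, 255])  # a black-to-white ramp
print(signal.shape)  # (100,) samples of an AM tone
```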
Analog
Transmission of a continuously variable signal as opposed to a discretely variable signal. Compare with digital. An analog system is a system of transmitting and receiving information in which one value (i.e., voltage, current, resistance, or, in the APT system, the volume level of the video tone) can be compared directly to the information (in the APT system, the white, black, and gray values) in the image.

Ancillary Data
Data other than instrument data required to perform an instrument's data processing. Ancillary data includes such information as orbit and/or attitude data, time information, spacecraft engineering data, and calibration information.

Anemometer
Instrument used to measure wind speed, usually measured either from the rotation of wind-driven cups or from wind pressure through a tube pointed into the wind.

Anomaly
- The deviation of (usually) temperature or precipitation in a given region over a specified period from the normal value for the period.
- The angular distance of an Earth satellite (or planet) from its perigee (or perihelion) as seen from the center of the Earth (sun). See Keplerian elements for examples of use.

Antenna
A wire or set of wires used to send and receive electromagnetic waves. Two primary features must be considered when selecting antennas: beamwidth, or the "width" of the antenna pattern (wide beamwidth suggests the ability to receive signals from a number of different directions), and gain, or the increase in signal level. Generally, beamwidth or gain can be increased only at the expense of the other. Gain can be increased by multiplying the number of antenna elements, although this adds "directionality" that reduces beamwidth.
Important antenna considerations:
- The physical size of antenna components is determined by the frequency of the transmissions it will receive - the higher the frequency, the shorter the elements. At high frequencies, use of a satellite dish will compensate for the reduced amount of energy intercepted by shortened components.
- The antenna design should fit the type of radio frequency (RF) signal polarization it will receive. The orientation of radio waves in space is a function of the orientation of the elements of the transmitting antenna. A circularly polarized wave rotates as it propagates through space. Antennas can be designed for either right- or left-handed circular polarization. Earth-based communication antennas are either vertical or horizontal in polarization, and not suited for space communication. Police and cellular phone transmissions use vertical polarization because a simple vertical whip antenna is the easiest sort of omnidirectional antenna to mount on a vehicle.
- The antenna needs to produce sufficient signal gain to allow noise-free reception.
- The antenna should be clear of conductive objects such as power lines, phone wires, etc., so height above the ground becomes important.
Basic antenna components are:
- Driven element - the parts connected to and receiving power from the receiver/transmitter
- Parasitic elements - the parts dependent upon resonance rather than connection to a power source
- A director - a parasitic element that reinforces radiation on a line pointing to it from the driven element
- A reflector - a parasitic element that reinforces radiation on a line pointing from it to the driven element.
A fundamental form of antenna is a single wire whose length approximately equals half the transmitting wavelength. Known as a dipole antenna, it is the unit from which many more complex forms of antennas are constructed. One of the most common forms of VHF antenna is the Yagi/beam, named for the Japanese scientist who first described the principles of combining a basic dipole (driven element) and parasitic elements. A common TV antenna is an example of this type. A Yagi/beam antenna is directional and therefore includes a rotator to aim (direct) the antenna. See yagi.
An omnidirectional antenna has a wide beamwidth and consequently does not require "tracking" (aiming the antenna toward the signal source). An example of an omnidirectional antenna is the turnstile antenna, a variation of the standard dipole antenna well suited for space communications. The quadrifilar helix antenna is omnidirectional and an inherently excellent antenna for ground station use. Quadrifilars are also used on NOAA's polar-orbiting satellites.
The parabolic reflector or satellite dish antenna collects RF signals on a passive dish-shaped surface. A feedhorn antenna - a simple dipole antenna mounted in a resonant tube structure (a cylinder with one open end) - transfers the RF energy to a transmission line. The bigger the dish, the greater the amount of RF energy intercepted, and therefore the greater the gain from the signal.

Antenna Array
An ordered assembly of elementary antennae spaced apart and fed in such a manner that the resulting radiation is concentrated in one or more directions.

Antenna Pattern
The focused pattern of electromagnetic radiation that is either received or transmitted by an antenna.

Anticyclone
A high-pressure area where winds blow clockwise in the Northern Hemisphere and counterclockwise in the Southern Hemisphere.

Apogee (aka Apoapsis or Apifocus)
On an elliptical orbit path, the point at which a satellite is farthest from the Earth. See Perigee, and the short orbital-geometry sketch after this group of entries.

Aquifer
Layer of water-bearing permeable rock, sand, or gravel capable of providing significant amounts of water.

ARGOS
French random-access Doppler data collection system. Used on NOAA's Polar-Orbiting Environmental Satellites (POES), ARGOS receives platform and buoy transmissions on 401.65 MHz. This data collection system now monitors more than 4,000 platforms worldwide, outputs data via VHF link, and stores them on tape for relay to a central processing facility.

Argument of Perigee (aka ARGP or w)
One of the six Keplerian elements, it gives the rotation of the orbit within its orbital plane. The argument (argument meaning angle) of perigee - perigee being the point on an orbital path when the satellite is closest to the Earth - is the angle (measured from the center of the Earth) from the ascending node to perigee. Example: When ARGP = 0 degrees, perigee occurs at the same place as the ascending node. That means that the satellite would be closest to Earth just as it rises up over the equator. When ARGP = 180 degrees, apogee would occur at the same place as the descending node. This means that the satellite would be farthest from Earth just as it rises over the equator.

Arctic Circle
The parallel of latitude that is approximately 66.5 degrees north of the equator and that circumscribes the northern frigid zone.

Artificial Intelligence (AI)
Neural networks. The branch of computer science that attempts to program computers to respond as if they were thinking - capable of reasoning, adapting to new situations, and learning new skills. Examples of artificial intelligence programs include those that can locate minerals underground and understand human speech.
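As a small illustration of the orbital-geometry terms defined above (apogee, perigee), the distances of the two apsides follow directly from an orbit's semi-major axis and eccentricity. The sketch below is illustrative only; the sample orbit values are hypothetical, and Earth's mean radius of 6371 km is an assumption used to convert radii to altitudes.

```python
# Rough illustration of apogee/perigee geometry for an elliptical orbit.
EARTH_RADIUS_KM = 6371.0  # assumed mean Earth radius

def apsis_altitudes(semi_major_axis_km: float, eccentricity: float):
    """Return (perigee_altitude_km, apogee_altitude_km) above the surface."""
    r_perigee = semi_major_axis_km * (1.0 - eccentricity)  # closest point
    r_apogee = semi_major_axis_km * (1.0 + eccentricity)   # farthest point
    return r_perigee - EARTH_RADIUS_KM, r_apogee - EARTH_RADIUS_KM

# Hypothetical near-circular polar orbit: a = 7200 km, e = 0.01
print(apsis_altitudes(7200.0, 0.01))  # approx. (757.0, 901.0) km
```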
Ascending Node
The point in an orbit (longitude) at which a satellite crosses the equatorial plane from south to north.

Aspect Ratio
The ratio of image width to image height. Weather Facsimile (WEFAX) images have a 1:1 aspect ratio (square); a conventional TV aspect ratio is 4:3 (rectangle).

Astronomical Unit (AU)
The distance from the Earth to the sun. On average, the sun is 149,599,000 kilometers (about 93 million miles) from Earth.

ATLAS (Atmospheric Laboratory for Applications and Science)
The focus of ATLAS is to study the chemistry of the Earth's upper atmosphere (mainly the stratosphere/mesosphere) and the solar radiation incident on the Earth system (both total solar irradiance and spectrally resolved radiance, especially ultraviolet). Science operations onboard ATLAS 1 (March 1992) and ATLAS 2 (March-April 1993) began a comprehensive and systematic collection of data that will help establish benchmarks for atmospheric conditions and solar output.

Atmosphere
The air surrounding the Earth, described as a series of shells or layers of different characteristics. The atmosphere, composed mainly of nitrogen and oxygen with traces of carbon dioxide, water vapor, and other gases, acts as a buffer between Earth and the sun. The layers - troposphere, stratosphere, mesosphere, thermosphere, and exosphere - vary around the globe and in response to seasonal changes.
Troposphere stems from the Greek word tropos, which means turning or mixing. The troposphere is the lowest layer of the Earth's atmosphere, extending to a height of 8-15 km (5-9 mi), depending on latitude. This region, constantly in motion, is the most dense layer of the atmosphere and the region that essentially contains all of Earth's weather. Molecules of nitrogen and oxygen compose the bulk of the troposphere. The tropopause marks the limit of the troposphere and the beginning of the stratosphere.
The temperature above the tropopause increases slowly with height up to about 50 km (31 mi). The stratosphere and stratopause stretch above the troposphere to a height of 50 km. It is a region of intense interactions among radiative, dynamical, and chemical processes, in which horizontal mixing of gaseous components proceeds much more rapidly than vertical mixing. The stratosphere is warmer than the upper troposphere, primarily because of a stratospheric ozone layer that absorbs solar ultraviolet energy.
The mesosphere, 50 to 80 km above the Earth, has a diminished ozone concentration, and radiative cooling becomes relatively more important. The temperature begins to decline again (as it does in the troposphere) with altitude. Temperatures in the upper mesosphere fall to -70 to -140 degrees Celsius, depending upon latitude and season. Millions of meteors burn up daily in the mesosphere as a result of collisions with some of the billions of gas particles contained in that layer. The collisions create enough heat to burn the falling objects long before they reach the ground. The stratosphere and mesosphere are referred to as the middle atmosphere.
The mesopause, at an altitude of about 80 km, separates the mesosphere from the thermosphere - the outermost layer of the Earth's atmosphere. The thermosphere, from the Greek thermo for heat, begins about 80 km above the Earth. At these high altitudes, the residual atmospheric gases sort into strata according to molecular mass. Thermospheric temperatures increase with altitude due to absorption of highly energetic solar radiation by the small amount of residual oxygen still present. Temperatures can rise to 2,000 degrees Celsius. Radiation causes the scattered air particles in this layer to become charged electrically, enabling radio waves to bounce off and be received beyond the horizon.
At the exosphere, beginning at 500 to 1,000 km above the Earth's surface, the atmosphere blends into space. The few particles of gas here can reach 4,500 degrees F (2,500 degrees C) during the day.
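The layer boundaries quoted in the atmosphere entry above lend themselves to a tiny lookup function. The following sketch is illustrative only; it hard-codes the approximate boundaries given in the entry (tropopause near 15 km, stratopause near 50 km, mesopause near 80 km, exosphere from about 500 km), which in reality vary with latitude and season.

```python
# Toy classifier based on the approximate layer boundaries given above.
def atmosphere_layer(altitude_km: float) -> str:
    if altitude_km < 15:
        return "troposphere"
    if altitude_km < 50:
        return "stratosphere"
    if altitude_km < 80:
        return "mesosphere"
    if altitude_km < 500:
        return "thermosphere"
    return "exosphere"

for h in (8, 30, 70, 300, 800):
    print(h, "km ->", atmosphere_layer(h))
```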
Atmospheric Infrared Sounder (AIRS)
Advanced sounding instrument selected to fly on the EOS-PM 1 mission (intermediate-sized, sun-synchronous, afternoon satellite) in the year 2000. It will retrieve vertical temperature and moisture profiles in the troposphere and stratosphere. Designed to achieve temperature retrieval accuracy of 1 degree C with a 1 km vertical resolution, it will fly with two operational microwave sounders. The three instruments will constitute an advanced operational sounding system, relative to the TIROS Operational Vertical Sounder (TOVS) currently flying on NOAA polar-orbiting satellites. See Earth Observing System, TIROS-N/NOAA.

Atmospheric Pressure
The amount of force exerted over a surface area, caused by the weight of air molecules above it. As elevation increases, fewer air molecules are present. Therefore, atmospheric pressure always decreases with increasing height. A column of air, 1 square inch in cross section, measured from sea level to the top of the atmosphere would weigh approximately 14.7 pounds. The standard value for atmospheric pressure at sea level is (a short unit-conversion sketch follows the entries below):
- 29.92 inches of mercury
- 760 mm of mercury
- 1013.25 millibars (mb)
- 101,325 pascals (Pa)

Atmospheric Radiation Measurements Program (ARM)
U.S. Department of Energy program for the continual, ground-based measurement of atmospheric and meteorological parameters over approximately a ten-year period. The program will study radiative forcing and feedbacks, particularly the role of clouds. The general program goal is to improve the performance of climate models, particularly general circulation models of the atmosphere.

Atmospheric Response Variables
Variables that reflect the response of the atmosphere to external forcing (e.g., temperature, pressure, circulation, etc.).

Atmospheric Window
The range of wavelengths at which water vapor, carbon dioxide, or other atmospheric gases only slightly absorb radiation. Atmospheric windows allow the Earth's radiation to escape into space unless clouds absorb the radiation. See greenhouse effect.

Atoll
A coral island consisting of a ring of coral surrounding a central lagoon. Atolls are common in the Indian and Pacific Oceans.

Attenuation
The decrease in the magnitude of current, voltage, or power of a signal in transmission between points. Attenuation may be expressed in decibels, and can be caused by interferences such as rain, clouds, or radio frequency signals.

Audio Frequencies
Frequencies that the human ear can hear (usually 30 to 20,000 cycles per second).

Automatic Picture Transmission (APT)
System developed to make real-time reception of satellite images possible whenever an APT-equipped satellite passes within range of an environmental satellite ground station. Transmission (analog video format) consists of an amplitude-modulated audible tone that can be displayed as an image on a computer monitor when received by an appropriate ground station. APT images are transmitted by polar-orbiting satellites such as the TIROS-N/NOAA satellites, Russia's METEOR, and the Chinese Feng Yun, which orbit 500-900 miles above the Earth, and offer both visible and infrared images. An APT image has thousands of squares called picture elements, or pixels. Each pixel represents a four-km square.
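Referring back to the standard sea-level values listed under the atmospheric pressure entry above, the sketch below shows how those equivalences can be used as conversion factors. It is illustrative only; the function names are our own.

```python
# Unit conversions anchored to the standard sea-level values quoted above:
# 1013.25 mb = 101,325 Pa = 29.92 inHg.
PA_PER_MILLIBAR = 100.0
PA_PER_INHG = 101_325.0 / 29.92  # derived from the quoted standard values

def millibar_to_pa(mb: float) -> float:
    return mb * PA_PER_MILLIBAR

def inhg_to_pa(inhg: float) -> float:
    return inhg * PA_PER_INHG

print(millibar_to_pa(1013.25))   # 101325.0 Pa
print(round(inhg_to_pa(29.92)))  # 101325 Pa
```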
Azimuth
The direction, in degrees referenced to true north, that an antenna must be pointed to receive a satellite signal (compass direction). The angular distance is measured in a clockwise direction.
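For readers who want to compute an azimuth rather than read it from a compass, the standard great-circle initial-bearing formula gives the direction, clockwise from true north, from a ground station toward a target's subpoint. The sketch below is illustrative only; the coordinates are hypothetical.

```python
import math

# Illustrative great-circle azimuth from point 1 to point 2,
# measured clockwise from true north as in the entry above.
def azimuth_deg(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees, clockwise from true north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

# Hypothetical station at 40N 75W pointing toward a subpoint at 45N 70W
print(round(azimuth_deg(40.0, -75.0, 45.0, -70.0), 1))
```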
Any discussion of Scythian mythology should begin with the fact that before the emergence of the Scythians in the steppes of Ukraine, the Trypillian and Pit (Yamna) cultures existed in this region in the 3rd millennium BC. The question of the ethnicity of the Trypillian culture is controversial, but the creators of the Pit culture were Turkic tribes (see the section Ethnicity of the Neolithic and Eneolithic cultures of Eastern Europe). I associate the Scythians with the Turkic tribe of the Bulgars, whose descendants are the modern Chuvash, whose language has preserved the most archaic features of the Proto-Turkic language (STETSYUK VALENTYN. 1998, 65-66; STETSYUK V.M. 1999, 85-95). In studying Scythian mythology, one cannot lose sight of the cultural continuity that could have operated for millennia. There is an extensive literature devoted to Scythian mythology in the world, but most researchers start from the erroneous premise that the Scythians were an Iranian people, and then selectively adopt more or less pertinent facts to explain the meaning of the names of gods, myths, and legends, or the scenes depicted on Scythian vases, and so on. The limitations of such an approach have not gone unnoticed and have, in fact, become a subject of considerable criticism: Attempts to reconstruct the "Scythian model of the world" were made in Scythology. However, from our point of view, they were not comprehensive because the reconstruction was carried out on an Indo-Iranian linguistic and mythological basis, the possibilities of which, as already mentioned, are limited for the interpretation of Scythian myths (HASANOV ZAUR. 2002: 354). The author of these lines undertook a serious attempt to reconstruct the "Scythian model of the religious and mythological system of the world on the basis of Turkic languages" (ibid: 303-358). This attempt inspires considerable confidence, but it remains incomprehensible why the Chuvash mythology and the Chuvash language were completely ignored in the study, even though they preserved the archaic world view and language of the ancient Turks. The model presented by Hasanov would be more complete if it drew on the spiritual heritage of the Chuvash. In this essay, we will try to eliminate this shortcoming in our approach to studying the mythology of the Scythians. Much linguistic and archaeological evidence convinces us that the Scythians were the Turkic Bulgars. The same is confirmed by Scythian mythology. By identifying the ancient Bulgars with the Scythians, we get a satisfactory explanation for the notable absence from Scythian worship of fire, the wheel, and the chariot, all of which are so characteristic of Iranian peoples. At the same time, Scythian mythology admits an acceptable deciphering of the names of the Scythian gods by means of the Chuvash language. Scythian cultural monuments find a certain reflection in Chuvash beliefs and customs, although some of them are not considered Scythian, such as the mysterious stone idol (see the photo at left). The totem pole found in the Zbruch River, a left tributary of the Dniester, in 1848 was traditionally considered a Slavic monument; however, some scholars believe it has no analogies in Slavic mythology and reveals a great similarity to Scythian stone sculptures. This gives grounds to look for clues to this carved image in Scythian mythology. According to Herodotus, the Scythian cult did not know images of gods, but M. Rostovtsev observed that these words contradict the facts.
He noted the existence of images of Scythian gods, which were made for the Scythians by the Greeks. However, ignoring the anthropomorphic stone sculptures in the steppes, he explained this fact as if they were allegedly a result of the Hellenization of the Iranian population of the steppes in a later period (ROSTOVTSEV M.I. 2002: 55). D. Rayevskiy argues that human images were borrowed by the Scythians from ancient Oriental and Ionian traditions. He examines the evolution of the sculptures and comes to the conclusion that they reflect a way of modeling the universe, pointing out that some researchers give them a mythological interpretation (RAYEVSKIY D. 2006: 246-254).

At right: Steppe stone sculpture. The park-museum of stone "babas" (Luhansk, a city in eastern Ukraine).

In 2011, a granite pillar was found in the village of Ruchaivka in the Zaporizhzhia Region. As investigation showed, it is a Scythian anthropomorphic stele (see the photo below).

Left: Stele from Ruchaivka, front side. Photo from the magazine "Archeologia" (OSTAPENKO M.A., PANCHENKO I.V. 2014, 61. Fig. 2).

According to the description of the figures, it belongs to a variety of anthropomorphic pillars. On the breast of the figure, which represents a man, there are two circles, obviously protective plates. The man holds a massive rhyton in his right hand. One clearly sees a belt with a buckle-clasp, on which hang an acinaces (short sword) and a quiver. The hands of the man are bent at the elbows and, as usually happens in such figures and in the tombs, the right hand is pressed to the chest while the left goes to the stomach (compare with the Zbruch idol and the photo below). The enigmatic meaning of this hand position has no explanation yet, but it was practiced in graves for thousands of years. In 1992, an expedition of the Lion Association (Lviv), under the direction of V.S. Artyukh, Ph.D., excavated a human burial with the same hand position at a Trypillian site near the village of Moshanets in the Kelmentsi district of the Chernivtsi Region. Obviously this ritual was adopted by the Scythian Bulgars from the Trypillians when they lived in close proximity. Ukrainian archaeologists, describing the Ruchaivka sculpture in detail, made the following generalization: It seems that in archaic times these monuments carried the dominant symbolism of the "world tree" or the phallus, on which anthropomorphic traits appear. Later they become more and more humanoid in form, with a gradual transition to anatomical sculptural plasticity and, possibly, to the personalization of an image (ibid, 63).

The human burial from the Trypillian site near the village of Moshanets. Photo by Valentyn Stetsyuk.

However, let us first start ab ovo and consider Herodotus' legend about the origin of the Scythians. According to this legend, the first man in the once desolate country was Targitaios (Ταργιτάον), the son of Zeus and a daughter of the Borysthenes River (HERODOTUS, 1993: IV, 5). The first part of the name has such correspondences in the Turkic languages: Old Turkic täŋri 1. "heaven", 2. "god"; Balk., Karach. tejri; Tur. tanri; Chuv. tură; Yakut. taŋara, etc., "god". Taking into consideration Old Turkic toj "feast" and Balk., Karach. toj, Tat., Chuv. tuj "wedding", the name Targitaios can be explained as "the wedding of gods". This wedding can be assigned to the category of "sacred marriages" of first ancestors, well known in mythology (LEVINTON G.A. 1991: 422-423).
Targitaios had three sons: Lipoxais (Λιπόξαϊν), Arpoxais (Ἀρπόξαϊν), and Colaxais (Κολάξαιν). V. Abayev, who is considered a great authority in Scythian linguistics, asserted that the second part of these names is -ksay and explained it as "a king-ruler" (Av. *xayaš "to shine"). He therefore gave the following etymology for Colaxais: *Xola-xayaša "Sun-king" (ABAYEV V.I. 1965: 35). The first part of the restored name is questionable in the absence of an Iranian *xola, though the words xor/xur "sun" are present in some Iranian languages. However, V. Abayev considered the transition r → l untypical of the Iranian languages and searched for explanations of it (Ibid: 36), but later, in another paper, he nevertheless acknowledged that "the first part is not clear" (ABAYEV V. I., 1979: 310). A.K. Shaposhnikov asserts that this name is not Indo-Iranian (SHAPOSHNIKOV A.K., 2005: 41). The two other names have usually been explained, without details, as Mountain-king and Depth-king, which allows one to see a connection of all the names with the elements of the universe as the Upper, Middle, and Lower Worlds (DUDKO D.M., 1988: 66). Another interpretation of all three names can be given by means of the Chuvash language. First of all, Turk. arpa "barley" (Chuv. urpa) attracts attention; further, there is the typically Turkic say, which, like ksay, can be the second part of all three names. In that case, these names can be divided into two parts as follows: Arpak-say, Colak-say, and Lipok-say. Chuv. săy "dish, course" together with arpa suits the explanation of the name Targitaios, as no wedding goes without a banquet. Thus, Arpoxais means "the dish of barley". The epenthetic sound k appeared in this word obviously owing to the similarity to the two other names and for ease of articulation. By analogy with the sense of the word arpa, kolak and lipok should also denote some kind of food. Chuv. kayăk "bird" can be taken for kolak according to sense, as Chuv. a corresponds to Old Turkic o, and Chuv. ă to Old Turkic a (RONA-TAS A. -1, 1987-1: 47). The only objection is the discrepancy l → y. Old Turkic l was kept in the Chuvash language, so this transition is not natural here, but in principle it often takes place in other languages (in particular, in Hungarian, ly is pronounced as y). On the other hand, there is the semantically close Chuv. kălăk "brood-hen", which increases the likelihood of the explanation of Colaxais as "dish of bird". As for the name Lipoxais, it was probably somewhat distorted by Herodotus or by his informant. Maybe Lipoxais should have sounded like Paliksais; then the first part of the name could well correspond to Turk. balyk "fish" (modern Chuv. pulă). Hence, at the wedding of the gods, three dishes would have been served - a course of bird, of barley, and of fish. The first dish could correspond to the Scythians' conception of the "Upper World", the second would refer to the "Middle World", and the third to the "Lower World". Such a personification of the elements of the universe fits better than the proposed idea of the mountain as the "Middle World", which can hardly be accepted; the mountain is closer to the concept of the "Upper World". The three-part model of the universe was adopted by the Bulgars from the Trypillians, who divided the world into three planes: the underground, earthly, and celestial spheres. This is evidenced by ornamental compositions on monuments of the Trypillian culture going back to the Chalcolithic period [ZALIZNIAK L.L. (Ed). 2005, 125].
According to Chuvash folk cosmogony, the world is also imagined in the form of three levels above the ground, and the Earth is a square. This representation generally corresponds to the Zbruch idol of which we speak. And, as it turns out, the Chuvash to this day represent ancient beliefs in the form of totem poles. The album of pictures of the sculptures in the ethnocultural park "Suvar" in the city of Cheboksary (Chuvashia) clearly demonstrates this. Among the several dozen wooden figures, we can find some that bear a resemblance to the Zbruch idol (see the photos above). Note the typical position of the hands of the left figure. In general, we have reason to assume that the erection of the Chuvash totem poles in ancient times had the same purpose as the anthropomorphic sculptures of the Scythians, which were "one of the objective embodiments of the cosmic pillar – the imagination of the world order" (RAYEVSKIY D. 2006: 251). In 1973, a stone idol dating back to the end of the 3rd – beginning of the 2nd millennium BC was found in the village of Kernosivka in the Novomoskovsk district of the Dnepropetrovsk Region. It has a quadrangular form just like the Zbruch idol. The images on the idol give a certain idea of the material and spiritual culture of the population of the Black Sea steppes of that time and of its connection with the Trypillian culture.

At left: The idol of Kernosivka. Historical Museum named after Dmitro Yavornytsky. Photo from the site Ukrainian antiquities.

On the central part of the idol there is a hunting scene, and three axes of various types are depicted. Above the girdle of the idol, the pattern resembles a turtle. In the lower part of the idol there is a phallus, and below it are two horses. On the left side of the idol there is an ornament, and under it are two people who seem to dance. Still lower is the figure of a bull. On the back there is the Tree of Life. In addition, there are images of the tools of a blacksmith or metallurgist. The legend of the origin of the Scythians narrates that during the reign of Targitai's sons a golden plow, yoke, ax, and bowl fell from the sky; they were obtained by the youngest, Colaxais, and along with them he received the whole Scythian kingdom. It is logical to assume that before falling from the sky, these objects belonged in the sky. Corresponding words related to these objects are at present absent among the astronomical names of the Indo-European peoples. In this regard, it is not possible to interpret the event by means of the Indo-European languages. A plausible interpretation was suggested by M.Ch. Jurtubayev. He connects the plow with the constellation Ursa Major, which in the Karachay-Balkar language is called Myryt dzhulduzla, "Constellation of the Ploughshare" (Kar., Balk. myryt "plowshare"). The constellation Libra is called by the Karachay-Balkars Boyunskha yunsa dzulduzla, Kar., Balk. bojunskha being "yoke". The constellation of the Northern Crown (Corona Borealis) resembles a cup and is called by the Karachay-Balkars Chemyuch julduzla (Kar., Balk. chemyuch "cup") (LAYPANOV K.T., MIZIEV I.M. 2010: 44). The correspondence to the ax is seen by Jurtubayev in the name of the constellation Orion, Guide dzhulduzl, but a suitable word was not found in the Karachay-Balkar language. The Chuvash called an ancient plow akapuç, and this word is present in the dialectal name of Ursa Major (Akapuç çăltăpĕ).
The analogy with the Karachay-Balkar name is obvious, and so is the similarity of the constellation to a plow (compare the photo of the Big Dipper at right). On the whole, the interpretation of the legend with the help of the Turkic languages is convincing, but how the Balkars and Karachays were connected with the Scythians has yet to be clarified. The self-name of the Balkars retains the ancient name of the Bulgars (Bolkar). There are also many Bulgar-Balkar lexical convergences (MIZIEV I.M. 2010: 305). The Balkars were neighbors of the Bulgars during the Khazar Khaganate and could have borrowed the names of the constellations from them.

At left: Goddess Tabiti (Ταβιτί) with a mirror in her hand. Gold plaque from the Chertomlyk mound.

Using the Chuvash vocabulary, we can explain the names of all the Scythian gods mentioned by Herodotus. The Chuvash pantheon, as an integral part of the common Turkic one, formed independently of the Greek, but Herodotus tried to find similarities between the Scythian and Greek gods. The most worshipped goddess among the Scythians was Tabiti, who corresponds to the chaste Greek Hestia, the goddess of the hearth and home. On this occasion, M. Rostovtsev said: At first glance, it seems strange to find in the Iranian pantheon a goddess with the non-Iranian name Tabithi occupying in it the highest place, while the supreme god takes only the second place (ROSTOVTZEFF M. 1922, 107). M. Rostovtsev gave his own elucidation of this fact, which apparently did not satisfy V. Abayev. Looking for parallels between Scythian and Ossetian mythology, Abayev found a match for Hestia and Tabiti in Safa, the Ossetian deity of the hearth and the hearth chain. His presentation of the chain to the people is especially accentuated (ABAYEV V.I. 1979, 10). As one can see, the similarities between this male deity and the goddess Tabiti are quite distant, and even Safa's name has no interpretation in the Ossetian language. Apparently, over time, Abayev realized the far-fetched nature of his interpretation and found another explanation for the name Tabiti, supposedly Ir. tapayati "warmer" (HASANOV ZAUR. 2002: 93). However, this meaning of the name is too prosaic and says nothing about the particularity of the goddess. On the contrary, her chastity is reflected in the Chuvash expression tupa tu "to give an oath", so the name may mean "she who gave the vow of celibacy". To this day the Chuvash have the custom of saying tupa tu (giving the oath of allegiance during the marriage), and Karachay-Balkar women swear "Tobady!" (LAYPANOV K.T., MIZIEV I.M. 2010: 43). Dmitry Rayevskiy mentions Herodotus' words about the especially sacred oaths of the Scythians by the "Royal Hestias" (τάς βασιλτηίας ίστίαζ), that is, oaths by Tabiti (RAYEVSKIY D. 2006, 65). He also explains the mirror in the hand of the goddess by its role as an essential traditional attribute in the wedding ritual and other ceremonies related to marriage (ibid. 73). As one can see, the modern Chuvash tradition is directly linked to this custom of the Scythians.

At right: The interior of a Bulgar cave sanctuary on the Dniester River near the village of Stinka. Sketch by Valentyn Stetsyuk, 1989.

The inscription "tupa tu" is present on the altar of the Bulgar cave temple on the bank of the Dniester River. It was deciphered by the author with the help of one of the variants of the Chuvash runic script.
Greek Zeus and Gaia had, according to Herodotus, the Scythian counterparts Papaios (Παπαῖος) and Api (Ἀπί), whose names can be understood as "Grandfather" and "Grandmother", i.e., "Primogenitors", according to Chuv. papay "grandfather" and Chuv. epi "midwife". Similar words exist in all Turkic languages, but the Chuvash words correspond to the Scythian gods to the greatest extent. The functions of the Greek god Apollo were various, but most frequently he acted as an archer or a destroyer (LOSEV A.F., 1991. MFW, Volume 1: 92-95). Herodotus connects him with Oitosyros in Scythian mythology, whose name can be understood as "he who calls down trouble" (Chuv. ayta "to call" and šar "trouble"). The name of the Scythian goddess Argimpasa (Ἀργίμπασα), who corresponded to the Greek goddess of fertility Aphrodite, can be explained by means of Chuv. arăm "wife" or ărăm "swear" and pusă "field". With a certain presumption, it is possible to explain also the name of the Scythian god Thagimasadas (Θαγιμασάδας), who corresponds to the Greek Poseidon, the god of the seas and of all the water elements. In his time Poseidon, trying to destroy Odysseus, broke up his raft; therefore Chuv. takana "trough" (which might earlier also have meant "boat") and šăt "to make a hole" may be of interest in this case. The Balkars and Karachays had Fuqmashaq, a god of water, rain, and natural disasters (LAYPANOV K.T., MIZIEV I.M. 2010: 42). Finally, let us speak of the legendary Amazons. This name (from Gr. Αμαζων) is known to us from Herodotus and denotes aggressive horsewomen. According to ancient Greek folk etymology, it was explained as α-μαζως, "breastless" (Gr. μαζως, poetically "breast"), as the Amazons, according to some Greek myths, cut off their right breast in order to shoot a bow more easily. The explanation is interesting, but it does not satisfy scholars.

Left: Hercules fighting an Amazon. (Metropolitan Museum of Art, New York, USA)

Another explanation can be given by means of the Chuvash vocabulary. This mysterious name could contain something similar to Chuv. çyn "person, man". Taking into account Chuv. amă "female, mother", we arrive at an explanation of the word Amazon as "the mother of people". Chuvash scholars give a different explanation: amă "female principle", çĕн "warrior-winner" (journal Life and Literature). In the Chuvash National Museum of Local Lore in the city of Cheboksary, immediately at the entrance, there stands a statue of an Amazon (see the photo at right). The explanation for this is that some of the customs of Chuvash women are reminiscent of the way of life of the Amazons, and elements of their traditional clothing are similar to combat armor.

Amazon woman from the Chuvash National Museum of Local Lore. (Photo from the site Arts and Crafts Fair)

Herodotus derived the origin of the Sauromatians from the marriage of the Amazons with Scythians and wrote: …now the Amazons are called by the Scythians Oiorpata, which name means in the Hellenic tongue "slayers of men", for "a man" they call oior, and pata means "to slay"… (HERODOTUS, 1993: IV, 110-116). Thus, Herodotus precisely specifies two Scythian words, oior and pata, and gives their meaning. In the modern Chuvash language ăyăr means "stallion", and patak means "stick". The first word could also mean "a male, he-", and therefore could also mean "a man". The second word can be a derivative of an unattested Chuv. pata "to beat, kill".
Mr. Fatih Şengül (Turkey) informed me that er, eyr, or uri means "man, husband" and pata means "to kill, to hit" in Turkic. The same version is confirmed by Zaur Hasanov, who says that "this point of view has a place in Azerbaijani science", drawing on Turk. *ar "husband" and bat, batyrmaq "to kill", "to die" (HASANOV ZAUR. 2002: 59). Herodotus wrote that the Scythians avoided borrowing the customs of other peoples, including the Hellenes; however, in studying the beliefs of the Scythians, he took the Greek pantheon of gods as absolute and tried to look for matches to it among the Scythians. He may not have paid attention to some of the Scythian beliefs, considering them "barbaric". However, the worship of the Tree of Life, which has survived among the Chuvash until now, has its roots in Scythian times and in an even more ancient past: … the original image of the "tree of life"… can be found on the plates of stone boxes or slabs overlapping graves, as well as on the poles of burials of the Pit (Yamna) culture (ALEKSEYEVA I.L. 1991: 22). The symbol of the Tree of Life is the main element of the state symbols of the Chuvash – the emblem and the flag. Its form, with branches hanging down, is like the form of the Tree of Life on the amphora from the Chortomlyk Scythian burial mound. Moreover, a stylized Tree of Life is also present on the pottery of the Chornolis culture, from which Scythian culture developed (see the section "Genesis of Scythian Culture"): Among ornamented lugs (of Chornolis cups – V.S.), the earliest ones are decorated with motifs of the tree (KRUSHELNYTS'KA L.I. 1998, 165).

Left: Ornamentation of a cup lug of the Chornolis culture (KRUSHELNYTS'KA L.I. 1998, 158, Fig. 95, 30). In the center: The Scythian amphora from the Chortomlyk burial mound; the Tree of Life on the vase is central. Right: The shape of the Tree of Life in the Chuvash emblem and flag.

Larissa Krushelnyts'ka believes that these motifs go back to a type of the late Komariv culture (Ibid, 156). She repeatedly stressed the continuity of the Komariv (15th-12th centuries BC), Vysotska (11th-7th centuries BC), and Chornolis cultures in her writings. From the above it can be concluded that Scythian mythology is better deciphered by means of the Chuvash language and traditions. One can only wonder at the conservatism of the human mind when, time and again, we find statements that the Scythians were the ancestors of the modern-day Ossetians. Indeed, inscrutable are Thy ways, O Lord.
Vaccine Effectiveness - How Well Does the Flu Vaccine Work?
Questions & Answers

On This Page
- How effective is the flu vaccine?
- What factors influence how well the vaccine works?
- What are the benefits of flu vaccination?
- Is the flu vaccine effective against all types of flu and cold viruses?
- Does flu vaccine effectiveness vary by type or subtype?
- Why is flu vaccine typically less effective against influenza A(H3N2) viruses?
- How effective is the flu vaccine in the elderly?
- If older people have weaker immune responses to flu vaccination, should they still get vaccinated?
- How effective is the flu vaccine in children?
- How are benefits of vaccination measured?
- How does CDC present data on flu vaccine effectiveness?
- Why are confidence intervals important for understanding flu vaccine effectiveness?
- Is it true that getting vaccinated repeatedly can reduce vaccine effectiveness?
- Why are there so many different outcomes for vaccine effectiveness studies?
- How does CDC measure how well the vaccine works?
- What do recent vaccine effectiveness studies show?
- Do recent vaccine effectiveness study results support flu vaccination?
- Where can I get more information?
- Besides vaccination, how can people protect themselves against the flu?

How effective is the flu vaccine?
CDC conducts studies each year to determine how well the influenza (flu) vaccine protects against flu illness. While vaccine effectiveness can vary, recent studies show that flu vaccination reduces the risk of flu illness by between 40% and 60% among the overall population during seasons when most circulating flu viruses are well-matched to the flu vaccine. In general, current flu vaccines tend to work better against influenza B and influenza A(H1N1) viruses and offer lower protection against influenza A(H3N2) viruses. See "Does flu vaccine effectiveness vary by type or subtype?" and "Why is flu vaccine typically less effective against influenza A(H3N2) viruses?" for more information.

What factors influence how well the vaccine works?
How well the flu vaccine works (or its ability to prevent flu illness) can range widely from season to season. The vaccine's effectiveness also can vary depending on who is being vaccinated. At least two factors play an important role in determining the likelihood that flu vaccine will protect a person from flu illness: 1) characteristics of the person being vaccinated (such as their age and health), and 2) the similarity or "match" between the flu viruses the flu vaccine is designed to protect against and the flu viruses spreading in the community. During years when the flu vaccine is not well matched to circulating influenza viruses, it is possible that no benefit from flu vaccination may be observed. During years when there is a good match between the flu vaccine and circulating viruses, it is possible to measure substantial benefits from flu vaccination in terms of preventing flu illness. However, even during years when the flu vaccine match is good, the benefits of flu vaccination will vary, depending on various factors like the characteristics of the person being vaccinated, what influenza viruses are circulating that season and even, potentially, which flu vaccine was used. Each flu season researchers try to determine how well flu vaccines work as a public health intervention. Estimates of how well a flu vaccine works can vary based on study design, outcome(s) measured, population studied and the season in which the flu vaccine was studied. These differences can make it difficult to compare one study's results with another's.
While determining how well a flu vaccine works is challenging, in general, recent studies have supported the conclusion that flu vaccination benefits public health, especially when the flu vaccine is well matched to circulating flu viruses.

What are the benefits of flu vaccination?
While how well the flu vaccine works can vary, there are many reasons to get a flu vaccine each year.
- Flu vaccination can keep you from getting sick with flu.
- Flu vaccination can reduce the risk of flu-associated hospitalization, including among children and older adults.
- Vaccine effectiveness for the prevention of flu-associated hospitalizations was similar to vaccine effectiveness against flu illness resulting in doctor's visits in a comparative study published in 2016.
- Flu vaccination is an important preventive tool for people with chronic health conditions.
- Flu vaccination has been associated with lower rates of some cardiac (heart) events among people with heart disease, especially among those who experienced a cardiac event in the past year.
- Flu vaccination also has been associated with reduced hospitalizations among people with diabetes (79%) and chronic lung disease (52%).
- Vaccination helps protect women during and after pregnancy. Getting vaccinated can also protect a baby after birth from flu. (Mom passes antibodies onto the developing baby during her pregnancy.)
- A study that looked at flu vaccine effectiveness in pregnant women found that vaccination reduced the risk of flu-associated acute respiratory infection by about one half.
- There are studies that show that flu vaccine in a pregnant woman can reduce the risk of flu illness in her baby by up to half. This protective benefit was observed for several months after birth.
- And a 2017 study was the first of its kind to show that flu vaccination can significantly reduce a child's risk of dying from influenza.
- Flu vaccination also may make your illness milder if you do get sick. (For example, a 2017 study showed that flu vaccination reduced deaths, intensive care unit (ICU) admissions, ICU length of stay, and overall duration of hospitalization among hospitalized flu patients.)
- Getting vaccinated yourself also protects people around you, including those who are more vulnerable to serious flu illness, like babies and young children, older people, and people with certain chronic health conditions.

Is the flu vaccine effective against all types of flu and cold viruses?
Seasonal flu vaccines are designed to protect against infection and illness caused by the three or four influenza viruses (depending on vaccine) that research indicates will be most common during the flu season. "Trivalent" flu vaccines are formulated to protect against three flu viruses, and "quadrivalent" flu vaccines protect against four flu viruses. Flu vaccines do NOT protect against infection and illness caused by other viruses that can also cause flu-like symptoms. There are many other viruses besides flu viruses that can result in flu-like illness (also known as influenza-like illness or "ILI") that spread during the flu season. These non-flu viruses include rhinovirus (one cause of the "common cold") and respiratory syncytial virus (RSV), which is the most common cause of severe respiratory illness in young children, as well as a leading cause of death from respiratory illness in those aged 65 years and older.

Does flu vaccine effectiveness vary by type or subtype?
Yes. The amount of protection provided by flu vaccines may vary by influenza virus type or subtype even when recommended flu vaccine viruses and circulating influenza viruses are alike (well matched).
Since 2009, VE studies looking at how well the flu vaccine protects against medically attended illness have suggested that when vaccine viruses and circulating flu viruses are well-matched, flu vaccines provide better protection against influenza B or influenza A(H1N1) viruses than against influenza A(H3N2) viruses. A study that looked at a number of VE estimates from 2004-2015 found an average VE of 33% (CI = 26%-39%) against H3N2 viruses, compared with 61% (CI = 57%-65%) against H1N1 and 54% (CI = 46%-61%) against influenza B viruses. VE estimates were lower when vaccine viruses and circulating viruses were different (not well-matched). The same study found a pooled VE of 23% (95% CI: 2% to 40%) against H3N2 viruses when circulating influenza viruses were significantly different from (not well-matched to) the recommended influenza A(H3N2) vaccine component.

Why is flu vaccine typically less effective against influenza A(H3N2) viruses?
There are a number of reasons why flu vaccine effectiveness against influenza A(H3N2) viruses may be lower.
- While all influenza viruses undergo frequent genetic changes, the changes that have occurred in influenza A(H3N2) viruses have more frequently resulted in differences between the virus components of the flu vaccine and circulating influenza viruses (i.e., antigenic change) compared with influenza A(H1N1) and influenza B viruses. That means that between the time when the composition of the flu vaccine is recommended and the flu vaccine is delivered, H3N2 viruses are more likely than H1N1 or influenza B viruses to have changed in ways that could impact how well the flu vaccine works.
- Growth in eggs is part of the production process for most seasonal flu vaccines. While all influenza viruses undergo changes when they are grown in eggs, changes in influenza A(H3N2) viruses tend to be more likely to result in antigenic changes compared with changes in other influenza viruses. These so-called "egg-adapted changes" are present in vaccine viruses recommended for use in vaccine production and may reduce their potential effectiveness against circulating influenza viruses. Other vaccine production technologies, e.g., cell-based vaccine production or recombinant flu vaccines, could circumvent this shortcoming associated with the use of egg-based candidate vaccine viruses in egg-based production technology, but CDC also is using advanced molecular techniques to try to get around this shortcoming.

How effective is the flu vaccine in the elderly?
Older people with weaker immune systems often have a lower protective immune response after flu vaccination compared to younger, healthier people. This can make them more susceptible to the flu. Although immune responses may be lower in the elderly, vaccine effectiveness has been similar in most flu seasons among older adults and those with chronic health conditions compared to younger, healthy adults.

If older people have weaker immune responses to flu vaccination, should they still get vaccinated?
Despite the fact that older adults (65 years of age and older) have weaker immune responses to flu vaccines, there are many reasons why people in that age group should be vaccinated each year.
- First, people aged 65 and older are at increased risk of serious illness, hospitalization and death from the flu.
- Second, while the effectiveness of the flu vaccine can be low among older people, there are seasons when significant benefit can be observed. Even if the vaccine provides less protection in older adults than it might in younger people, some protection is better than no protection at all, especially in this high risk group.
- Third, flu vaccine may protect against more serious outcomes like hospitalization and death.
- For example, one study concluded that one death was prevented for every 4,000 people vaccinated against the flu.
- In frail elderly adults, hospitalizations can mark the beginning of a significant decline in overall health and mobility, potentially resulting in loss of the ability to live independently or to complete basic activities of daily living. While the protection elderly adults obtain from flu vaccination can vary significantly, a yearly flu vaccination is still the best protection currently available against the flu.
- There is some data to suggest that flu vaccination may reduce flu illness severity; so while someone who is vaccinated may still get infected, their illness may be milder.
- Fourth, it's important to remember that people who are 65 and older are a diverse group and often are different from one another in terms of their overall health, level of activity and mobility, and behavior when it comes to seeking medical care. This group includes people who are healthy and active and have responsive immune systems, as well as those who have underlying medical conditions that may weaken their immune system and their bodies' ability to respond to vaccination. Therefore, when evaluating the benefits of flu vaccination, it is important to look at a broader picture than what one study's findings can present.

How effective is the flu vaccine in children?
Vaccination has consistently been found to provide a similar level of protection against flu illness in children to that seen among healthy adults. In one study, flu vaccine effectiveness was higher among children who received two doses of flu vaccine the first season that they were vaccinated (as recommended) compared to "partially vaccinated" children who only received a single dose of flu vaccine. However, the partially vaccinated children still received some protection. Flu vaccine can prevent severe, life-threatening illness in children, for example:
- A 2014 study showed that flu vaccine reduced children's risk of flu-related pediatric intensive care unit (PICU) admission by 74% during flu seasons from 2010-2012.
- In 2017, a study in the journal Pediatrics was the first of its kind to show that flu vaccination also significantly reduced a child's risk of dying from the flu. The study, which looked at data from four flu seasons between 2010 and 2014, found that flu vaccination reduced the risk of flu-associated death by half (51 percent) among children with underlying high-risk medical conditions and by nearly two-thirds (65 percent) among healthy children.

How are benefits of vaccination measured?
Public health researchers measure how well flu vaccines work through different kinds of studies. In "randomized studies," flu vaccination is randomly assigned, and the number of people who get flu in the vaccinated group is compared to the number that get flu in the unvaccinated group. Randomized studies are the "gold standard" (best method) for determining how well a vaccine works. The effects of vaccination measured in these studies is called "efficacy." Randomized studies are expensive and are not conducted after a recommendation for vaccination has been issued, as withholding vaccine from people recommended for vaccination would place them at risk for infection, illness and possibly serious complications. For that reason, most U.S. studies conducted to determine the benefits of flu vaccination are "observational studies." "Observational studies" compare the occurrence of flu illness in vaccinated people with that in unvaccinated people, based on their decision to be vaccinated or not.
CDC typically presents vaccine effectiveness (VE) as a single point estimate: for example, 60%. This point estimate represents the reduction in risk provided by the flu vaccine. CDC vaccine effectiveness studies measure two outcomes: laboratory-confirmed flu illness that results in a doctor's visit, or laboratory-confirmed flu that results in hospitalization. For these outcomes, a VE point estimate of 60% means that the flu vaccine reduces a person's risk of that outcome by 60%. In addition to the VE point estimate, CDC also provides a "confidence interval" (CI) for the point estimate: for example, 60% (95% CI: 50%-70%). The confidence interval provides a lower boundary for the VE estimate (e.g., 50%) as well as an upper boundary (e.g., 70%). One way to interpret a 95% confidence interval is that if CDC were to repeat this study 100 times, then 95 times out of 100 the true VE value would fall within the confidence interval (i.e., on or between 50% and 70%). There remains the possibility that five times out of 100 (a 5% chance) the true VE value could fall outside the 50%-70% confidence interval. Confidence intervals are important because they provide context for understanding the precision of a VE point estimate. The wider the confidence interval, the less precise the point estimate of vaccine effectiveness becomes (a short worked example in code follows at the end of this passage). Take, for example, a VE point estimate of 60%. If the confidence interval of this point estimate is 50%-70%, then we can have greater certainty that the true protective effect of the flu vaccine is near 60% than if the confidence interval were 10%-90%. Furthermore, if a confidence interval crosses zero, for example (-20% to 60%), then the VE point estimate is "not statistically significant." People should be cautious when interpreting VE estimates that are not statistically significant, because such results cannot rule out the possibility of zero VE (i.e., no protective benefit). The width of a confidence interval is related in part to the number of participants in the study, so studies that provide more precise estimates of VE (and consequently have narrower confidence intervals) typically include a large number of participants.
Some studies do suggest that flu vaccine effectiveness may be higher in people receiving flu vaccine for the first time compared to people who have been vaccinated more than once; other studies have found no evidence that repeat vaccination results in a person being less protected against flu. Immune responses to vaccination may be higher among people who were not vaccinated in a previous season, but repeatedly vaccinated people (i.e., people who receive the flu vaccine each year) may still have increased immune responses after vaccination. Two reviews of multiple studies have found that for people vaccinated in the prior season, vaccination in the subsequent season provides additional protection against flu. Information on flu vaccination history is particularly important to these types of evaluations and can be difficult to confirm, as accurate vaccination records are not always readily available. In addition, people who choose to get vaccinated every year may have different characteristics and susceptibility to flu compared to those who do not seek vaccination every year.
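Returning to the point estimate and confidence interval discussed above, here is the promised worked example: a minimal sketch that assumes the test-negative design widely used in observational flu VE studies, in which VE is estimated as (1 - odds ratio) x 100 and the interval is derived from the standard error of the log odds ratio. All counts and the 5% baseline risk below are hypothetical, chosen only for illustration.

```python
import math

def ve_from_test_negative(vax_pos, vax_neg, unvax_pos, unvax_neg, z=1.96):
    """VE point estimate and 95% CI from a test-negative 2x2 table.

    VE = (1 - odds ratio) * 100; the interval comes from the usual
    standard error of the log odds ratio. All counts are hypothetical.
    """
    odds_ratio = (vax_pos * unvax_neg) / (unvax_pos * vax_neg)
    se_log_or = math.sqrt(1 / vax_pos + 1 / vax_neg + 1 / unvax_pos + 1 / unvax_neg)
    log_or = math.log(odds_ratio)
    or_low = math.exp(log_or - z * se_log_or)
    or_high = math.exp(log_or + z * se_log_or)
    # A higher odds ratio means lower effectiveness, so the bounds swap.
    return 100 * (1 - odds_ratio), 100 * (1 - or_high), 100 * (1 - or_low)

# Hypothetical season: among people tested for flu-like illness,
# 200 vaccinated and 400 unvaccinated visits were flu-positive;
# 1,000 vaccinated and 800 unvaccinated visits were flu-negative.
ve, ci_low, ci_high = ve_from_test_negative(200, 1000, 400, 800)
print(f"VE = {ve:.0f}% (95% CI: {ci_low:.0f}%-{ci_high:.0f}%)")
# -> VE = 60% (95% CI: 51%-67%)

# Reading a point estimate as absolute benefit: with a hypothetical 5%
# seasonal risk of medically attended flu among unvaccinated people,
# a 60% VE implies that vaccinating about 33 people prevents one illness.
baseline_risk = 0.05
nnv = 1 / ((ve / 100) * baseline_risk)
print(f"Number needed to vaccinate: about {nnv:.0f}")
```

With these invented counts the estimate lands at 60% (95% CI: 51%-67%); shrinking the study would widen the interval, which is exactly the precision effect described above.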
CDC thinks that these findings merit further investigation to understand the immune response to repeat vaccination, and it supports continued efforts to monitor the effects of repeat vaccination each year. However, based on the substantial burden of flu in the United States, and on the fact that most studies point to vaccination benefits, CDC recommends yearly flu vaccination as the first and most important step in protecting against flu and its complications.
Vaccine effectiveness studies that measure different outcomes are conducted to better understand the different kinds of benefits provided by vaccination. Ideally, public health researchers want to know how well flu vaccines work to prevent illness resulting in a doctor's visit, illness resulting in hospitalization, and flu-associated death, so that the benefits of vaccination can be evaluated against illness of varying severity. Because estimates of vaccine effectiveness may vary based on the outcome measured (in addition to the season, the population studied and other factors), results should only be compared between studies that used the same outcome for estimating vaccine effectiveness.
Scientists continue to work on better ways to design, conduct and evaluate non-randomized (i.e., observational) studies to assess how well flu vaccines work. CDC has been working with researchers at universities and hospitals since the 2003-2004 flu season to estimate how well flu vaccine works through observational studies using laboratory-confirmed flu as the outcome. These studies currently use a very accurate and sensitive laboratory test known as RT-PCR (reverse transcription polymerase chain reaction) to confirm medically attended flu virus infections as a specific outcome. CDC's studies are conducted in five sites across the United States to gather more representative data. To assess how well the vaccine works across different age groups, CDC's studies of flu vaccine effects have included all people aged 6 months and older who are recommended for an annual flu vaccination. Similar studies are being done in Australia, Canada and Europe. More recently, CDC has set up a second network, the Hospitalized Adult Influenza Vaccine Effectiveness Network (HAIVEN), that looks at how well flu vaccine protects against flu-related hospitalization among adults aged 18 and older.
CDC conducts studies each year to determine how well the flu vaccine protects against flu illness. These estimates provide more information about how well a given season's vaccine is working. Recent studies show that vaccine can reduce the risk of flu illness by 40%-60% in the overall population during seasons when most circulating flu viruses are well matched to the flu vaccine. The large numbers of flu-associated illnesses and deaths in the United States, combined with the evidence from many studies showing that flu vaccines help to provide protection, support the current U.S. flu vaccination recommendations. It is important to note, however, that how well flu vaccines work will continue to vary each year, depending especially on the match between the flu vaccine and the flu viruses that are spreading and causing illness in the community, as well as on the characteristics of the person being vaccinated. CDC has compiled a list of selected publications related to vaccine effectiveness.
Getting a flu vaccine each year is the best way to prevent the flu. Antiviral drugs are an important second line of defense against the flu; these drugs must be prescribed by a doctor.
In addition, good health habits, such as covering your cough and frequently washing your hands with soap, can help prevent the spread of the flu and other respiratory illnesses.
Table of Contents
1. Titus Andronicus and the Importance of Rome
2. The Character Rome
3. The Barbarians
4. "That Rome is but a wilderness of tigers?"
5. Conclusion: Rome's near future
6. Works Cited
The Roman tragedy "Titus Andronicus" focuses on the fates of different characters inside and outside of Rome. The characters are influenced by the place and state of Rome, which seems to follow a clear order. However, this order is interrupted when people from outside Rome are able to gain power inside Rome. The following pages will focus on the presentation and relevance of Rome in Titus Andronicus and on how the morals and ideas of the empire come under strain in the play.
1. Titus Andronicus and the Importance of Rome
In Shakespeare's Roman tragedies the importance of the place and state of Rome is always underlined. The Roman characters are willing to fight, kill and die for the cause of Rome; moreover, they would even sacrifice their children and loved ones for it. This can be seen, for example, in Coriolanus, where Volumnia would sacrifice her son Coriolanus for Rome, or in Julius Caesar, where Brutus decides to join the conspirators against his friend Caesar. Rome is the greatest good known to the Roman characters, and this also becomes clear in the play Titus Andronicus. One indication of Rome's greatness is the context in which the name Rome is used: if the characters in the play want to express how important or great another character is to them, they use the word "Rome" as a reference. Titus Andronicus, for example, is called "Rome's best champion" (I.1.68; 147) or "Rome's best citizen" (I.1.167; 150). Hence, "Rome" serves as the highest form of praise when describing a character. Titus Andronicus, the main character of the play, seems to represent the ideal traditional, patriotic Roman. In the first act Titus comes back from a war which he was fighting for Rome. Titus states that of his "[...] five-and-twenty valiant sons" (I.1.82; 148) he brings home some alive but also some dead. It is not clear whether he really had twenty-five sons; if that were the case, only four of them (Lucius, Quintus, Martius and Mutius) are still alive at the beginning of the play. But even if the soldiers who died during the war were merely as close to Titus as sons, this strong choice of words underlines his loss. Some men very close and probably related to Titus died for the cause of Rome, which in this case was the conquest of the Goths. He brings home the captured Queen of the Goths and her sons, and, following a Roman tradition, he sacrifices her eldest son. Titus in this first scene embodies the typical victorious Roman hero. Because of this, and also because of his former successes for Rome, the people and his brother Marcus Andronicus ask Titus to be a candidate for emperor, but he declines (cf. I.1.180-190; 150). Titus gives up the chance to become emperor because his perpetual intention is to make Rome the best state possible; in his opinion he would not be the best potential emperor for Rome. He declares that "A better head her glorious body fits/ than his that shakes for age and feebleness" (I.1.190-191; 150). Because Titus declines, Saturninus becomes the new emperor. Shakespeare has chosen to set this play in a fictional Roman reign, for there was never actually an emperor with the name Saturninus.
Katharine Eisaman Maus writes in her introduction to Titus Andronicus that even though the plot is fictional, "at the same time, [Shakespeare] puts a great deal of emphasis on the play's 'Roman-ness,' making constant reference to classical myths, to legendary and historical figures, to imperial institutions, to the places and customs of ancient Rome" (Maus, 136). Hence, Shakespeare consciously makes Rome and its fate one of the central aspects of the play. To show his gratitude to Titus, Saturninus wants to choose Lavinia, Titus's daughter, as his bride (cf. I.1.244; 151). However, Bassianus, Saturninus's brother, insists that Lavinia was already chosen to be his bride (cf. I.1.278-279; 152). Lavinia's forced refusal of the empress's position subsequently leads to the death of her brother Mutius: Mutius chooses to support his sister in acting against the will of the new Roman emperor (cf. I.1.292; 152). As a result, Titus kills his own son. Here, in this first act, the reader or audience can see most plainly how important Rome is to Titus: after killing Mutius, Titus states that Mutius was no longer his son because "[his] sons would never so dishonor [him]" (I.1.298; 152). His son turning against the will of the new Roman emperor seems to Titus the worst possible disgrace and scandal. Here again Titus is characterized as the most righteous, honorable and traditional Roman: he would rather kill his own child than support any dishonoring of Rome. Titus is so ashamed of Mutius that he does not even want him buried in the family tomb, because that place was made only for "soldiers and Rome's servitors" (I.1.355; 154). Marcus, Titus's brother, eventually convinces Titus to bury Mutius in the family tomb, but even when Titus finally allows it, he points out again that this is the worst day of his life, because he was "[...] dishonored by [his] sons in Rome" (I.1.387; 154). The cause of Rome is therefore definitely more important to Titus than his own family. Titus, like all the other characters in this play, is aware "of the glorious Roman past as it is enshrined in narrative" (Maus, 138), but the depiction of his character even seems to surpass perceptions of past Roman heroes. Saturninus then decides to propose to Tamora, the captured Queen of the Goths, and takes her as his bride. Titus has to kneel down in front of his former captive, who is striving for revenge. Nevertheless, Titus does not doubt the new Roman emperor: at the end of the first act he is still the same loyal believer in and follower of Rome whom we met in the beginning.
2. The Character Rome
The real meaning of Rome in this play, and especially the meaning of Rome to the Roman characters, is hard to grasp for an outside observer; the totality of Rome seems incomprehensible. Rome in this play is not only a mystical "thing" but can probably be viewed as a character in its own right. Regarding the whole play, Rome appears to be the most important and powerful leading character. Rome is personified through various adjectives throughout the play: glorious, forlorn, desperate, royal, kind, proud, ungrateful/ingrateful and ambitious are some of the adjectives which depict Rome more as a "human" being than as a place or state. These adjectives make clear that Rome seems able to change her mood. Rome also takes action in the play, when she rewards love or grows miserable.
But compared to the other characters, Rome is still not as driven by feelings, nor as vulnerable, as they are. In addition to the personification of Rome, the sex of the character is also given: Rome is a woman. This can be derived from several quotations in the play, e.g. "her glorious body" (I.1.190; 150), "Let Rome herself [...] and she [...]" (V.3.72-72; 196), "To heal Rome's harms and wipe away her woe" (V.3.147; 197) or "her enemies" (V.3.102; 196). Considering Shakespeare's time, one reason why Rome was chosen to be a female character might be the connection to the Queen of England: the Shakespearean audience was used to a female leader who stood above them all. The Roman characters in the play do everything to please Rome; as a consequence, Rome exemplifies the image of a strong and powerful leader. But still she could not survive without her people. The old emperor is dead at the beginning of the play and his sons are fighting over the position; hence, Rome is "headless" (I.1.189; 150). She needs a new, strong Roman head in order not to fall apart. Rome needs a clear and strict system to function. The image of the human body is used to make this clear: the human body needs every part, every cell, to work in exactly the right, organized way to survive. If one cell does not function as part of the team, the body starts to weaken and begins to crumble. It gets even worse when the body is invaded by a cell which does not belong there: one parasite from outside can destroy the whole organism. The parasite in this play is Tamora, the other strong female character, a counterpart to royal Rome. Tamora herself states that she is "incorporate[d] in Rome" (I.1.464; 156), which sounds as if she forced herself, or maybe even was forced, into the body. Saturninus gives Tamora the opportunity to attack Rome's central body part: the head. He takes her to the Pantheon (cf. I.1.336; 153) to marry her, leads her into the Capitol and gives her power over Rome's body. The parasite Tamora is not alone: she brings her two sons, Chiron and Demetrius, and her lover, Aaron, into the head of Rome. None of them is familiar with Roman traditions. They do not belong there, which is also made clear by their outlandish outward appearance (cf. Maus, 140). As the play goes on, the dangerous consequences for Rome's health and traditions become visible. The chosen leading head, infiltrated by the parasite, is not strong enough. These circumstances lead Rome to scatter and break her limbs (cf. V.3.70-72; 196). This decay of Rome's body is strongly connected to the brutality in the play: the characters are all violently hurt and suffer massive pain. Not even Rome, the powerful character who rules them all, can avoid physical pain. Her suffering, however, makes her even more human. In contrast to the other main characters of the play, Rome survives the attack on her body in the end, which again underlines that she can be seen as mightier than the other characters. Given this depiction of the character Rome, her decay seems to be rooted in the conflict between the barbarians who invade Rome and the noble Romans. But as the play goes on it becomes more and more difficult to differentiate between inside and outside Rome. Related to this, the reader or the audience might later wonder who the real barbarians are.
3. The Barbarians
In the beginning of the play the difference between the cruel barbarians and the Romans is still clearly depicted: while the Romans are presented as heroes, the barbarians are the wild ones in chains, knowing nothing of Roman culture. Throughout the play the barbarians act against the Roman values without any sign of regret or conscience. They are aware of their actions and, as Aaron points out later in the play, they enjoy them: "But I have done a thousand dreadful things / As willingly as one would kill a fly, / And nothing grieves me heartily indeed / But that I cannot do ten thousand more" (V.1.141-144; 189). For the audience, the barbarian actions really begin to show once the play leaves Rome and moves into a forest. This place outside of Rome is described by Aaron as "fitted by kind for rape and villainy" (II.1.117; 159), and Titus later pictures the forest as "by nature made for murders and for rape" (IV.1.58; 176). In contrast to this depiction, Tamora seems to like the place outside Rome: she describes it as a "gleeful boast" and goes on picturing it pleasantly as follows: "The birds chant melody on every bush, / The snake lies rolled in the cheerful sun, / The green leaves quiver with the cooling wind / And make a checkered shadow on the ground" (II.3.11-15; 161). Her description seems to fit a fairytale, but for the Roman characters it will turn into a nightmare. The forest becomes a place without any Roman virtue, a total contrast to the safe living environment inside the city walls. It turns into a place without rules: there is no Senate to keep order or to protect the people. The known hierarchy is thus replaced by a system of "survival of the fittest." The Romans might imagine a similar place to be the home of the barbaric Goths; likewise it is a place outside civilization and used only for violent actions, like hunting. The forest first becomes the setting for the murder of Bassianus, the old emperor's second son. Bassianus is stabbed by Chiron and Demetrius; the crime was planned beforehand by Aaron. The barbarian actions proceed when Tamora, the former Queen of the barbarian Goths and now empress of Rome, gives the order to rape Lavinia (II.3.131-132; 163). Tamora justifies her command on the grounds that Titus has sacrificed her son. She is not striving merely for a "just punishment" by killing his daughter; she states that "The worse to her, the better loved of me" (II.3.167; 164). To the reader, Tamora appears as the most barbaric and cruel character so far. This depiction is underlined by Lavinia's language in this scene: Tamora is pictured as an inhumane predator when Lavinia compares her to a tiger and a lion. Moreover, she calls Tamora a "beastly creature," which emphasizes the image of an animalistic, wild, inhumane and anti-Roman being (cf. II.3.142-182; 164). When Titus's sons, Martius and Quintus, arrive in the forest, they too sense that this is a dangerous place, even before they know what has happened to their sister ("A very fatal place it seems to me" (II.3.202; 165)). Continuing the chain of disaster, Martius and Quintus are then lured into a trap, again planned by Tamora and Aaron. To Saturninus it now appears as if the two had killed his brother Bassianus. Considering the events of this scene, no trace of Roman society and its traditions remains.
While the first act, set in the Capitol, concentrated on Rome and the values of Roman society, the play has now moved to a totally different state, in both location and moral values. Tamora stated in the first act that she is "A Roman now adopted happily" (I.1.465; 156) and that "all quarrels die" (I.1.467; 156). The audience may have sensed already at that point that Tamora was lying, and now it becomes totally clear: Tamora did not become a Roman. She does not care at all about Roman values, traditions or honor; the cause of Rome does not even affect her life. In contrast to Titus, the current "Roman" empress Tamora thinks only about her own cause, and she gives priority to her family, not to Rome. While Titus was merely following a tradition when he killed Tamora's first son, Tamora is now led by animalistic desire. It becomes apparent that, in contrast to the Romans, the barbarians do not need the Roman system in order to function or act. This fact already signals the beginning of the end of the Roman system and, hence, the end of Rome. The next scene, II.4, seems to move even further away from Rome. The stage directions describe the beginning of this scene as follows: "Enter the Empress' sons [Chiron and Demetrius] with Lavinia, her hands cut off and her tongue cut out, and ravished" (II.4 stage directions, 167). Imagined while reading, this scene appears grotesque and ludicrous; but for an audience that sees an actress presented according to this description, it is a horrifying and shocking moment. It might be difficult for them even to look at the bloody portrayal of a violated, suffering Lavinia. This scene reveals and highlights the terrible truth about the barbarians: Chiron and Demetrius even take a moment to make fun of Lavinia before they leave. After Lavinia is found by her uncle Marcus in the forest, the location changes back to Rome and to the trial of Titus's sons Quintus and Martius.
4. "That Rome is but a wilderness of tigers?"
But this time the depiction of Rome seems to have changed, and Titus actually starts doubting. One could say that this third act pictures the beginning of the decay, because the distinctions between inside and outside Rome, between Roman values and barbaric actions, start to blur. The turning point for Titus's character comes when his sons are falsely accused of killing Bassianus. Titus starts to question the idyllic Roman world of the first act; the audience, which was able to see all the actions of the second act, started questioning it before Titus did. If one now takes a closer look at the representation of Roman values and traditions in the first act, one can recognize an illusion, safely hidden behind the glamour of the triumph. It eventually turns out that the trigger for Tamora's revenge was a Roman tradition: Lucius insists in the first act on the tradition of "Ad manes fratrum" (I.1.101; 148). The sacrifice of Tamora's eldest son is an important part of Roman tradition; it has to be done, and no Roman would question it, even though Tamora tries to point out its barbaric and pointless function. In addition, Tamora explains in the second act that Lavinia's subsequent rape happens because "fierce Andronicus would not relent" (II.3.165; 164). It becomes clear that Titus's suffering therefore has its roots in a Roman act and is not based solely on barbaric influences.
Grace Starry West writes in her essay "Going by the Book: Classical Allusions in Shakespeare's Titus Andronicus" about these specific limits and dangers of the prevailing Roman traditions. Shakespeare uses various allusions to Roman literature throughout the play, and many of them are connected to the prevailing brutality. West points out that "it is surely significant that none of the non-Romans ever seems to draw upon his own tradition for a brutal act; it is always a Roman source or in some way connected with Rome" (West, 75). Accordingly, Lavinia's rape, too, is constructed around the Roman narrative of Lucrece. The "Roman-ness" of the play is again underlined, and Roman ideas are presented as the real origin of the barbaric actions in the play. The most notable example of a Roman source in this play is Ovid's Metamorphoses, a work to which the characters refer on many occasions. Titus starts to realize the "Roman illusion" when his sons have to go to trial. One might think that Titus at first appears very inconsistent: first he slew his own son for Rome without any hesitation, but now he pleads for mercy for his other sons, who are about to be condemned for allegedly killing the Roman emperor's brother. This inconsistency shows the change in his character: Titus has realized that Rome is not what he thought it was, but a phantasmagoria. His Roman identity is challenged because he suddenly recognizes a confusion of the given rules. His doubts become clear in his question "That Rome is but a wilderness of tigers?" (III.1.54; 169). This image of the tiger can again be related to Tamora: she was able to find a place in Rome without following the rules of Rome's system. Rome is unstable, and those who can survive without rules are now the powerful ones. Traditions lead to the decay of Rome and of Titus's family. Rome is no longer a state to Titus but a "wilderness," more akin to the wild forest than to the empire. Titus even starts to question the virtue and necessity of the Roman wars. After seeing his ravished daughter he says: ...
By LIONEL W. HINXMAN, H.M. Geological Survey. THERE is perhaps no more striking group of mountains in the British Islands than those strange isolated masses of red sandstone which rise like huge monoliths from the tumbled grey sea of primitive gneiss along the western seaboard of Sutherland and Ross-shire. They have been graphically described by Macculloch, who, writing at a time when the beauties of Highland scenery were yet undiscovered, characterises them in the following words:—"Round about there are four mountains, which seem as if they had tumbled from the clouds, having nothing to do with the country or each other, either in shape, material, position, or character, and which look very much as if they were wondering how they got there. Which of them all is the most rocky and useless, is probably known to the sheep; human organs distinguish little but stone,—black precipices when the storm and the rain are drifting by, and when the sun shines, cold, bright summits that seem to rival the snow." Hugh Miller, who was perhaps the first to perceive the unique character of these mountains from a scenic point of view, has drawn their picture with a pencil dipped in glowing colours, and invested them with a singular and poetic charm; while his description has been equalled, if not surpassed, by Dr A. Geikie, in his "Scenery of Scotland." Rising directly from the gneiss plateau, which, though carved into innumerable glens, hollows, and ridges, yet preserves in its eminences a tolerably uniform level, these heights possess more of the true mountain form than most of our Scottish hills, where the eye is gradually led up from spur to spur to the culminating peak, which often rises little above the surrounding ridges. Here, however, the whole mass and height of each mountain is taken in at a glance; while their strange isolation, and incongruity, both in form and colouring, with their surroundings, give to these peaks a fascination and impressiveness which we look for in vain amongst such mountain masses as those of the Cairngorm range. It is hard to decide to which of the group the superiority should be given. Quinag, with its mile-long wall of precipice fronting the western sea, and the magnificent bastions of Sail Garbh; the twin peaks of Coul Môr, one keeping guard over the depths of the Corrie Dubh, the other frowning above those mighty terraces which fall, in steps of a thousand feet, down to the wild loneliness of Gleann na Laoigh; the towering cone of Coul Beag; the fantastic pinnacles of Stack Polly; the long serrated ridge of Ben More Coigach,—have all their particular charm. Suilven, however, though yielding in height to most of his neighbours, yet combines more of the characteristics of a true mountain than can perhaps be found in any one of them. Few mountains, too, present a greater diversity of aspect.
The long knife-edged crest, deeply cloven in three by narrow couloirs, that rises above the lonely shores of Loch Veyattie; the double peak that starts up against the horizon, and arrests the attention as one approaches Alltnagealgach from the east; or, most striking of all, the wonderfully symmetrical cone that looks out over Loch Inver and the sea; in days of storm, when the black peak looms for a moment through the flying cloud-wrack, and the ragged mist swirls round the crags; or flaming up in the long afterglow of summer evenings, a pyramid of fire against the soft pale green of the eastern sky,—from every point of view, and under every condition, Suilven will always hold his place as unique amongst the mountains of the west. Suilven lies in the heart, and is indeed the sanctuary, of the deer forest of Glen Canisp, from which, at the time of our ascent, both tourists and geologists were rigorously excluded, for the then lessee was of the opinion that the latter at least were "of no use but to frighten the deer and upset the Bible." From the position and comparative inaccessibility of the mountain the ascent is not often made, and during the two previous summers, spent in surveying the surrounding country, I had cast many longing looks towards the formidable eastern peak, said to be very difficult to any one but an expert climber. It was not, however, till our third summer in these regions that we began to realise that our campaign in Assynt was drawing to a close, while Suilven still remained unconquered. We were then staying at the farmhouse of Achumore, which lies about two miles west of Inchnadamph, delightfully situated among the green flower-covered knolls and hollows of the limestone plateau that slopes up in alternate grassy lawn and miniature escarpment from the northern shore of Loch Assynt to the grey stony flanks of Glasven and Ben Uidhe. Past the farmhouse flows a burn of the purest water, which has its source in two powerful springs at the foot of Glasven, and is, even at its birth, a stream of considerable volume. These springs, supplied from the subterranean chambers of the limestone, seem almost unaffected by ordinary rain or drought; and the burn flows, in a perennially clear and full stream, through the grassy meadows of Achumore, gay in summer with purple orchis and yellow globe-flower, and, dashing over each successive escarpment in a series of miniature waterfalls, mingles with the waters of Loch Assynt in Ardvreck Bay, beneath the shadow of the old castle. It was from these pleasant quarters that I, with my friends H— and C—, started, on a lovely morning in early June, for the long-contemplated climb over the ridge of Suilven. We had determined to drive as far as Loch Awe, which lies about six miles eastwards of Inchnadamph, on the road to Lairg, and is the nearest point on any accessible road to the eastern end of the mountain.
Ardvreck Castle, the ruined house of Calda, and the comfortable hostelry of Inchnadamph—well known to every angler who has visited that paradise of the trout-fisher, Western Sutherland—were soon passed; and as we bowled along the smooth road beneath the grey cliffs of Stronchrubie, where the goats were picking their way along invisible ledges, the crisp morning air, filled with the music of bird voices,—the cheery crow of the grouse cock, the wild cry of the peregrine wheeling about the crags overhead, the whistle of curlew and greenshank along the river flats—produced in one that indescribable feeling of enthusiasm with which one starts for a mountain expedition in the Highlands. An hour's drive brought us to the shores of Loch Awe, where the trout were rising merrily along the edge of the reed-beds, dimpling the glassy surface, in which each little wooded islet lay reflected as in a mirror. Here we left our vehicle, and, crossing the Loanan burn at the point where it issues from the loch, began the ascent of the long quartzite slope, thickly strewn with boulders and moraine débris, which forms the eastern spur of Canisp. A rough and tedious climb of about two miles brought us to the crest of this subsidiary ridge, and, calling a halt, we sat down to enjoy a rest and a pipe before descending into the deep glen that lay between us and our goal. At our feet the ground fell in an abrupt escarpment to the plateau from which rises the long, steep, southern face of Canisp, stretching away on our right; its talus slopes, red with debris fallen from the porphyry precipices, which girdle it with successive lines of battlemented crag that are relieved here and there by greener spots where the alternating slopes of more yielding sandstone are covered by a scanty vegetation. Away on our left, beyond the long trench-like loch—Loch Fada—which fills the narrow glen beneath, we could catch the glitter of the sunlight on the bays of Cama Loch and see, rising beyond, the green knolls and white houses of Elphin; while farther still in the distance stretched the smooth heather-covered slopes of the Cromalt Hills. Right in front, on the farther side of the glen, rose the great northern wall of Suilven; and as we looked at that seamed and rugged precipice, we realised that a stiff task lay before us. No time was to be lost, however, so knocking out our pipes, we scrambled down the steep descent to the plateau below, and keeping for a mile or so along the top of the cliffs that rise from the northern shore of Loch Fada, finally descended into the gloomy depths of Glen Dorcha (the glen of darkness), and crossed the stream that connects Loch Fada with Loch Ganimhich. The south side of the glen is here very steep, and overgrown with long heather, than which there is nothing more trying to climb through, one's boots slipping on the stems when inclined downwards in a peculiarly aggravating manner. However, the top was reached at last, and a tramp of nearly two miles over well-polished knolls of grey gneiss, interspersed with peaty flats and small shallow lochans, brought us to the foot of the eastern peak, where the real work of the day was to begin. For the benefit of those who are unacquainted with Suilven, a brief description of the form of the mountain may here be given. Rising steeply from a comparatively even base, it sweeps up rapidly in successive ledge and precipice, presenting an almost unbroken wall of rock save where the mountain torrents have cut deep gashes down its sides.
In fact, looking at the mountain from a little distance either on the north or south, it appears as if these formidable-looking couloirs were the only possible means by which the top could be reached. The crest of the mountain forms a ridge about a mile and a half in length, divided by deeply-cut clefts into three peaks of unequal height. These are known respectively as Meall Bheag (little hill), Meall Mheadhonach (middle hill), and Caisteal Liath (the grey castle). The latter forms the western extremity of the ridge, overlooking Loch Inver, and is the highest of the three, the Ordnance cairn giving the summit as 2399 feet above sea-level. Meall Mheadhonach is about 100 feet lower, while Meall Bheag is lower still. The clefts—to which the striking appearance of the mountain, when seen in flank, is chiefly due—mark the position of two faults which cut through the ridge from north to south, letting down the sandstone strata in each instance to the west, though the forces of denudation have long since obliterated all difference of level at the summit, which at the present time is in each case actually lower on the upthrow, or unmoved side, of the line of fracture. A probable explanation of this fact may be found by supposing that a band of harder rock was successively let down a step, and thus the wasting and wearing down process went on more rapidly in the softer strata on the upper or eastern side of each line of fault. The natural drainage of the hill, taking advantage of the course of these faults, has cut deep gullies filled with loose debris down the talus slopes. This debris is inclined at so steep an angle that a touch of the foot is often sufficient to set the whole mass in motion. Where, however, the lines of fracture cross the ridge, one side of each cleft forms a more or less perpendicular wall of rock, the other a steep broken slope; and it is in crossing these nicks that the only real difficulty of the climb is found. The mountain is composed throughout of red gritty sandstone, which generally gives good foothold, and, though apt to crumble in places, is never slippery. The sandstone lies in almost horizontal beds of nearly uniform thickness, which can be traced, like lines of masonry, along the sides of the hill, and are carved along the wind-swept crest into a thousand forms of bastion, turret, and pinnacle, thus giving that architectural appearance which is so characteristic of these sandstone mountains. The western peak is symmetrically dome-shaped, and plunges down at its farther extremity in an almost perpendicular precipice to the talus slope, which sweeps out, in bold parabolic curves, from the foot of the cliff to the gneiss plateau below. Meall Bheag, though formidable enough, is less precipitous on its outer side, and, rising from a considerably higher level to a considerably lesser altitude, cannot compare in grandeur with the great mural precipices of Caisteal Liath. To reach the top of Meall Bheag was now our aim, and, after reconnoitring it on all sides, we determined to attack the peak at the south-east corner, where the first slopes seemed less steep than elsewhere. A tolerably easy scramble up the grassy incline, strewn with fallen blocks of sandstone of all sizes, brought us to the foot of the escarpment, where the real climb might be said to begin.
Precipitous though this part of the hill appears when seen from a distance, it is yet so broken into ledge and terrace by the unequal weathering of the sandstone courses, that to a firm foot and steady eye it presents no greater difficulty than that involved in going up a somewhat steep and irregular staircase, with steps varying from one to three feet in height. Occasionally, however, a higher step of six feet or more blocked the way, and had to be followed along until a break, or a succession of convenient crevices, was found, by which it could be surmounted. In this way, by a system of judicious zig-zagging, we soon reached the top, which forms a nearly flat plateau, covered with scanty grass and loose sandy debris. Crossing to the western end, where it overlooks the cleft between Meall Bheag and Meall Mheadhonach, we became aware that between us and our next goal there was indeed a great gulf fixed. The cliff on the east side of this gully is not only vertical, but actually overhangs, as can be distinctly seen from any point on the south side of the mountain; and to get down on to the narrow saddle that bridges the chasm between the two peaks seemed at first an impossibility. Of course we could have solved the problem by going down again to the foot of Meall Bheag, and ascending Meall Mheadhonach by means of the dividing cleft. But this was an ignominious way out of the difficulty not to be entertained for a moment. We had come out to climb Suilven from end to end, and climb him we would. So, after craning over the horrid gulf for some little time, and examining the rocks on all sides, we came to the conclusion that nothing but a goat could get down there, and that the position must be turned in flank, or not at all. Crossing over, then, to the northern side of the peak, we let ourselves cautiously over the edge of the cliff, gradually worming our way down from ledge to ledge wherever a good opportunity for a drop occurred, but always working westwards towards the cleft, until we found ourselves almost immediately underneath the overhanging rock from which we had just before looked down. This was a bit of work requiring great caution and a steady head, for at nearly every point the cliff fell sheer down to a depth of several hundred feet, and a slip at any time would have been fatal. Otherwise the foothold was good, and the ledges always sufficiently broad to enable one to move along with comparative safety, though here and there we had to crawl on hands and knees, the shelf above projecting too far to allow of walking upright. However, all went well, and, one after another, we crept round the last corner, and established ourselves on the narrow rock of porphyry which connects the two peaks. Looking down the tremendous gash, through which the wind was sweeping with fearful force, we saw the distant landscape set, as it were, in a narrow frame of perpendicular walls that plunged down on either side. The entire absence of middle distance, and the immense extent of atmosphere through which one looked to the country beneath, gave a very curious and striking character to this mountain picture, enhanced by the startling contrast between the dark walls of the cleft and the sunny landscape far below. In spite, however, of the wonderful view, this was too draughty a spot in which to linger, and we were soon attacking the farther side of the chasm.
The first few feet surmounted, leading from the neck to the slope above, the rest was not difficult, this side of Meall Mheadhonach sloping at an easy angle compared with the face that we had just come down. A climb of about 400 feet brought us to the crest of the middle peak, which forms a narrow ridge, in places less than a yard in width, and falling abruptly on either hand to the edge of the cliffs that flank each side of the mountain. So narrow was the path, and so furious the wind that swept across it, that we found it advisable to descend a little on the leeward side, and thus escape the fierce gusts that threatened at times to sweep us off our feet. At the highest point of the ridge there is a shepherd's cairn, and round this were scattered many eagles' casts,—oval concretions of wool, hair, and feathers mixed with fragments of bone, which the eagle, like all birds of prey, throws up, after assimilating the more digestible parts of its food. We also found a few feathers, which we carried off as trophies, though we were not fortunate enough to see any of the birds, a pair of whom at that time had their eyrie in a high rock in the glen below. Golden eagles are still tolerably plentiful in this part of Sutherland, and being preserved are believed to be increasing in numbers. My friend P—, who a few days before our expedition had observed the birds leave the eyrie, has seen as many as four eagles, probably a pair with their young, hunting together in Glen Canisp; and I have watched them sailing in lazy circles for hours together round the topmost point of Quinag. The sea eagle is now much more rare, but a pair used to build regularly on the cliffs near Cape Wrath, and another near the ... At the western end of the ridge we came to another rather nasty bit,—a drop over several feet of perpendicular rock on to the slope that leads down to the Bealach Môr, as the col between Meall Mheadhonach and Caisteal Liath is called. This, however, I believe, might have been avoided by going a little farther down the ridge on the south side, and working round the corner as we had previously done on Meall Bheag. The rest of the descent to the Bealach was easy enough, and, having reached the ruined wall that here crosses the ridge,—put up at the time when Glen Canisp was a sheep farm to prevent the sheep from straying on to the dangerous parts of the mountain,—we called a welcome halt for luncheon. Half-an-hour sufficed for this and the necessary pipe, and climbing leisurely up the slope at the western end of the col, we were soon standing on the dome-shaped eminence that crowns the great cone of Caisteal Liath. Here, for the first time, we stopped to take a long look at the magnificent prospect that lay before us. Beneath our feet, as we looked out to the west, lay the houses of Lochinver, fringing the sheltered bay, beyond which the wide Atlantic stretched away to where the long blue line of the Outer Hebrides lay like a cloud along the western horizon. On the north, the long wall of Canisp cut off much of the view, but we could see on the left the great precipices of Quinag, and the wide rolling expanse of hill and valley, studded with innumerable lochans, which stretches away from the northern shores of Loch Assynt to the low bare promontories of Stoer and Ardvar.
To the east, the sharp peak and great corrie of Ben Dearg showed above the smooth contours of the Cromalt Hills; and farther away, against the south-eastern horizon, rose the beautiful cones of An Teallach in Dundonnel, the highest of the sandstone mountains. Turning to the south, we looked across the lonely waters of Loch Veyattie straight into the profound depths of Corrie Dubh, that magnificent amphitheatre carved out of the northern face of Coul Môr. Beyond the narrow winding shores of the Fionn Loch, and the line which marked the deep valley of the Kirkaig, lay the broad expanse of Loch Skinaskink, dotted with wooded islets, and backed by the graceful cone of Coul Beag and the splintered spires and pinnacles of Stack Polly. Behind them rose the long ridges and needle-like peaks of Ben More Coigach and the Fiddler; while far away to the south-west stretched the Rhu Coigach and the scattered archipelago of the Summer Isles. The atmosphere was not clear enough for a very distant view, but beyond the faint line that showed where the Cailleach Head and Greenstone Point stretched into the Atlantic, we could just catch the dim outlines of the hills of Gairloch and Loch Maree. But we had now to think about turning homewards; and while C— sat down to make a sketch, H— and I went to prospect the farther end of the peak, fired with the wild idea of climbing down the western face, and thus really traversing the mountain from end to end. But after scrambling down for some distance, we found ourselves brought up suddenly by a sheer wall of rock plunging straight down for several hundred feet, which effectually put an end to our hopes in that direction. So rejoining C— on the top, we determined to take the first practicable gully on the north side, and trust to chance that it would lead us to the foot of the hill. Down we went, the loose debris clattering and sliding under our feet, and in a very short space of time—by dint of glissading with the stones, when practicable, and clinging to the rocky side of the cleft at the steepest parts—found ourselves within twenty feet or so of the foot of the cliff. Here our progress was barred by a miniature waterfall, trickling over the nearly vertical rocks, green and shiny with moss and liverworts, and making a very unpleasant, if not impossible, place to get down. However, retracing our steps for a little way, we found a branch gully, which afforded an easy path down to the foot of the precipice. Our hard work was now over, and, rattling down the talus slope, a rough walk of half-an-hour brought us to Glen Canisp, and crossing the stream just above Loch-analitain Duich, we struck the forest path at Suileag. From this point we had a good road under our feet, and the three miles over An Leathad to the Inver were soon accomplished. Crossing the river at Little Assynt, we found our trap awaiting us at the shepherd's house, whence a drive of nine miles along the shores of Loch Assynt brought us back to Achumore about eight P.M., in good time for a very acceptable and well-earned dinner. The distances traversed are roughly as follows: Inchnadamph to Loch Awe, 4 miles; Loch Awe to foot of Meall Bheag, 6 miles; along the ridge to the west end of Caisteal Liath, 1½ miles; from the foot of Caisteal Liath to the road at Little Assynt, about 5½ miles; Little Assynt to Inchnadamph, 10 miles—total, 27 miles, of which 14 can be driven. Suilven can also be reached very easily from Lochinver, the route taken being by Glen Canisp Lodge and the forest path to Suileag.
There is no difficulty whatever in reaching the top of Caisteal Liath (the western peak) by going up the Bealach Môr, either from the north or south side, and it is only in crossing the gap between the eastern and middle peaks that any real difficulty or danger is to be found.
Imagery in Hemingway's Cat in the Rain - Essay Example
A knock at the door brings the maid with a cat in her hands, which the hotel owner asks her to take to the American wife. This short story of Hemingway's clearly shows one aspect of married life, which is enhanced through the use of imagery. The story opens with a lovely description of the scene outside the couple's hotel, with its view of the sea and the beautiful scenery that artists cannot resist painting. After this, Hemingway begins to build up the situation in which the couple find themselves: rain dripping from the palm trees (Hemingway 1), the motor cars gone (1), and the square empty (1), all of which stands in contrast to the previously described beauty of the place. This contrast can be read as Hemingway's portrait of the couple: when they first married, everything seemed to be well between them, but they are now confronted with the difficulty of making their relationship work because of their individual differences, which is further depicted in the succeeding passages. Hemingway's use of the cat, which is trying to make herself so compact that she would not be dripped on (2), can be directly connected to the emotional suffering the woman is experiencing. It should be noted that, like the cat, she is battling the coldness of her husband and trying to make him understand what she needs.
Aztec
The Aztecs lived in the city of Tenochtitlan, set in a fertile basin around 50 miles long and as wide. Surrounded by mountain ranges and several volcanoes, the Aztecs had an abundant supply of water. At 8,000 feet above sea level, the days were mild and the nights cold during much of the year. The Aztecs' name means "heron people" and is derived from their legendary homeland to the north, called Aztlan. Their language, Nahuatl, belongs to the same linguistic family as Shoshonean, a tongue spoken among the Indians of the United States. The Aztecs' principal crop was maize, which was typically cooked with lime, then ground to make dough and patted into tortillas; other chief crops were beans, squash, tomatoes, cotton and chilies. The maguey and agave were used for string, sacks and shoes, and as a substitute for cotton in clothing. The juice of the maguey was made into a mild form of alcohol called pulque, the ceremonial drink. Only the old men of the council were allowed to drink pulque freely; the younger generation could not become intoxicated except at certain religious feasts, and drunkenness was considered a serious offense, even punishable by death. Aztec society was organized into clans, and every family was allotted sufficient land for its support; if no one else in the family remained alive, the land was returned to the clan.
In urban communities the land was held in common: each group, called a calpulli, was made up of a few families that together owned a parcel of land. Part of the crop was given to the state as a tax; the rest was sold, traded or kept for the family's own use. There were two kinds of farmer. First were the general field workers, who were in charge of preparing the soil, breaking up clods, hoeing (with the coa digging stick), leveling, setting boundary markers, planting, irrigating, winnowing and storing grain. The second kind were the horticulturists, whose work was the planting of trees, transplanting, crop sequences and rotations, and a supervisory role, for they were required to read the tonalamatl almanacs to determine the best time for planting and harvest. One of the unusual features of Aztec agriculture was the floating gardens. These gardens were built by digging ditches in squares or rectangles and heaping mud on the area which the ditches enclosed; the mud was then held in position by sticks and branches of trees. This kind of agriculture can still be seen today at Xochimilco, a few miles south of Mexico City. As in most cultures there were domestic animals; in Aztec culture these were turkeys, ducks and dogs. The dogs were raised as food and were considered a great delicacy. The wild creatures that were eaten included hare, deer, gopher, iguanas, snakes, turtles, lizards, insect eggs, many kinds of frogs, larvae, grasshoppers, ants, worms, tadpoles and forty kinds of water birds. Corixid water insects, a plentiful protein source, were gathered from the lake, pressed together in balls, wrapped in corn husks and boiled. The metal specialists were gold-, silver- and coppersmiths. In the same class were the lapidaries, and the feather workers who made elaborate designs of feathers for capes, crowns and shield covers. The feather workers (Amanteca) achieved the highest respect and esteem, living in communities of their own, and their techniques were passed down from generation to generation; their work was reserved for the nobility and the highest-ranking officials. Among the emblems of Aztec wealth were jade, turquoise and the plumes of the quetzal. With these prized possessions one was considered rich, yet only the nobility or high-ranking officials could obtain them. To acquire them, a heavy tribute was imposed on conquered peoples; other tribute included gold dust, cochineal dye, shell, cocoa beans and produce such as beans and maize. In Aztec culture cocoa beans were used as a currency: items such as quetzal feathers, gold, loads of maize or slaves were valued in terms of cacao beans. In Tenochtitlan the prices were higher ...
Friday Roundup: Admitted Students Day, Alumni Day, and SIPA Faculty
We've been busy talking to so many fantastic students these past few weeks, past, current and future! Graduation for the SIPA Class of 2018 is coming up in a few weeks, and it's bittersweet for us to watch the students we've known since they attended their first info session graduate and go off into the world.
On the other side, we've talked to many of our newly admitted students as they figure out what life at SIPA will be like. We'll be giving some peeks into student life next week on the blog. Until then, here's what we've been up to at SIPA: It's been 10 days since Admitted Students Day, our annual open house for the new MIA, MPA, and MPA-DP incoming class. We welcomed the SIPA Class of 2020 to campus for the day, allowing them to get a feel for the vibrant and busy SIPA community. This past weekend was Alumni Day, where past students reunited for informative panels and to catch up. Clockwise from the top left is the SIPA Class of 2008, Class of 2013, Class of 1998, and Class of 1993: two classes celebrating their 20- and 25-year anniversaries! The years our alumni spent here as students led to lifetime bonds around the world. Finally, we're giving a huge congratulations to economist Richard Clarida, who was nominated as vice chairman of the Federal Reserve Board, the second-ranking position in the United States central banking system. We're excited to see the SIPA community grow in so many diverse directions. Wishing you all a great weekend!

Saturday, May 23, 2020

The next presidential election will be one like no one has ever seen before in terms of campaign funding and expenses. Even now, the GOP presidential primary races are already showing signs of how money will be no object for their presidential candidate. This seemingly limitless budget exists for these candidates thanks to the so-called Super PACs (Political Action Committees). These Super PACs are allowed to raise independent financing for the presidential campaign, without any budgetary ceilings. The inner workings of such a committee have left a bad taste in the mouths of the voters, even though very little is known about the actual history of and reasons for the existence of the Super PACs. This paper will delve into the committees… That is one reason why the public has come to reject the idea of the Super PACs. It has turned the political campaign into a shallow, reality-television, mud-slinging type of contest from which the candidates can never return. The ads run in the newspapers and on television and radio stations cost these candidates and Super PACs money that could have been used for better political ends, such as contributions to charitable organizations by the candidates or their support groups on their behalf. That sort of act would have had a greater political impact upon the voting public than an ad campaign explaining the ills of Newt Gingrich. Even more sickening is the fact that most of the candidates will disclaim any participation in negative campaign efforts because of the independent nature of the Super PACs. The candidate can deny any involvement in the act, all the while coordinating with his Super PAC under the radar of the mass media. These negative campaigns leave the candidate free and clear of any involvement, as all the Super PAC has to do is run the ad with a clear disclaimer absolving the candidate the ad supports of any wrongdoing, because the ad was not sanctioned by the candidate or political party. In other words, Super PACs give a voice to people with money. All corporations that have money to give are giving millions and millions of dollars to candidates across the board.
Independent voters don't have that money to donate, so their…

Related essays listed on the page: "Super PACs: The New Kind of Committee That Operates Politically" (Brianna Goodman); "The Super PAC"; "Is Voting the Only Way an Average American Can Vote or Influence a Party?"; "The United vs. Federal Election Commission"; "Global Economy and the American Dream"; "The Daily Show on the Congressional Record"; "Judges Should Be Appointed to Preserve Judicial Independence"; "The Debate of Corruption Versus Free Speech"; "The Court's View of the Election Process"; and "Money in Politics: Finding a Fair Approach for Campaigns".
Tuesday, May 12, 2020

Breast Cancer

The thought of having breast cancer is frightening to every woman, and devastating to some. However, ignoring the possibility that you may get breast cancer, or avoiding the things you should do to detect and avoid cancer, can be even more dangerous. Breast cancer is a devastating disease that may affect one out of nine women in the United States. This year alone, a patient will be diagnosed every three minutes, and a woman will die from breast cancer every thirteen minutes. Unfortunately, little is still known about the disease's cause or cure. Currently the only means of increasing a breast cancer victim's chance of survival is early detection through annual breast exams and education about the disease. … Of the cases of breast cancer diagnosed every year, 70% of the patients have none of the risk factors. It is important to understand what the real risk factors are and how they affect the chances of developing breast cancer.
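To see what such relative risks mean in absolute terms, here is a worked example with round, assumed numbers rather than statistics from any particular study: if a woman's baseline lifetime risk is about 1 in 8 (12.5%), a factor that raises her relative risk by 25% (in the middle of the 20-30% range discussed below) changes her absolute risk as follows:

12.5% x 1.25 ≈ 15.6%

That is an increase of roughly three percentage points, or about three additional cases per hundred women over a lifetime.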
Some of the main risk factors that women should be aware of include: a family history of breast cancer, increased age, and any previous diagnosis of other breast or ovarian cancers. While a woman's family history and genetic makeup cannot be controlled, there are certain risk factors that can be modified in an attempt to reduce the risk of developing breast cancer. Many experts believe women could prevent a significant number of breast cancers with lifestyle changes. One of those factors is being overweight. Numerous studies have linked increased weight and height with a woman's risk of developing breast cancer. After a woman goes through menopause, being overweight can increase her risk by about twenty to thirty percent. Excess body weight and extra fat increase the production of estrogen outside the ovaries and contribute to the overall level of estrogen in the body. Therefore, making healthy lifestyle choices can be good for a woman at any time in her life. Weight control also ties in with the amount of physical activity a woman has in her daily life. Exercise may lower a woman's lifetime risk of acquiring the disease. Doctors believe that activity reduces…

Related essays listed on the page: "Breast Cancer and Cancer Prevention"; "The For Breast Cancer Action"; and several essays titled "Breast Cancer: Cancer and Cancer".
Wednesday, May 6, 2020

The Supreme Court may, in its discretion, grant special leave to appeal from any judgment, decree, determination, sentence, or order in any cause or matter passed or made by any court or tribunal. Art. 136 confers a discretionary power on the Supreme Court to interfere in suitable cases, such as where there has been a breach of natural justice by the order appealed against, or in exceptional cases. The Supreme Court will intervene if there has been a resultant failure of justice or a violation of the principles of natural justice, or if the order was passed without a proper appreciation of the material on record or the submissions made; in such cases interference under Art. 136 is warranted. The Supreme Court grants leave to appeal in criminal matters when exceptional and special circumstances exist, substantial and grave injustice has been done, and the case in question presents features of sufficient gravity to warrant a review of the decision appealed against, or there has been a departure from legal procedure such as vitiates the whole trial, or if the findings of fact were such as were shocking to the judicial conscience of the Court. It would interfere where the High Court's order results in a gross miscarriage of justice. That a special leave petition against an interim order is maintainable. The Supreme Court exercises its jurisdiction under Art. 136 of the Constitution in respect of an interlocutory/interim order in special circumstances to prevent manifest injustice or abuse of the process of the Court, or where the order is unsustainable on its face, or where the interim order passed by the Division Bench of the High Court is, on the facts, perverse in nature or unreasonable. Where the interim order was not made in equity, interference by the Supreme Court was called for. That the reasons for the decision must be given. A decision affecting the rights of people without assigning any reason cannot be accepted as a procedure which is fair, just, and reasonable; the requirement of "reasons" may also be implied in the principles of "natural justice". Absence of reasoning is impermissible in judicial pronouncements. It is the reasoning alone that can enable a higher or an appellate court to appreciate the controversy in issue in its correct perspective, and to hold whether the reasoning recorded by the court whose order is impugned is sustainable in law and whether it has adopted the correct legal approach. To subserve the purpose of the justice delivery system, therefore, it is essential that courts record reasons for their conclusions, whether disposing of the case at the admission stage or after a regular hearing; proper reasoning is the foundation of a just and fair decision. Failure to give reasons amounts to denial of justice. When the reason of a law once ceases, the law itself generally ceases. That an order passed in violation of natural justice is void. Breaches of the rules of natural justice must have the effect of producing void decisions. Any action in violation of the principles of natural justice is a nullity and is ultra vires, and hence suffers from jurisdictional error. Thus, an order which infringes a fundamental freedom, passed in violation of audi alteram partem, is a nullity. That the decision of the subordinate court is in violation of the doctrine of proportionality. The punishment imposed has to be reasonable because of the constraints of Art. 14.
This means that if the punishment imposed is unreasonable, Art. 14 is infringed. The court can thus decide upon the proportionality of the punishment when it is strikingly disproportionate. The penalty imposed must be commensurate with the gravity of the misconduct, and any penalty disproportionate to the gravity of the misconduct would be violative of Art. 14 of the Constitution. The freedom of speech is regarded as "a species of which freedom of expression is a genus". That a company can challenge the violation of its fundamental rights under Article 19 of the Constitution of India. The Supreme Court has stated that the law with regard to a company challenging the violation of its fundamental rights under Article 19 is in a "nebulous state". The Court has gone on to say: "Thus apart from the law, the fundamental freedoms guaranteed by Art. 19, the rights of a shareholder and the company which the shareholders have formed are rather co-extensive and the denial to one of the fundamental freedom would be denial to the other." That intention is necessary for the offence of defamation under Section 499 of the Indian Penal Code. In order to attract the offence of defamation under Section 499 of the I.P.C., mens rea is required, i.e., the publication must be made with the intention to harm the reputation of the person against whom it was directed. The accused must have made the imputation with the intention of harming, or with the knowledge that it will harm, the reputation of the person defamed. Therefore, the intention to cause harm is the most essential "sine qua non" of an offence under Section 499. That a company cannot be held criminally liable for the offence of defamation. In view of Section 3(42) of the General Clauses Act, 1897, a company or association or body of individuals answers the definition of a person. So, prima facie, a company may be prosecuted for defamation. But to invoke Section 499, the defamatory publication must be accompanied by the delinquent's intention to cause harm. A company, however, cannot be said to have the mens rea of forming an intention to cause harm, because a company, being a juristic entity, cannot have a mind. If there is anything in the definition or context of a particular section in the statute which will prevent the application of the section to a limited company, certainly a limited company cannot be proceeded against. Then again, a limited company cannot generally be tried for offences where mens rea is essential. The company is a legal entity which can be prosecuted if it is guilty of acts which make it punishable under the particular criminal statute. So a company cannot be held to have committed an offence under Section 500, I.P.C. That the decision must be given after viewing the publication as a whole. A publication must be judged as a whole. The impact and effect of the imputations, if any, have to be considered against the background of the entire facts and circumstances stated therein. The bane and the antidote ought to be considered together. If in one part of the publication there is something disreputable, but it is removed by the other parts and the conclusions, then the disreputable part alone cannot be taken out in a process of picking and choosing in order to venture a prosecution for defamation.

Saturday, May 2, 2020

Question: Describe your experience of studying Taxation.

Answer: When I attended the first class on Taxation, the topic was Family Trusts, and right from the beginning I got the concept wrong.
I was thinking along the lines of trust among family members, as I had no idea about the topic. It was all confusing. I had not studied my course notes, and I found that most of the concepts the lecturer taught in class were beyond my knowledge. Although I expected difficulties in the taxation classes, as I had little interest in tax matters, I had not anticipated that my skill in dealing with numbers could be so useful in this course. From here onwards, I started taking up the challenge, especially because I had confidence in my strong analytical and numerical skills (Barkoczy, 2015). From that moment, I decided to pay more attention to the lectures and, setting my apprehensions aside, to take the help of two of my colleagues whom I found to be quite proficient in the subject. However, I soon found that paying attention to the lectures and seeking help from colleagues was of little use on its own: I had to be well prepared before attending the lecture. So I started reading the course notes before attending each class. Soon I began reaping the benefits of this strategy, as I could clear my doubts with the lecturer whenever I had difficulty understanding a complex problem (Alexander and Fogarty, 2009). My numerical skills now became the foundation on which I understood the theoretical parts of taxation more efficiently. This confidence helped me grasp some of the core concepts, and soon my misconception about Family Trusts was resolved. In fact, I started taking a keen interest in this branch of taxation, as I realised that, taken seriously, it could become my specialisation when I begin professional practice as a Tax Practitioner. I also realised that the more I engaged in solving the problems the lecturer set us, the more it sharpened my focus; this not only gave me clarity about the application of the concepts involved, it also changed my lecturer's perception of me (Renton, 2012). This further boosted my confidence, as I realised that I could get more help from my lecturer in shaping my career. He had experience and knowledge, but I had enthusiasm and commitment. If our enthusiasm about what we are taught leads us towards the successful application of the teachings, it is our commitment that teaches us to resolve complex practical problems (Renton, 2012). I intend to take this ideology with me into my professional practice, and my lecturer encouraged me to keep my focus on getting my training while I am learning, instead of learning while I am training (Barkoczy, 2015). This recommendation will help me cope with case studies and ensure better clarity of concepts through the practical application of theory. My advice to other learners who face the same predicament I faced is to concentrate on the fundamentals of the subject they are learning, as doing so will shorten their learning process (Barkoczy, 2015).

List of References

Alexander, Dr. R. and Fogarty, H. J. 2009. Australian Master Family Law Guide, 3rd ed. CCH Australia Limited, Sydney, NSW.

Barkoczy, S. 2015. Australian Tax Casebook, 12th ed. CCH Australia Limited, North Ryde, NSW.

Renton, N. E. 2012. Family Trusts: A Plain English Guide for Australian Families of Average Means, 4th ed. John Wiley & Sons, Milton, QLD.
Current challenges in understanding and forecasting stable boundary layers over land and ice

- Meteorology and Air Quality Section, Wageningen University, Wageningen, Netherlands

Understanding and prediction of the stable atmospheric boundary layer is challenging. Many physical processes come into play in the stable boundary layer (SBL), i.e., turbulence, radiation, land-surface coupling and heterogeneity, and orographic turbulent and gravity wave drag (GWD). The development of robust stable boundary-layer parameterizations for weather and climate models is difficult because of the multiplicity of processes and their complex interactions. As a result, these models suffer from biases in key variables, such as the 2-m temperature, the boundary-layer depth, and the wind speed. This short paper briefly summarizes the state of the art of SBL research, and highlights physical processes that have received only limited attention so far, in particular orographically induced GWD, longwave radiation divergence, and the land-atmosphere coupling over a snow-covered surface. Finally, a conceptual framework with the relevant processes and, particularly, their interactions is proposed.

The atmospheric boundary layer over land experiences a clear diurnal cycle driven by that of the incoming solar radiation. During the evening transition period, the Earth's surface radiation budget turns negative due to longwave radiative loss, and so the surface cools to a temperature below that of the air above. Consequently, the potential temperature increases with height, producing a stable boundary layer (SBL). SBLs prevail at night, but also during daytime in winter in mid-latitudes, in polar regions, and during daytime over irrigated regions with advection. The SBL is governed by a multiplicity of processes such as turbulence, radiative cooling, the interaction with the land surface, gravity waves, katabatic flows, and fog and dew formation. Despite extensive earlier research, these processes and their interactions are not sufficiently understood, primarily because of their diversity and their general non-stationarity, which prevent an unambiguous interpretation of observations (Mahrt, 2007, 2014; Fernando and Weil, 2010). This ambiguity is a major obstacle to the development of model parameterizations. As a result, the SBL is inadequately represented in weather and climate models (e.g., Beljaars and Viterbo, 1998; Bechtold et al., 2008; Medeiros et al., 2011; Steeneveld et al., 2011; Kyselý and Plavcová, 2012; Tastula et al., 2012; Sterk et al., 2013; Bosveld et al., 2014). For instance, Atlaskin and Vihma (2012) studied the dependence of the 2-m temperature bias in multiple limited-area models on atmospheric stability for a winter period in Europe, and they found a warm bias in the 2-m temperature, increasing rapidly with stability. Some models overestimate surface vegetation temperatures during calm nights (e.g., Steeneveld et al., 2008; Atlaskin and Vihma, 2012), while other models experience unrealistic decoupling of the atmosphere from the surface, resulting in so-called runaway surface cooling (e.g., Mahrt, 1998; Walsh et al., 2008). This contrasting behavior depends on differences in model formulation, resolution, and land-use properties. Furthermore, in order to obtain accurate forecasts of the synoptic flow, atmospheric models generally require a larger turbulent drag at the surface and in the boundary layer than can be justified from field observations (e.g., Holtslag et al., 2013).
Hence the model representation of turbulent transport is generally based on model performance rather than on a physical basis. Unfortunately, the enhanced drag results in an underestimation of the wind turning with height within the SBL (Svensson and Holtslag, 2009). The modeled SBL depth is usually too large, and the low-level jet speed is underestimated, when compared with observations. Also, models appear to underestimate the near-surface temperature and wind-speed gradients and their diurnal cycles (Edwards et al., 2011). These issues typically occur under very stable conditions. Moreover, model results are very sensitive to parameter values in the turbulence and orographic drag schemes (e.g., Beljaars et al., 2004; Sandu et al., 2013), which implies that it is challenging to achieve high model skill for a wide range of states of the atmosphere-soil-vegetation system, and that compensating errors make it difficult to identify deficiencies in individual schemes. For these reasons, an enhanced understanding of the SBL and a more physical representation of the SBL in models are required. The overall aim of this mini-review is to briefly present the state of the art, to highlight recent research activities on physical processes that have received only limited attention so far, and to build a picture of the key processes and their interactions. This review is organized as follows. Section Societal Impact summarizes the societal relevance of SBL processes. Section Physical Processes provides an overview of the physical processes acting in the SBL, their role, their interconnections, and their relevance for different SBL regimes. Section Role of SBL in Climate Debate briefly addresses the role of the SBL in understanding climate change. Finally, conclusions are drawn in Section Conclusion.

Societal Impact

The SBL is relevant to numerous applications in society. For instance, correct forecasting of near-surface temperatures and wind speed may improve road de-icing, as well as timely warnings to the transportation sector for low visibility caused by nocturnal fog or haze (van der Velde et al., 2010; Cuxart and Jiménez, 2011; Bartok et al., 2012). Agriculture relies on accurate near-surface frost forecasts to take measures to protect plants and yields (Prabha et al., 2011). Air quality forecasts and CO2 inverse modeling studies call for reliable estimates of boundary-layer depth, wind speed, drainage flows, and turbulence intensity (Salmond and McKendry, 2005; Gerbig et al., 2008; Tolk et al., 2009). The wind energy sector requires hourly estimates of wind energy production, and thus relies on wind speed forecasts, particularly around hub height at 100 m above ground level (Storm and Basu, 2010). In addition, Bony et al. (2006) found that the polar regions, which are generally stably stratified, are foreseen to warm 1.4–4 times faster than the global average in the period 1990–2090, but a clear reason for this is unknown. Recently, Pithan and Mauritsen (2014) evaluated the relative roles of the possible feedbacks responsible for this amplification. Surprisingly, the ensemble of the "Coupled Model Intercomparison Project-Phase 5" climate models indicates that it is not the ice-albedo feedback that dominates; rather, temperature variations related to the surface energy balance and the vertical temperature structure provide the largest contribution to the amplification.

Physical Processes

The complexity of the SBL originates partly from the multiplicity of the processes involved.
This section summarizes the main processes, their current state of knowledge, and the associated open issues.

Turbulence

The nature of the atmospheric flow is characteristically turbulent, in which eddies of different scales absorb energy from the mean flow. These eddies break up into smaller eddies until they dissipate through the action of molecular viscosity. Eddy motions of all length scales, from millimeters to the scale of the boundary-layer height (of order 100 m for the SBL), transport momentum, heat, humidity, and contaminants. The turbulence intensity is influenced by wind shear and buoyancy. During daytime, solar insolation heats the surface and creates thermal instability and thermals, i.e., buoyancy dominates the turbulent kinetic energy budget. In contrast, in the SBL turbulence is suppressed by buoyancy during calm nights, and is produced only by wind shear. The net result is a precarious balance that is extremely sensitive to changes in the wind profile and the mean temperature profile. Several turbulence regimes have been proposed. Although they differ in formulation (in terms of governing variables and threshold values), they all roughly distinguish between a so-called "weakly stable boundary layer" (WSBL), for which turbulence is the dominant transport process, and the "very stable boundary layer" (VSBL), for which turbulence is relatively weak. Within the WSBL, Nieuwstadt (1984) showed that scaling of local fluxes with the local gradients of wind and potential temperature works satisfactorily. Within the VSBL, a well-established scaling of turbulence variables and thermodynamic profiles is missing (e.g., van de Wiel et al., 2012). Qualitatively, this regime is determined by waves, drainage flows, weak turbulence, and other (sub-)mesoscale motions, which are not necessarily of a local nature. Recently, Mahrt et al. (2012) pointed out that for near-calm nocturnal conditions, significant turbulence is mainly generated by short-term (minutes-long) accelerations of unknown origin. Moreover, observations in the VSBL have identified global and local intermittency of turbulence, but a conclusive framework for this phenomenon is still lacking (e.g., Nappo, 1991; van de Wiel et al., 2003; Costa et al., 2011).

Radiation

The radiation budget of the SBL involves two aspects, i.e., the net radiation balance at the surface (Q*), and radiation divergence within the atmosphere. Q* is governed by the down- and upwelling longwave radiative fluxes. The first is largely determined by the atmospheric temperature and humidity profiles, and the latter is dominated by the surface temperature. Internal variability of these quantities may induce high-frequency harmonics of Q* within the SBL. Moreover, cloud cover variations and the evening transition trigger rapid Q* changes, which are usually challenging to represent in models (van de Wiel et al., 2003). The energy transport by atmospheric radiation depends on the capacity of different atmospheric layers to absorb and radiate energy. This capacity is governed by the temperature of the layers and the concentration of gases that interact with radiation in the relevant range of wavelengths (e.g., water vapor, carbon dioxide, methane). Vertical radiation divergence is greater for larger vertical variations of temperature and especially humidity. Since these variations are large close to the surface, in particular for calm conditions, one may expect substantial radiation divergence near the surface.
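The size of this effect follows from the flux-divergence form of the layer heat budget, dT/dt = -(1/(ρ c_p)) ∂F/∂z, where F is the net upward longwave flux. The following minimal sketch (all values are round assumptions, not observations from the studies cited in this review) converts an assumed flux divergence over the lowest 10 m into a heating rate:

```python
# Minimal sketch: layer-mean heating rate from the divergence of the net
# upward longwave flux. All numbers are illustrative assumptions, not
# observed values from the studies cited in this review.
RHO = 1.2    # near-surface air density (kg m^-3), assumed
CP = 1005.0  # specific heat of air at constant pressure (J kg^-1 K^-1)

def lw_heating_rate(f_net_bottom, f_net_top, dz):
    """Heating rate (K per hour) for a layer of depth dz (m), given the net
    upward longwave flux F (W m^-2) at its bottom and top:
    dT/dt = -(1/(rho*cp)) * dF/dz. If more flux leaves the top than enters
    the bottom, the layer cools."""
    df_dz = (f_net_top - f_net_bottom) / dz
    return -df_dz / (RHO * CP) * 3600.0  # convert K/s to K/h

# Example: the net upward flux increases by 2 W m^-2 over the lowest 10 m.
print(lw_heating_rate(60.0, 62.0, 10.0))  # ~ -0.6 K/h, i.e., cooling
```

Even a flux difference of a few W m^-2 across such a shallow layer thus implies cooling rates of order 1 K/h, comparable to the observed magnitudes discussed next.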
Indeed, numerous modeling studies have reported such a divergence (Ha and Mahrt, 2003; Savijärvi, 2013). Field observations by Hoch et al. (2007) and Steeneveld et al. (2010) (Figure 1) reported radiation divergence values of several K/h in favorable conditions, particularly during sunset. Numerical models were found to substantially underestimate the radiative cooling for the case shown in Figure 1. Future research should clarify whether this model bias results from poor input to the radiation scheme, from the relatively coarse model resolution, or from deficiencies in the formulation of the radiation scheme (Wild et al., 2001; Rinke et al., 2012).

Figure 1. Observed longwave heating rate in three atmospheric layers for a series of clear calm days in May 2006, Wageningen, The Netherlands.

Orographically Induced Waves

Stratified flows allow for the propagation of gravity waves, generated for instance by hills and surface roughness transitions. Here, we limit ourselves to orographically induced waves, whose role in SBL dynamics remains unclear (e.g., Brown et al., 2003). Since NWP models require more drag than is explained by turbulence observations, alternative processes that provide drag are worth examining. Gravity waves generate drag, which might influence the dynamical evolution of the SBL. This mechanism is well understood for large mountain ridges. However, the SBL is shallow, and one can expect that small-scale orography can also significantly influence the SBL flow through gravity wave propagation. Using linear theory, Nappo (2002) indeed showed theoretically that the magnitudes of the wave drag and the turbulent drag can be of the same order for weak-wind conditions. Considering the complexity of real terrain, i.e., irregular hills, an alternative approach to estimate wave drag for these conditions is required. Figure S1 shows the estimated gravity wave drag (GWD) for four contrasting nights during the Cooperative Atmospheric Surface Exchange Study 1999 (Steeneveld et al., 2009). During all nights the estimated GWD is of the same order of magnitude as the measured turbulent drag. During one night (9/10 Oct) the GWD is substantially larger than the turbulent drag for most of the night. In addition, the GWD is highly variable throughout the night, and varies on a timescale close to that of the observed global intermittent turbulence. Overall, these results suggest that orographically induced GWD is a possible candidate to explain the fact that turbulent drag alone is too small in NWP models. The relevance of GWD is further illustrated by Burgering (2014), who studied the sensitivity of a numerical model's large-scale flow development to the application of GWD in the SBL. That study evaluated the model score for sea-level pressure for an 8-day forecast over the Atlantic Ocean and Europe. When a relatively simple approach to account for GWD is implemented (Steeneveld et al., 2008; Lapworth, 2014), the root-mean-square error is reduced by ~4 hPa (~40%) over a large portion of Europe for the studied cyclone. Also, the bias in the modeled cyclone core pressure was reduced by ~66%. Clearly, accounting for GWD in the SBL substantially improves the model accuracy compared to a run without this GWD.
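To illustrate the order-of-magnitude argument, the sketch below evaluates the classic linear, hydrostatic wave stress over sinusoidal terrain, τ_w = ½ ρ N U k h², against a bulk turbulent stress ρ C_D U². This is a hedged sketch: the terrain amplitude, wavelength, drag coefficient, and flow parameters are illustrative assumptions, not values from CASES-99 or the cited studies.

```python
import math

# Order-of-magnitude comparison of orographic gravity wave drag and
# turbulent drag for a weak-wind, strongly stratified night. All parameter
# values are illustrative assumptions, not CASES-99 data.
rho = 1.2            # air density (kg m^-3)
N = 0.03             # Brunt-Vaisala frequency (s^-1), strong stratification
U = 3.0              # wind speed (m s^-1)
h0 = 15.0            # amplitude of small-scale terrain undulations (m)
wavelength = 2000.0  # horizontal terrain wavelength (m)
k = 2.0 * math.pi / wavelength
CD = 5e-3            # bulk drag coefficient, assumed

# Linear, hydrostatic wave stress over sinusoidal terrain; the waves only
# propagate vertically (and exert a drag on the flow) when N/U > k.
tau_wave = 0.5 * rho * N * U * k * h0**2 if N / U > k else 0.0
tau_turb = rho * CD * U**2  # bulk estimate of the surface turbulent stress

print(f"wave stress ~ {tau_wave:.3f} N m^-2")       # ~0.04 N m^-2
print(f"turbulent stress ~ {tau_turb:.3f} N m^-2")  # ~0.05 N m^-2
```

For weak winds and strong stratification the two stresses indeed come out within a factor of a few of each other, consistent with the estimates shown in Figure S1.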
Katabatic Flows

In general, the Earth's surface orography is relatively complex. Katabatic flows are ubiquitous features of SBLs on sloping surfaces that are cooled by a radiation deficit, for example over glaciers. In some areas, katabatic flows can govern the local climate substantially. Katabatic flows are characterized by a pronounced low-level jet and a large near-surface temperature gradient. Hence katabatic flows affect the surface fluxes of heat, moisture, and momentum, and consequently the ice mass budget over glaciers, but also over Greenland and the polar regions. The simplest model of katabatic flow represents a balance between negative buoyancy due to the surface potential temperature deficit, as the driving force, and turbulent drag that dampens the flow. On relatively long glaciers and at high latitudes, the Coriolis effect also influences katabatic flows and induces a cross-slope wind component (Stiperski et al., 2007). This cross-slope wind is balanced by the Coriolis force and turbulent drag. Its vertical scale is larger than the characteristic height of the low-level jet of the down-slope component. The representation of katabatic flows in numerical weather prediction models is a challenging task (Grisogono et al., 2007; Jeričević et al., 2010).
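The simplest balance just described can be made explicit in bulk (layer-averaged) form: equating the along-slope buoyancy acceleration g (Δθ/θ₀) sin α to a quadratic drag C_D u²/h gives an equilibrium down-slope wind u = sqrt(g (Δθ/θ₀) h sin α / C_D). The sketch below evaluates this with assumed, glacier-like parameter values; it is an illustration of the simplest balance only, not a model taken from the cited papers.

```python
import math

# Bulk katabatic-wind sketch: along-slope buoyancy force balanced by
# quadratic surface drag. All parameter values are illustrative assumptions.
g = 9.81        # gravity (m s^-2)
theta0 = 273.0  # reference potential temperature (K)
dtheta = 5.0    # potential temperature deficit of the katabatic layer (K)
h = 20.0        # depth of the katabatic layer (m)
alpha = math.radians(5.0)  # slope angle
CD = 0.01       # bulk drag coefficient, assumed

# g*(dtheta/theta0)*sin(alpha) = CD*u**2/h  =>  equilibrium wind speed u
u_eq = math.sqrt(g * (dtheta / theta0) * h * math.sin(alpha) / CD)
print(f"equilibrium down-slope wind ~ {u_eq:.1f} m/s")  # ~5.6 m/s
```

With these numbers the balance yields a jet of several m/s, in line with the pronounced low-level jets mentioned above.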
Coupling to the Surface

Considering that the turbulent fluxes may vanish from the surface energy budget in calm conditions, the net radiation must then balance the ground heat flux in order to conserve the surface energy. Hence, it is evident that the land-surface coupling is important and should be accurately represented in atmospheric models, and that its complexity should match the complexity of the parameterizations for other processes. Since the coupling with the land surface is an integral part of SBL physics, studies using a prescribed surface temperature, and particularly prescribed fluxes, should be avoided. This aspect is further discussed in Holtslag et al. (2007), who showed within a model intercomparison context that model output variability is strongly reduced when the atmospheric model is coupled to the land surface instead of prescribing the surface temperature. Weather and climate models require numerical values for the heat conductivity, which are highly uncertain at the grid scale, especially for snow-covered surfaces. Dutra et al. (2012) quantified the EC-EARTH model performance and concluded that a correct thermal insulation of the snowpack is essential to improve the realism of the near-surface atmospheric temperature. Moreover, their multilayer snow scheme outperforms the single-layer scheme in deep snowpacks. Furthermore, an increased snow thermal insulation removed a warm bias over snow-covered regions during winter and spring; Cook et al. (2008) reported an analogous sensitivity in which high vs. low insulation led to soil cooling of up to 20 K in winter and 2-m temperature warming of 6 K.

Figure 2 summarizes the mentioned processes and their interactions; herein positive (negative) feedbacks indicate a strengthening (weakening) of the process at the end of the arrow. First we identify the pressure-gradient force, the Coriolis force, cloud cover, free-flow stability, and the deep-soil temperature as external driving variables. Low cloud cover strengthens the net radiative surface cooling, and thereby reduces the surface temperature and builds up the stratification. While the stratification builds up, it is slowly eroded by radiation divergence, which acts to reduce the temperature contrast. On the other hand, an increased pressure gradient raises the wind speed, and consequently the turbulent mixing. Stronger mixing erodes the thermal stratification, resulting in a smaller magnitude of the sensible heat flux in the WSBL. Stronger mixing also tends to deepen the SBL against the free-flow stability and the Coriolis parameter. However, a contrasting direction is found in the VSBL, where a reduced stratification might result in an increased sensible heat flux. In both cases the other surface energy budget components (soil heat flux and dew) and the surface vegetation temperature will be modified. The altered surface temperature establishes a new stratification, which consequently feeds back to the surface radiation balance.

Figure 2. Schematic overview of physical processes in the stable boundary layer over land, including their interactions and positive (——) and negative feedbacks (-----). Gray lines indicate processes that can have either a positive or negative feedback, depending on the state of the boundary layer.

Another feedback loop evolves via the proportionality between GWD and wind speed, which strengthens the cyclone filling rate and thus reduces the pressure gradient and the geostrophic wind. Near the surface, GWD enhances the low-level jet wind speed, providing additional downward turbulent mixing from the jet and thereby moderating the stratification again. Over sloping terrain, cold-air pooling triggers pronounced local temperature effects. Hence the SBL evolution is driven by a complex interplay between a myriad of processes, which presents a challenge for an accurate representation of the SBL in models.

Role of SBL in Climate Debate

The ongoing climate change is mostly observed at night and under stable conditions (Vose et al., 2005). The vertical distribution of the added heat is essential to an interpretation of the 2-m temperature. Recently, Steeneveld et al. (2011) and McNider et al. (2012) performed single-column model experiments in which the impact of an enhanced CO2 concentration on the 2-m temperature was quantified for a wide range of geostrophic wind speeds. They found that feedbacks in the SBL and the land surface provide a 2-m temperature rise that is rather constant over a relatively broad range of geostrophic wind speeds. Apparently, the enhanced downward longwave radiation at the surface alters the surface temperature, reducing the surface stability, which consequently enhances the mixing in the whole SBL. Hence, the vertical redistribution of heat through the SBL may amplify or dampen the 2-m temperature signal. This means that the 2-m temperature as a climate diagnostic needs to be re-evaluated. As a contribution to this discussion, Esau et al. (2012) hypothesized that the spatiotemporal variability of climate change is partly related to the effective heat capacity of the atmosphere, i.e., to the boundary-layer depth. They showed that the largest temperature changes occur in areas with a relatively shallow boundary layer.

Conclusion

The SBL is governed by a myriad of physical processes. The weakly stable boundary layer is dominated by well-developed turbulence that follows local scaling, and this regime can be relatively well modeled and forecast. Within the very stable regime, processes such as radiation divergence, orographic drag, land-surface coupling, and (sub-)mesoscale motions can play a major role in the evolution of the SBL. These processes have not yet been fully understood, and their relative impacts have not yet been quantified. To advance our knowledge of the SBL it is essential that atmospheric models represent these processes as purely as possible.
In terms of model development, this means that a strict splitting of the processes should be preferred over the current approach that lumps the net effect of many small-scale processes within a single parameterization scheme, e.g., a stability function in the boundary-layer scheme. This preferred approach will open the way for a better understanding of the SBL and an improved representation of it in NWP and climate models. Conflict of Interest Statement The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The author acknowledges NWO VENI grant “Lifting the fog” (contract number 863.10.010), and all co-workers who contributed to this mini-review, i.e., Bert Holtslag, Liduin Burgering, Michal Kleczek, Marina Sterk, Bert Heusinkveld, Christine Groot Zwaaftink, Marcel Wokke, Sander Pijlman (Wageningen University), Bas van de Wiel (TU Eindhoven), Richard Bintanja (Royal Netherlands Meteorological Institute), Carmen Nappo (CJN Research Meteorology). The Supplementary Material for this article can be found online at: http://www.frontiersin.org/journal/10.3389/fenvs.2014.00041/abstract Figure S1 | Modeled surface wave stress components (lines), and measured turbulent stress (+) for a series of nights during the Cooperative Atmospheric Surface Exchange Study 1999. In the header the classification of van de Wiel et al. (2003) (Turb, Rad, Non) is indicated. (Ug, Vg) identify the geostrophic wind for the simulation. Atlaskin, E., and Vihma, T. (2012). Evaluation of NWP results for wintertime nocturnal boundary-layer temperatures over Europe and Finland. Q. J. R. Meteorol. Soc. 138, 1440–1451. doi: 10.1002/qj.1885 Bechtold, P., Köhler, M., Jung, T., Doblas-Reys, F., Letbecher, M., Rodwell, M. J., et al. (2008). Advances in simulating atmospheric variability with the ECMWF model: from synoptic to decadal time-scales. Q. J. R. Meteorol. Soc. 134, 1337–1351. doi: 10.1002/qj.289 Beljaars, A. C. M., and Viterbo, P. (1998). “Role of the boundary layer in a numerical weather prediction model,” in Clear and Cloudy Boundary Layers, eds A. A. M. Holtslag, and P. G. Duynkerke (Amsterdam: Royal Netherlands Academy of Arts and Sciences), 372. Bony, S., Colman, R., Kattsov, V. M., Allan, R. P., Bretherton, C. S., Dufresne, J.-L., et al. (2006). How well do we understand and evaluate climate change feedback processes? J. Clim. 19, 3445–3482. doi: 10.1175/JCLI3819.1 Bosveld, F. C., Baas, P., Steeneveld, G. J., Holtslag, A. A. M., Angevine, W. M., Bazile, E., et al. (2014). The GABLS third intercomparison case for model evaluation, Part B: SCM model intercomparison and evaluation, Bound. Lay. Meteorol. 152, 157–187. doi: 10.1007/s10546-014-9919-1 Burgering, L. M. T. (2014). Modeling Orographic Gravity Wave Drag in the Stable Boundary Layer and Determining the Influence on Cyclonic Filling. MSc thesis report, Wageningen University, Wageningen, 68. Costa, F. D., Acevedo, O. C., Mombach, J. C. M., and Degrazia, G. A. (2011). A simplified model for intermittent turbulence in the nocturnal boundary layer. J. Atmos. Sci. 68, 1714–1729. doi: 10.1175/2011JAS3655.1 Dutra, E., Viterbo, P., Miranda, P. M. A., and Balsamo, G. (2012). Complexity of snow schemes in a climate model and its impact on surface energy and hydrology. J. Hydrometeorol. 13, 521–538. doi: 10.1175/JHM-D-11-072.1 Edwards, J. M., McGregor, J. R., Bush, M. R., and Bornemann, F. J. A. (2011). 
Mycoplasma is neither a virus nor a bacterium; it is a tiny organism with traits of both. In some situations, environmental chemicals can irritate or inflame the airways or lungs, leaving them vulnerable to infection. These include tobacco smoke, dust, chemicals, vapors and fumes, allergens, and air pollution.
Risk Factors of Lower Respiratory Tract Infections
Risk factors that make a person more likely to develop a lower respiratory tract infection include:
- a recent cold or flu
- being below 5 years old
- being above 65 years old
- a weak immune system
Diagnosis of Lower Respiratory Tract Infections
During an examination, a doctor will usually identify a lower respiratory infection after reviewing the symptoms and how long they have been present. Using a stethoscope, the doctor will listen to the person's chest and breathing during the examination. To help diagnose the problem, the doctor may request tests such as:
- pulse oximetry, a test that determines how much oxygen is in the blood
- chest X-rays to check for pneumonia
- mucus samples, which are examined for bacteria and viruses
- blood tests for bacteria and viruses
Treatment of Lower Respiratory Tract Infections
Some infections of the lower respiratory tract clear up on their own. These less severe viral infections can be treated at home with plenty of rest, plenty of fluids, and over-the-counter treatments for a cough or fever. A doctor may also recommend further treatment, such as antibiotics for bacterial infections or respiratory therapies such as an inhaler. In some circumstances, a person may need to go to the hospital for IV fluids, antibiotics, or respiratory support. Infants and very young children may require more treatment than older children or adults. Doctors frequently monitor premature infants and infants with a congenital heart abnormality, for example, because they are at a higher risk of severe infections. A doctor may also be more likely to urge hospitalization in certain situations, and can prescribe similar treatment for patients aged 65 and up, as well as those with compromised immune systems.
Recovery Time of Lower Respiratory Tract Infections
A healthy young adult can recover from a lower respiratory tract illness, such as pneumonia, in about a week, according to the American Lung Association. However, elderly patients may take several weeks to recover fully.
Prevention of Lower Respiratory Tract Infections
A person can take several precautions to avoid catching a lower respiratory infection, including:
- washing their hands frequently
- avoiding touching their face with dirty hands
- avoiding people who have respiratory infections
- cleaning and disinfecting surfaces on a regular basis
- obtaining immunizations, such as the pneumococcal vaccine and the MMR vaccine
- getting a flu shot every year
- avoiding known irritants, such as chemicals, fumes, and tobacco
2. Throat Infections
It might be difficult to eat and even speak when you have a sore throat. The throat may also feel gritty and irritated, which can make swallowing more difficult. A viral infection, such as a cold or the flu, and bacteria are both common culprits. The majority of sore throats aren't worrisome, but severe symptoms might make it difficult to breathe. The degree and origin of a sore throat determine how a person handles it. Home treatments can usually relieve the discomfort until it goes away. However, it is sometimes necessary to seek medical help.
Throat infections can make you experience a loss of appetite.
Causes of Throat Infections
Viruses and bacteria commonly cause sore throats. Viruses cause many sore throats, including:
- colds and influenza
- the Epstein-Barr virus (EBV), which can cause infectious mononucleosis, also known as glandular fever or mono
If the symptoms are severe, the person should consult a physician. A doctor, however, will not prescribe antibiotics for a virus. Strep throat is a frequent throat ailment caused by exposure to a strain of Streptococcus bacteria. Signs and symptoms are:
- sudden throat pain
- white spots in the throat
- swollen and red tonsils
- crimson patches on the roof of the mouth
- swollen or tender lymph nodes in the neck
Antibiotics may be required to combat the infection and prevent complications. In children, strep throat, if left untreated, can lead to rheumatic fever or kidney inflammation. According to the Centers for Disease Control and Prevention (CDC), strep throat causes 20–30% of sore throats in children and roughly 10% in adults. Some more common causes of a sore throat include:
- allergies
- dry heat
- pollution or chemical irritants
- reflux, in which stomach acids flow back into the throat
- cold air
The following are some of the more serious but less prevalent conditions that can cause a sore throat:
- tumors of the throat, tongue, or larynx
- epiglottitis, a rare but potentially deadly throat infection in which the epiglottis swells and closes the airway, making breathing difficult; it is a medical emergency that requires immediate attention
Anyone experiencing persistent or severe symptoms should see a doctor, since they may suffer from an underlying disease that requires additional treatment.
Treatment of Sore Throat
After about a week, most sore throats go away on their own, although this depends on the cause. A doctor may prescribe antibiotics if a bacterial infection causes a sore throat. People should always complete the course, even if they feel better before finishing all the medication. A sore throat caused by a viral infection normally does not require medical attention. However, acetaminophen or other mild pain relievers can help with discomfort and fever, and pediatric versions of these drugs are available. A pharmacist can help you decide which ones to use and how much to take. It is critical to always read and follow the directions on any medication, and not to take more than is recommended. A person with epiglottitis might need to stay in the hospital for a while. In severe circumstances, they may require intubation to help them breathe. If testing identifies a tumor or another cause, the doctor will talk to the patient about treatment choices.
Prevention of Sore Throat
A sore throat can be prevented by following a few simple steps:
- Wash hands frequently, including after sneezing and coughing.
- Maintain a healthy diet and exercise routine to improve your overall health.
- If symptoms suggest a SARS-CoV-2 infection, get advice on COVID-19 testing.
- Cough or sneeze into a tissue, discard it, and wash both hands right away.
- Keep your hands away from your nose and mouth.
- Avoid close contact with people who have an infection and, if you are infected, stay away from others.
- Disinfect surfaces that are often touched, such as tabletops.
3. Blood Infections
Sepsis is a severe infection-related immunological reaction.
The immune system of a person with sepsis can harm tissues and organs, which can be fatal. Sepsis can occur because of an infection in the skin, lungs, urinary tract, or elsewhere in the body. Additionally, septicemia, a bacterial infection in the blood, is a prevalent cause. People sometimes mix up the terms "sepsis" and "septicemia," although the two are not the same thing.
Symptoms of Blood Infections
Anyone suffering from an illness who develops symptoms of sepsis should seek medical help immediately. When sepsis is severe, it can also lead to the following complications:
- extremely low blood pressure
- faintness or dizziness
- insufficient urine volume
- skin that is pale or discolored
- alterations in the person's mental state, including disorientation and diminished alertness
- a sense of impending calamity or a sudden fear of death
- slurred speech
- nausea or vomiting
Causes of Blood Infections
Sepsis can arise from infections caused by bacteria and from viral infections such as COVID-19. The infection can enter the body through a wound, as well as during and after surgery.
Risk Factors of Blood Infections
Anyone with an infection can get sepsis, although the risk is higher for people over the age of 65, babies under the age of one, and people with weaker immune systems. People suffering from long-term illnesses like diabetes, HIV, and cancer are also more susceptible to a blood infection.
Treatment of Blood Infections
For sepsis, a doctor will provide prompt therapy, which may include:
- providing oxygen and intravenous fluids
- scheduling surgery, if appropriate, to remove damaged tissue
- treating the cause of the infection
- providing assisted breathing
- administering antibiotics, if the infection is bacterial
Sepsis frequently requires hospitalization, and some patients need critical care.
Diagnosis of Blood Infections
A doctor diagnoses sepsis by:
- taking a medical history, including any recent infections or other incidents, while considering the person's symptoms
- monitoring blood pressure, temperature, and other indicators during a physical examination
- performing laboratory tests to identify the infection
While it is critical to treat sepsis as soon as possible, detecting it early can be difficult. Many of the symptoms, such as high fever, are also associated with other illnesses.
Prevention of Blood Infections
Sepsis can be avoided by taking preventative measures and obtaining quick treatment for any infections that occur. Other options include:
- receiving standard immunizations, such as flu and pneumonia shots
- taking precautions to avoid sores and wounds, and keeping any that occur clean
- following hand-washing guidance
- seeking medical help right away if symptoms of an infection are getting worse
4. Ear Infections
A bacterial or viral infection of the middle ear is known as an ear infection. This illness causes inflammation and fluid buildup in the ear's interior cavities. The middle ear is a space behind the eardrum that is filled with air. It has vibrating bones that translate the sound from outside the ear into messages that the brain can understand. Ear infections hurt because the swelling and accumulation of extra fluid put pressure on the eardrum. Ear infections can be acute or persistent, and chronic ear infections might harm the middle ear permanently.
Symptoms of Ear Infections
The signs and symptoms in adults are straightforward.
Adults with ear infections suffer from ear pain and pressure, as well as fluid in the ear and hearing loss. Other signs include:
- earache, especially while lying down
- tugging or pulling at the ear
Types of Ear Infections
- Acute otitis media (AOM)
- Otitis media with effusion (OME)
- Chronic otitis media with effusion (COME)
Causes of Ear Infections
A cold, flu, or allergic reaction frequently precedes an ear infection. These conditions increase mucus in the sinuses, causing the Eustachian tubes to drain fluid slowly. The nasal passages, throat, and Eustachian tubes will all be inflamed during the initial sickness.
Diagnosis of Ear Infections
Ear infection testing is a simple procedure, and a diagnosis can often be made based on symptoms alone. To examine for fluid behind the eardrum, the doctor will usually use an otoscope, a tool with a light attached. A doctor sometimes uses a pneumatic otoscope to check for infection. This device uses a puff of air to check for retained fluid in the ear: the eardrum will move less than normal if there is fluid behind it. If the doctor is unsure, he or she may perform further tests to confirm a middle ear infection.
Treatments of Ear Infections
For persistent infections, the American Academy of Family Physicians (AAFP) recommends acetaminophen, ibuprofen, or eardrops as pain relievers. These are useful for reducing fever and pain. A warm compress, such as a towel, can help to relieve the pain in the affected ear. If you have recurrent ear infections over several months or a year, your doctor may recommend a myringotomy, in which a surgeon creates a small cut in the eardrum to allow the built-up fluid to be released. To help air out the middle ear and prevent future fluid buildup, a tiny myringotomy tube is then implanted. These tubes are typically left in place for 6 to 12 months before falling out naturally rather than requiring manual removal.
Prevention of Ear Infections
Ear infections are very frequent, particularly among children. This is linked to a developing immune system and variations in ear architecture. There is no surefire way to avoid infection, but there are a few things you can do to lower your chances:
- Use antibiotics only when absolutely necessary. Those who have had an ear infection in their early years are more prone to further ear infections, especially if they were treated with antibiotics.
- Both you and your child should wash your hands frequently. This reduces the risk of bacteria spreading to your child and can help them avoid colds and flu.
However, they can help people manage it in most instances.
Prevention of Hypothyroidism
Although there is no way to prevent hypothyroidism, people who are at a higher risk of thyroid disorders, such as pregnant women, should consult their doctor about the need for more iodine. Screening is not suggested for people who have no symptoms unless they have one or more of the following risk factors:
- a history of autoimmune disease
- previous radiation treatment to the head or neck
- goiter
- a family history of thyroid disease
- the use of drugs that affect thyroid function
These individuals can be screened for early signs of the disease. If the tests are positive, they can take steps to prevent the disease from worsening. There is no evidence that a specific diet can prevent hypothyroidism, and there is no method to avoid hypothyroidism unless it is genetically transmitted.
9. Diabetic Ketoacidosis
DKA is a potentially fatal diabetes complication that develops when the body breaks down lipids for energy instead of carbohydrates. In people who do not have diabetes, insulin lets sugar enter cells, which use it for fuel. A person with diabetes cannot produce enough insulin to transport sugar properly, which means their body cannot utilize it for energy. When there isn't enough sugar available to the cells, the liver converts part of the body's fat into acids called ketones. Ketones accumulate in the bloodstream and are excreted in the urine. When excess ketones enter the bloodstream, the blood becomes acidic, resulting in DKA. DKA is a life-threatening condition, and anyone with diabetes should be aware of its signs and symptoms. DKA is more common in patients with type 1 diabetes, but it can affect anyone with type 1 or type 2 diabetes. Those who require insulin have a more severe form of diabetes and, as a result, are more likely to develop DKA. Ketosis-prone diabetes can also affect people with type 2 diabetes, particularly:
- persons in their later years
- ethnic groups that aren't white
Symptoms of Diabetic Ketoacidosis
Symptoms of diabetic ketoacidosis can appear suddenly and include:
- aversion to food
- nausea and vomiting
- soreness and pain in the abdomen
- dry skin and mouth, among many others
People with diabetes who monitor their blood sugar levels regularly may notice that their numbers have risen dangerously high. Others may experience DKA symptoms as the first sign of diabetes, leading to a diagnosis. Note that bacterial infections are one of the most common triggers of diabetic ketoacidosis. In this situation, delaying antibiotic treatment has been linked to an increase in morbidity and mortality. On the other hand, antimicrobial therapy that is administered unnecessarily may have a negative impact on the prognosis and can induce loss of appetite.
Causes of Diabetic Ketoacidosis
Extremely high blood sugar levels and low insulin levels cause diabetic ketoacidosis. Even with regular diabetes medication, a person can develop high blood sugar or low insulin because of illness or a complication with insulin therapy. Illness and infection alter some of the body's hormones, such as cortisol and epinephrine. These hormones alter the way insulin works in the body and can impair its efficacy, which some people may need to compensate for by taking more insulin while sick. Problems with prescribed insulin therapy can also trigger DKA, including:
- a missed insulin injection
- a clogged insulin pump
- failing to use the proper insulin dosage
The following factors can also trigger DKA:
- misuse of drugs or alcohol
Furthermore, DKA is most likely to develop in those with type 1 diabetes or those who routinely miss insulin doses. Even if blood sugar is not high, some diabetic treatments may raise the risk of DKA.
Treatment of Diabetic Ketoacidosis
Doctors try to stabilize blood sugar levels when treating diabetic ketoacidosis, typically with fluid replacement, electrolyte replacement, and insulin therapy.
A variety of factors can cause smell disorders, including:
- radiation therapy for cancer in the head or neck area
- nasal growths
- disorders affecting the nervous system, such as Alzheimer's disease or Parkinson's disease, among many others
Diagnosis for Loss of Sense of Taste
It is not uncommon to have a problem with your sense of taste.
Before the pandemic, more than 200,000 people in the United States went to the doctor every year because they couldn't taste or smell anything. According to some estimates, 5% of Americans have dysgeusia, and nearly one in every five Americans over the age of 40 has a change in their sense of taste. Otolaryngologists are specialists who can diagnose and treat both smell and taste problems. These specialists focus on problems of the ear, nose, and throat, as well as head and neck conditions. The doctor may examine the mouth and nose for growths, listen to the patient's breathing, and look for any indicators of illness. They will also look through the person's medical history and inquire about any drug use or probable harmful chemical exposure. The doctor will also examine a person's mouth and teeth to look for signs of disease and inflammation. To help diagnose the loss of taste, the doctor may apply certain chemicals directly to the individual's tongue or mix them into a solution that the person then swirls in their mouth. The way a person reacts to these compounds can help determine which part of the sense of taste is affected. It can take some time to figure out what kind of sensory loss the person has and how to treat it.
Treatment of Loss of Sense of Taste
The therapy options will be determined by the underlying illness that is causing the loss of taste. In mild cases, such as those caused by the common cold or flu, doctors will normally wait until the infection resolves. When the sickness is gone, most people's sense of taste should recover. However, data suggest that smell and taste abnormalities may linger after SARS-CoV-2 infection, especially in cases of long COVID. If a person has post-viral olfactory dysfunction or smell and taste problems following a viral illness, a doctor may consider employing olfactory training and topical corticosteroids, although the research is still uncertain. For patients who have bacterial infections, such as sinus or middle ear infections, doctors may prescribe antibiotics. Treatment for more significant problems, such as nervous system disorders, is individualized. The illnesses listed above can induce loss of appetite in older people, sometimes together with fatigue. We hope this article on loss of appetite has been insightful. In more serious or critical situations, visit your doctor rather than relying on over-the-counter medications. Note that loss of appetite can also result from psychological illness. Whatever the case may be, you can visit a physician for more professional counsel. Share this article with your loved ones and on your social media accounts.
G. MOHAN GOPAL explains how the Constitution replaces the Brahmanic idea of merit with a modern, democratic, rational, scientific and constitutional idea of merit.
RESERVATIONS for Other Backward Classes (OBC), Scheduled Castes (SC) and Scheduled Tribes (ST) are under unprecedented attack in India. The main anti-reservationist attack is that reservations kill merit. The anti-reservationists are right. Reservations are indeed killing merit — the Brahmanic idea of merit. That is what they are supposed to do. The Constitution replaces the Brahmanic idea of merit with a modern, democratic, rational, scientific and constitutional idea of merit.
The Brahmanic Idea of Merit
"Merit" literally means the "state or fact of deserving well". Merit has no standard or static content. Its benchmarks — "who deserves what" — are determined by the dominant values of society. The Brahminic idea is that merit is God-given, inherited and acquired exclusively through birth. Those who do not have it can never acquire it — such as women, avarnas and dogs. The idea of "guna" is the basis of the Brahmanic idea of merit. The renowned Monier-Williams Sanskrit Dictionary defines "guna" in this context as the "chief quality of all existing beings", consisting of a mixture of sattva (goodness or virtue), rajas (passion or darkness) and tamas (foulness and ignorance). The Bhagavad Gita declares that God divides humans into four "Varnas" (castes) on the basis of their merit — "gunas" — and "karma" (deeds) acquired in (unverifiable) previous births, and places each human being in every birth in the appropriate Varna. Status, power, wealth and opportunities are reserved for the four varnas, especially the first three, in descending order of abundance. Individual merit — what each individual deserves — is determined by the Varna of his or her birth. Those without any merit whatsoever (90% of the people) are outside the pale of the Varna system and are considered sub-human or non-human. Their lot is to work for the Savarnas in any capacity that they are asked to. They are slaves. They are degraded souls without any merit — undeserving of anything but dehumanization, exclusion and penury. Obedience to the Varna system is their only path to salvation. Shorn of its phantasmagoric superstitions, the Brahminic system of merit is simply a feudal system of inherited merit — a feudal system of reservation of opportunities, wealth and power for the children of the feudal elite, intended to maintain a stagnant society that preserves the hegemony of caste elitism. The real aim of the anti-reservationist movement is to restore this Brahminic idea of merit and Chaturvarna reservations. Their purpose is to protect and save elitism.
The Brahmanic Idea of Reservations
Brahmanism created and operated for centuries the longest-running and most extensive system of reservations in world history. The caste system is nothing but a massive system of reservations. Every single occupation, job and profession was catalogued and reserved for specific social groups. Merit was inherited. Privileges were inherited. There was no need for any selection system to identify the meritorious. God took care of that. Merit was clear from the Varna and jaati of one's birth. Each individual and each group was forced to do the same or similar jobs. This destroyed horizontal and vertical mobility and the ability to acquire new knowledge and skills. Slavery, indentured labour and forced labour (another Brahmanic scheme of reservations) thrived.
Surplus value created by producers was expropriated. All value, profits, benefits and assets belonged to the Brahmanical castes. Education and learning were carefully controlled. God and religion were liberally harnessed to reinforce the system. It was a brutal and fascist system of reservations that exacted an incalculable human cost. The Brahmanic idea of merit and the Brahmanic system of reservations are anti-social, anti-democratic, fascist, unacceptable and unimplementable in today's world.
The Constitutional Idea of Merit
The Constitutional idea of merit is derived from Constitutional values such as equality, liberty, fraternity, dignity, social justice, equity, due share and representation, as well as the Constitutional ideologies of democracy, secularism and socialism. The Constitutional world view is radically different from Brahmanism. It is derived from our freedom movement and our indigenous dalit cultures. It recognizes an infinite variety of talent and intelligence (literally, the ability to see what is not obvious) in every sentient being — human and non-human — and attributes equal value to all qualities and capabilities. It is non-hierarchical. The Constitutional idea of merit rejects the idea of disparities of merit, talent and intelligence between people. The Constitutional world view rejects the idea that possession of any particular set of qualities, attitudes, skills or knowledge (QASK) makes a human being wholly superior to another for all purposes and all contexts. Instead, it focuses on aptness of QASK with reference to specific tasks and roles. It focuses on excellence of tasks rather than on excellence of souls. The constitutional idea is that merit is human-made, acquired through life experience and effort. Merit is born in action and in effort. Mere humanity and the existence of life create their own merit for every human being. No one lacks merit. Merit is not inherited. It is not acquired or reduced by mere membership in a social group. As we cannot control the opportunities for action and effort available to us, all action and effort deserves equal recognition, value and reward. Every human being has the opportunity to build merit. Society has an obligation to recognize and reward merit in a rational and scientific manner. The benchmarks of merit must be determined democratically, not by an elite. Merit in a Constitutional system of selection would, for example, focus on benchmarks based on Constitutional values such as: (i) the potential of the candidate to advance the representation of social groups and the advancement of marginalized social groups and their members; (ii) life experience in facing and overcoming socially inflicted suffering, discrimination and oppression; (iii) contributions to the struggles for liberty, equality, dignity and fraternity and for the social transformation of the country; (iv) empathy and compassion for the powerless; (v) knowledge of the human condition of the masses and of solutions that would help common people to address the problems they face; (vi) commitment to Constitutional values, especially of democracy and secularism; and (vii) interest in, and commitment to, the opportunity that is sought. Knowledge and skills related to work or study, such as domain-matter expertise and communication skills, are "teachable" skills that can always be acquired as needed.
The Constitutional Idea of Reservations
Constitutional reservations are based on the Constitutional idea of merit — "who deserves what" is determined with reference to Constitutional values. The first purpose of the Constitution — born in the middle of the blood bath of Partition — is to ensure that Indians never slaughter each other again as they did in 1947. The Constitution is based on the lesson of our history that civil war and invasion are inevitable when power is in the hands of elites. To this end, a fundamental goal of the Constitution is the inclusion and representation of the voice of all segments of the people in affairs of the State, and their integration in four critical domains from which common people have been excluded for centuries — the legislature, the executive, the judiciary and education. This is essential for peace, harmony, social stability and for the survival of the Republic. The Constitutional project of reservation seeks the restoration to backward classes of their due share of opportunities, based on population share, that was stolen from them through caste discrimination. It seeks the democratization of the State and of society. Reservation will also require disgorgement of the undue share that was hoarded by oppressor castes. It responds to the sentiments expressed by Mr. S. Nagappa, a Dalit member of the Constituent Assembly, who told the Assembly on 27th August 1947: "I want my due share; though I am innocent, ignorant, dumb, yet I want you to recognise my claim. Do not take advantage of my being dumb. Do not take advantage of my being innocent. I only want my due share and I do not want anything more. I do not want, like others, weightage or a separate state. Nobody has a better claim than us for a separate state. We are the aboriginals of this country." The Constitutional system of reservations would be based on a clear recognition that reservations are a fundamental right triggered by group backwardness arising from discrimination and non-representation. The Constitutional system of reservations would not be artificially restricted by limitations such as those imposed by the judiciary — the 50% ceiling, creamy layer exclusion and non-applicability to promotions (for OBCs). Reservations, including the determination of the population share of social groups, would be based on data generated by a community/caste census. The Constitutional selection and reservation system must be open to all sections of society and should not favour the knowledge or culture of any group over another. As noted, the reservation system should use Constitutional values as the basis for constructing the benchmarks that define merit. Social considerations would be fully integrated into the selection process, as they flesh out and thicken merit. They are not, and should not be misunderstood as, "non-merit" criteria. There would be no sub-speciation into "superiors" and "inferiors" through standardized testing. Instead of standardized testing, Constitutional reservation would use decentralized and customized multi-point and multi-modal systems of selection tailored to the needs of the hiring/admitting institution. Constitutional reservation would identify specific areas of study and work for which admissions and recruitment are to be undertaken, and the qualities, attitudes, skills and knowledge (QASK) needed by institutions to enhance their excellence in each area. This would then be matched to the QASK of candidates while maintaining the objective of representation, as sketched below.
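The mechanics of such a selection can be made concrete with a small illustration. The sketch below is not from the article and is purely hypothetical: the criteria, weights, group shares and the two-pass seat-filling rule are all assumptions, chosen only to show how social considerations can sit inside a composite merit score rather than outside it.

```python
# Hypothetical sketch of a multi-point selection that integrates
# representation into merit scoring. All criteria, weights and the
# due-share rule below are illustrative assumptions, not a prescribed system.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    group: str                                 # social group, for the representation objective
    qask: dict = field(default_factory=dict)   # assessed QASK scores in [0, 1]

# Hypothetical institutional weights over multi-modal assessments.
WEIGHTS = {
    "domain_aptitude": 0.30,   # teachable knowledge and skills
    "life_experience": 0.30,   # overcoming discrimination and oppression
    "empathy": 0.20,           # empathy and compassion for the powerless
    "commitment": 0.20,        # commitment to the opportunity sought
}

def merit_score(c: Candidate) -> float:
    """Composite merit: social considerations are part of the score itself."""
    return sum(w * c.qask.get(k, 0.0) for k, w in WEIGHTS.items())

def select(candidates: list, seats: int, due_share: dict) -> list:
    """Fill each group's due share of seats first, then the rest by score."""
    ranked = sorted(candidates, key=merit_score, reverse=True)
    quota = {g: round(share * seats) for g, share in due_share.items()}
    filled = {g: 0 for g in quota}
    chosen = []
    # Pass 1: each group's due share is filled from its own top scorers.
    for c in ranked:
        if len(chosen) < seats and filled.get(c.group, 0) < quota.get(c.group, 0):
            chosen.append(c)
            filled[c.group] += 1
    # Pass 2: any remaining seats go to the highest composite scores overall.
    for c in ranked:
        if len(chosen) >= seats:
            break
        if c not in chosen:
            chosen.append(c)
    return chosen
```

On this design, a group's population share sets a floor on seats, while the composite score, which already embeds social criteria, orders everyone; no candidate is selected on "non-merit" grounds because representation is itself a component of merit.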
Exorcising Elitism from Reservations
The Constitution makers decided to leave to the political executive the task of devising the "special measures" to be taken to operationalize the Constitutional idea of merit and the Constitutional vision of reservations. Unfortunately, this decision has turned out to be a big mistake. Instead of creating a new, democratic, rational system of selection in line with the Constitutional idea of merit and the Constitutional vision of reservations, the bureaucracy and judiciary preserved — and vastly expanded — the colonial-era elitist system of selection through standardized testing based on examinations and interviews. Standardized testing is a method of selection with an illicit purpose — to neutralize equality and legitimize inequality. A brief detour into the biography of testing is required to understand its true nature. In "The Measure of Merit" (2007), an epic comparative study of merit in France and the United States (1750–1940), Prof. John Carson describes a dilemma faced by France and the United States which, incidentally, India avoided in its Constitutional idea of merit and vision of reservations. Prof. Carson writes, "[H]aving toppled aristocratically organized societies in the name of natural rights and the people's sovereignty, what would be put in their place? How could a new elite be selected and justified within a political ideology also celebrating equality and human and universal rights? How in other words could inequality be rendered legitimate?" Prof. Carson argues that "these two new democratic republics, dedicated to some version of equality turned to understandings of human nature to reinstitute inequality on a new, seemingly more 'rational' footing". Prof. Carson says that both France and the United States "confronted the issue of difference, especially natural inequality" and "both in one way or another eventually turned to testing as a way of establishing merit, and in each the response was as much one of anxiety as approbation. Some worried that the tests might be wrong: inaccurate, ill-conceived, and doomed to choose the wrong people based on the wrong criteria. Others, however, were unsettled by the opposite: that the tests might conceivably be right, and thus that some people really were naturally better than others. What if inequality was the product not of poor environment or personal choices but the luck of the genetic draw, and what if scientists could 'see' the difference?" Testing fits neatly into the ideological world view of France and the United States. They are constitutionally committed to equal protection of the law and equality before the law. They are also committed to equality of opportunity. However, they are not constitutionally or politically committed to equality of outcomes, as they are market economies that rely on inequality of outcomes as incentives for the creation of wealth — France claims to be relatively more committed to equality of opportunity, through maintaining a welfare state, than the United States is. France and the United States rejected an elite composed of a hereditary aristocracy. But they did not reject the idea of elitism. They accept the idea of a meritocracy run by the "best and the brightest". Therefore, France and the United States have a system of selection based on testing that will identify those with a greater endowment of privileged qualities (merit) and constitute them into a new elite who will rule as a meritocracy.
As democracies, however, France and the United States sought to use "affirmative action" to compose a new elite drawn from all sections of society, not just from an upper "caste". They universalised the catchment area from which superior talent and intelligence would be identified to join the elite. Prof. Carson cites Sciences Po's justification of its programme for graduates of selected high schools in economically depressed neighbourhoods on the basis that "the traditional selection system…privileged those coming from good cultural backgrounds who were exposed throughout their lives to books and learning", whereas "the supplemental affirmative action procedure sought to 'find intelligence where learning had been more scarce'". Prof. Carson also comments on "the curious role of the concept of intelligence" in the two countries. He argues that "In the American case, intelligence — particularly in the form of standardized test results — was a weapon of the plaintiffs, one of the means of trying to demonstrate that…admissions procedures were choosing less able candidates over more deserving ones. In the French case, however, it was the reverse. There, supporters of affirmative action appropriated the language of intelligence, using it to suggest that there was a criterion of merit not completely captured by the gruelling admissions examinations that formed the usual route into the 'grandes ecoles'." Intelligence has been used in India also as a weapon against the masses. This was a turn of events that caused anxiety to Mr. S. Nagappa in the Constituent Assembly on 27th August 1947, quoted earlier. In "The Tyranny of Merit", Prof. Michael J. Sandel makes a strong argument against elitism, meritocracy and standardized testing. Sandel says, "Measures of merit are hard to disentangle from economic advantage. Standardized tests such as the SAT purport to measure merit on their own, so that students from modest backgrounds can demonstrate intellectual promise. In practice, however, SAT scores closely track family income. The richer a student's family, the higher the score he or she is likely to receive…In an unequal society, those who land on top want to believe their success is morally justified. In a meritocratic society, this means the winners must believe they have earned their success through their own talent and hard work…[H]igher education is not the meritocracy it claims to be…We need to ask whether the solution to our fractious politics is to live more faithfully by the principle of merit, or to seek a common good beyond the sorting and the striving." Prof. Sandel "offers an alternative way of thinking about success — more attentive to the role of luck in human affairs, more conducive to an ethic of humility and solidarity, and more affirming of the dignity of work." Jeff Brenzel, Dean of Undergraduate Admissions at Yale University, one of the top universities in the world, says, "[Admissions] testing [is] actually one of the less important elements in the file…The most important part of your application — bar none, no question, any college — is your high school transcript. Probably the next most important are your teacher recommendations, particularly if you're applying to any kind of selective college or university." In the context of the question of whether the CBSE Board exam should be held, Prof.
Disha Nawani, Professor and Dean, School of Education, Tata Institute of Social Sciences, Mumbai, wrote in the electronic media on June 3, 2021: "It must be understood that the focus in these exams is to master the techniques to crack them and not necessarily engage with learning…Exams, especially board exams, continued to flourish and shape learning as memorizing textbook content, designing textbooks so as to contain information, dictating teachers to elucidate content and telling students to memorize content, with or without comprehension." The National Education Policy, 2020, sets out a framework for a shift in assessments of merit towards a multi-dimensional approach. It says, "The [proposed] National Testing Agency (NTA) will work to offer a high-quality common aptitude test, as well as specialized common subject exams in the sciences, humanities, languages, arts, and vocational subjects, at least twice every year. These exams shall test conceptual understanding and the ability to apply knowledge and shall aim to eliminate the need for taking coaching for these exams. Students will be able to choose the subjects for taking the test, and each university will be able to see each student's individual subject portfolio and admit students into their programmes based on individual interests and talents." Entrance examinations such as the ones we use in India test a very narrow range of rote memory. They tell us nothing about the candidate other than his or her knowledge of what is asked. Yet, based on this single-point, primitive method of testing, our primitive oligarchic castes assert their "merit" — and the lack of merit of OBCs, SC and ST — with a certainty that reveals nothing more than their utter illiteracy on the matter. Standardized testing is structured to give an advantage to the kinds of skills and knowledge that are valued in Savarna-dominated middle-class and upper-class cultures — rote learning of trivial facts on issues that are their bread and butter, easily accessible to them in print or electronic form. Standardized testing, a tool of elitism, undermines the Constitutional idea of merit and the Constitutional vision of reservations. Yet it remains our principal method of selection even in the eighth decade of our Constitution.
Social Justice Considerations and Merit
Most public institutions in India isolate social considerations and keep them outside the scoring system. They are not considered an integral part of the merit of the candidates. They are seen as diminishing merit. As a result, candidates (including those entitled to reservations) feel that selections are driven by considerations other than merit, and that those selected "lack merit". The judiciary has taken the same view. As one Supreme Court judgment puts it: "…[A]s the post gets higher, it may be necessary, even if a proportionality test to the population as a whole is taken into account, to reduce the number of Scheduled Castes and Scheduled Tribes in promotional posts, as one goes upwards. This is for the simple reason that efficiency of administration has to be looked at every time promotions are made. As has been pointed out by B.P.
Jeevan Reddy, J.’s judgment in Indra Sawhney (1) (supra), there may be certain posts right at the top, where reservation is impermissible altogether.” This may be contrasted with the system in the Jawaharlal Nehru University, New Delhi, where social considerations are integrated within the framework of assessment of merit of the applicants It may also be contrasted with the approach in the United States and France where also social considerations are included within the framework of merit rather than applied separately from merit. Two alternatives are competing to replace the elitist, feudal and fascist Brahminic idea of merit and Brahminic reservations that was overthrown in 1950. One option, currently used, is to continue the colonial practice of making selections through a neo-elitist system of standardized testing that narrows the benchmark for merit into marks scored in low quality mass examinations with no rational correlation to the needs of the institutions for whom which selections are made or to Constitutional goals. The second option is to develop a new rational, non-elitist, pragmatic approach to selection that aims to achieve the Constitutional goals of representation, restoration of equal opportunity and advancement of those excluded in order to create a just social order — a selection system based on Constitutional rationality. Social reservations are a unique and unparalleled Constitutional social justice innovation that introduces in India a rational, scientific, and just basis for democratic power sharing in a deeply unequal social order that for centuries had its own system of fascist reservations — reserving status, power, wealth and education for a Savarna oligarchy. It is but natural that there is a strong backlash from the old elite against Constitutional reservations, demanding the de facto restoration of Brahminic reservations and the preservation of elitism. The survival of the Republic depends on the ability of India to integrate and democratize its most important institutional spaces — the legislature, the executive, the judiciary, and education — as the foundation of a just unity that will preserve peace and offer an opportunity for ending socially inflicted poverty and suffering in our country. [Professor G. Mohan Gopal is former Director, National Judicial Academy, and former Director (VC), National Law School of India University, Bengaluru. The views expressed are personal.]