In the last issue of the newsletter, we explored the anatomy and physiology of the liver from a natural health perspective. In this issue, we get to reap the reward for our diligence. We cover what can go wrong with the liver, how doctors test for it, and what you can do about it — again from a natural health perspective. In addition, we will spend some time on the gallbladder and biliary tree, the bile ductwork that ties everything together. Considering that gallbladder removal (cholecystectomy) is now one of the most common surgeries in the world, with over a half million performed each year in the U.S. alone, that should be of interest to a number of people. In fact, roughly 20 million Americans suffer from gallstones, and 750,000 of them undergo cholecystectomies each year. There are 800,000 hospitalizations and $2 billion spent annually on gallbladder disease in the U.S. The bottom line is that gallbladder surgery pays for many boats for many doctors every year — and there are far better, less expensive ways to deal with the problem.
What can go wrong with the liver
As we learned in the last issue of the newsletter, the liver is amazingly resilient and, at the macroscopic level, not much goes wrong with it. Because it is so well protected, it is rarely affected by trauma, but when it is (automobile accidents, war, etc.), the injury is often fatal because of the large blood supply that serves the organ. Likewise, although primary liver cell cancer is common in Africa and Asia (related to a very specific combination of “insults” to the liver’s cells), it is very rare in the United States and the rest of the developed world where those insults tend not to exist. Although hepatitis (particularly hepatitis B) and cirrhosis can be contributing factors, the primary cause of hepatocellular carcinoma is aflatoxin B1.
Aflatoxin B1 is the most potent liver cancer-forming chemical known. It is a product of a mold called Aspergillus flavus, which is found in food that has been stored in a hot and humid environment (common storage conditions in much of the third world, especially Southern China and Sub-Saharan Africa). This mold is found in such foods as peanuts, rice, soybeans, corn, and wheat (all staples in the third world). It is thought to cause cancer by producing changes (mutations) in the p53 gene. These mutations work by interfering with the gene’s important tumor suppressing (inhibiting) functions. Generally, both hepatitis B and aflatoxin B1 are required for hepatocellular cancer.
That said, metastatic cancer, which is carried to the liver from other organs (think back on how the portal system feeds blood from the intestinal tract, pancreas, and spleen through the liver), is very common.
Hepatitis A is a viral disease that affects the liver. Transmission can occur through:
- Direct person-to-person contact
- Exposure to contaminated water or ice
- Contaminated shellfish (think oysters on the half shell)
- Fruits, vegetables, or other foods that are eaten uncooked and that were contaminated during harvesting or subsequent handling.
The symptoms of hepatitis A are fever, lack of appetite, nausea, and fatigue, and then jaundice. Jaundice is a yellow or orange tint to the skin or whites of the eyes. Some persons with hepatitis A will have no symptoms at all — especially children. The symptoms of hepatitis A, if you have them, usually last about one or two weeks, and, in most cases, no specific treatment is required in order to get better. Infected persons shed the virus in their stools from a week or two before symptoms begin until a few days after jaundice begins. Because of this, persons who are ill with hepatitis A should not work in restaurants, child care centers, or nursing homes until their symptoms have resolved.
The hepatitis A IgM test is used to screen for early detection of infection and is used to diagnose the disease in patients with evidence of acute hepatitis. Hepatitis A IgM is the first antibody produced by the body when it is exposed to hepatitis A. On the other hand, hepatitis A IgG antibodies develop later and remain present for many years, usually for life, and protect you against further infection by the same virus. There is no test specifically for hepatitis A IgG antibodies, although a total antibody test (which detects both IgM and IgG antibodies) detects both current and former infection with hepatitis A and will remain positive even after receiving the hepatitis A vaccine.
Hepatitis B infection results from exposure to infectious blood or to body fluids containing infected blood. Possible forms of transmission include (but are not limited to) unprotected sexual contact, blood transfusions, re-use of contaminated needles and syringes (which explains why the incidence of hepatitis B among drug users is so high), and transmission from mother to child during childbirth. It should also be noted that if you are into the latest fashion trends centered around body piercing and tattooing, you have to be extremely careful with the equipment that is used on you. Make sure the equipment is totally sterile. Using non-sterile equipment can transfer the hepatitis B virus or other blood-borne diseases to your body.
Also, be careful when eating out. Eating uncooked, raw food or eating from outside vendors can infect you with hepatitis B. This is of particular note when visiting third world countries, but can still be a problem in any developed country.
Symptoms of hepatitis B include:
- Loss of appetite
- Nausea and vomiting
- Itching all over the body
- Pain over the liver (on the right side of the abdomen, under the lower rib cage)
- Urine becomes dark in color — not yellow, but dark like tea
- Stools are pale in color (grayish or clay colored)
The danger of hepatitis B is that it can become acute, and then chronic — ultimately leading to severe liver damage. Unfortunately, there is no treatment that can prevent acute HBV infection from becoming chronic once you get it. The degree of liver damage is related to the amount of active, replicating (multiplying) virus in the blood and liver. Antiviral agents, the medical treatment of choice for chronic hepatitis B, do not work in all individuals with the disease, and may not even be required, as in many cases the infection resolves itself over time.
Although it’s difficult to prevent hepatitis B from progressing if you get it, it is possible to protect yourself from getting it in the first place through immunization. The primary test for hepatitis B is for HBsAg (the hepatitis B surface antigen). Its presence indicates either acute or chronic hepatitis B infection.
Hepatitis C (HCV) is the most dangerous of the hepatitis viral infections, and it is the most common cause of chronic liver disease in North America. It is difficult for the human immune system to eliminate the virus from the body once infected, and infection with HCV usually becomes chronic. Over time (often decades), hepatitis C damages the liver and can lead to liver failure. As mentioned, it is difficult for the immune system to clear the virus — with up to 85% of newly infected people failing to clear it — and thus most people become chronically infected. It is estimated that in the U.S. alone more than three million people are chronically infected with hepatitis C, with between 8,000 and 10,000 people dying each year. In the U.S., hepatitis C is the leading cause of liver transplant surgery.
Treatment usually involves a combination of an antiviral (most often ribavirin) and alpha interferon. Alpha interferon is an antiviral protein normally made in the body in response to viral infections. The alpha interferon used in treating hepatitis C, however, is not natural. It is a recombinant form that usually involves the addition of a large molecule of polyethylene glycol to “improve” uptake, distribution, and excretion of the interferon, not to mention prolonging shelf life — and of course, increasing profits for the companies holding patents.
Peginterferon (owned by Roche), the current alpha interferon of choice, can be given once weekly and provides a constant level of interferon in the blood, whereas standard interferon must be given several times weekly and provides intermittent and fluctuating levels. In addition, peginterferon is more active than standard interferon in inhibiting HCV and yields higher sustained response rates with similar side effects. Because of its ease of administration and better efficacy, peginterferon has replaced standard interferon both when used alone and as part of a combination therapy for hepatitis C.
Combination therapy can indeed lead to rapid improvements in up to 70 percent of patients, but it often doesn’t last. Long-term improvement only occurs in 35-55 percent of patients. And unfortunately, there are side effects, which frequently include profound fatigue, headache, fever, muscle pain and chills. In fact, that’s just the tip of the iceberg.
Fortunately, there are natural alternatives. Ten years ago, I was introduced to someone who had hepatitis C and who reacted badly (extremely so) to his interferon treatments. By the time I met him, he had reached the point that he had stopped his interferon treatments, as death was preferable to the side effects associated with his treatment. As I said, those side effects can be profound. Fortunately, using a different approach, which we’ll talk more about at the end of this report, he was able to drop his numbers to undetectable levels — and maintain those for years. When I last spoke to him about two years ago, he was still symptom free after eight years — and that’s despite never giving up many bad habits including heavy, daily cigarette smoking. Since then, I have personally seen that experience duplicated several more times with other HCV patients.
Testing for hepatitis C usually involves a series of five tests — each filling in a piece of the puzzle.
- Anti-HCV tests detect the presence of antibodies to the virus, indicating exposure to HCV. These tests cannot tell if you still have an active viral infection, only that you were exposed to the virus at some point in the past.
- HCV RIBA testing confirms the presence of antibodies to the virus. It is used to verify the results of the Anti-HCV test.
- HCV-RNA testing identifies whether your infection is active.
- Viral Load or Quantitative HCV tests determine the level of infection and are used to determine if treatment is working.
- Viral genotyping is used to determine exactly which type of hepatitis C is present. As it turns out, there are 6 major types of HCV, and they all respond differently to treatment. This test is often ordered before treatment to give your doctor an idea of the likelihood of success and how long treatment may be needed.
Cirrhosis of the liver
Cirrhosis is a degenerative disease of the liver that is often caused by alcoholism, but also may result from hepatitis and even parasites. It is characterized by the formation of fibrous tissue, nodules, and scarring, which interfere with liver cell function and blood circulation and can often lead to blood backflow. Symptoms include weakness, weight loss, fatigue, abdominal swelling due to fluid accumulation, clotting defects, jaundice, and tenderness and enlargement of the liver. Cirrhosis is untreatable and, when advanced, ends in portal hypertension, liver failure, hepatic coma, and death. The primary tests for cirrhosis are a prolonged prothrombin time (a test that measures how long it takes blood to clot) and decreased albumin. As discussed last issue, the liver makes all of the blood’s prothrombin and fibrinogen (clotting factors), as well as albumin, the major blood protein. Thus, tests indicating low levels of these proteins are indicative of liver problems.
Liver enzyme tests
A simple liver blood enzyme test is often your doctor’s first step in determining liver problems. The test is simple. Under normal circumstances, liver enzymes reside exclusively within the cells of the liver, but if the liver is injured for any reason, these enzymes spill out into the blood stream. Thus, if tests reveal them in the bloodstream, it’s an “indication” of problems. Specifically, your doctor is looking for the two aminotransferase enzymes: aspartate aminotransferase (AST or SGOT) and alanine aminotransferase (ALT or SGPT). Again, if these enzymes are found in the bloodstream, they are indicative of liver problems. They are not, however, conclusive.
Higher-than-normal levels of these liver enzymes do not automatically mean that you have liver problems. For example, high levels of these enzymes can be caused by muscle damage — such as that produced by intense exercise. Moderate alcohol intake can also raise levels, as can aspirin. Also, even if the levels are raised as a result of real liver problems, the actual levels are not indicative of the extent of liver damage. For example, patients with hepatitis A may demonstrate very high levels for one to two weeks before the condition, as mentioned earlier, totally resolves itself and goes away. On the other hand, patients with chronic hepatitis C infection typically show very little elevation. Again, liver enzyme tests merely indicate a potential problem.
In addition to the liver enzyme test, the prothrombin time test, and the albumin test mentioned above, a complete liver panel will usually include one more test, the bilirubin test. Again, as we discussed last issue, the liver excretes bilirubin, the broken-down pigment from dead red blood cells, by metabolizing it with bile salts and excreting it through the feces. Bilirubin is what makes our feces brown. If, for some reason, bilirubin is not excreted (as in obstructive jaundice), the feces will turn clay-colored. Likewise, if bilirubin is found in the bloodstream, it’s indicative that something is amiss in the liver and that bilirubin is flowing in the wrong direction — out into the bloodstream.
Gallstones and the biliary system
As we discussed last issue, gallstones don’t start in the gallbladder; they are related to cholesterol metabolic defects originating in the liver itself. They also happen to be associated with obesity and pregnancy. Essentially, if the cholesterol produced in your liver is too thick and becomes too concentrated in the bile and sits too long in the gallbladder, it can crystallize and form gallstones. It is estimated gallstones result in some 600,000 hospitalizations and more than 500,000 operations each year in the United States alone. Bottom line: it’s one of the most prevalent digestive disorders known.
The usual treatment is laparoscopic surgery to remove the gallbladder. The surgery itself has now become so routine that it can be completed in about an hour and the patient leaves the same day — back to work the next day.
However, because it does not address the underlying cause of the problem (metabolic issues in the liver), gallbladder surgery often does not resolve the patient’s discomfort. And because it eliminates the body’s regulating mechanism for the release of bile when needed, it often creates new digestive problems of its own. In fact, after gallbladder removal, some 13% of patients report persistent pain. Another 17% report chronic diarrhea, and another 20% report intermittent digestive problems and pain. The bottom line is that although surgeons will report an almost 100% success rate for the surgery, patients will report a 50% failure rate. It’s all a matter of perspective. The surgeon considers the surgery successful if the patient survives, there are no immediate problems, and she collects her fee without a lawsuit. The patient, unfortunately, has to live with the long-term results.
The biliary tree
The biliary tree is the anatomical term for the treelike path by which bile is secreted from the liver on its way to the duodenum.
It is referred to as a tree because it begins with a multitude of small branches coming from the thousands of liver lobules, which empty into the common bile duct, sometimes referred to as the trunk of the biliary tree. Hanging off the trunk, tucked up into the liver, is the gallbladder. It is a secondary outpouching, if you will — an outpouching of the bile duct coming from the liver, which is itself an outpouching of the digestive tract. The gallbladder lies in a groove under the liver, between the two lobes, and is a soft, thin-walled sac, shaped like a fat carrot, with its narrow end pointing toward the bile ducts.
Liver duct system
Bile drains from the ultra-small bile ducts (ductules) that service each of the liver’s tens of thousands of lobules into progressively larger ducts, culminating in the common bile duct. The right and left hepatic ducts join just outside the liver to form the common hepatic duct.
Bile passing down the common bile duct enters and exits the gallbladder through the cystic duct. Most physicians refer to the gallbladder as a vestigial organ (as they do the appendix) — meaning that it’s lost most of its original function and now pretty much “gets in the way.” To them, this explains why the gallbladder does not usually empty completely, which allows gallstones to form — leading to pain, infection, inflammation, and even cancer. This also explains why they remove upwards of half a million gallbladders a year in the United States alone.
They are wrong!
The gallbladder serves a definite function. It is not vestigial. It regulates the flow of bile so that it can “push out” into the digestive tract in bursts as needed to assist in the digestion of fats. In fact, the gallbladder will contract to squeeze out stored bile when stimulated by a fatty meal. Without the gallbladder, bile merely dribbles out in a constant flow, thus being present when not required and insufficiently present when needed. This can lead to a whole series of digestive problems including poor digestion, intestinal distress, diarrhea, and an inability to fully break down fats. In fact, many people, as they age, need to take an ox bile supplement (available at all health food stores) with their meals to compensate for insufficient bile in their digestive tracts. If you have digestive problems after eating fatty meals, it’s one of the first things you (and your doctor) should look at.
It is important to understand that problems with the gallbladder rarely stem from the gallbladder itself. They stem from the liver, which if not functioning properly will manufacture bile that is prone to “stoning.” Thus removing the gallbladder does not eliminate the problem; it merely eliminates ONE place problems can manifest. Where else can problems manifest? If you follow the biliary tree down past the gallbladder, you will find that the common bile duct joins the pancreatic duct before entering the duodenum through the ampulla of Vater. And there’s the problem. Although stones and sludge formed in the liver can no longer get trapped in the gallbladder (if it’s been removed), they can still quite easily get lodged in the pancreatic duct and ampulla of Vater. This causes the digestive juices secreted by the pancreas to back up into the pancreas itself and start inflaming and digesting pancreatic tissue. This is called pancreatitis.
In other words, by merely removing the gallbladder and not addressing the underlying problem of “bad bile” being formed in the liver, you may potentially merely be moving symptoms from the gallbladder to the pancreas. Fortunately, there are alternatives. Dietary changes will often help. But the best way to optimize the health of your liver, gallbladder, and pancreas is to regularly cleanse and flush the liver and gallbladder.
The liver gallbladder flush
Of all the things I talk about in my books and newsletters, the one that medical doctors have the hardest time with is detoxes and flushes. In fact, the “scientific” community will regularly speak out against the concept. But most of that hostility comes from confusion, misunderstanding, and prejudice. Yes, it’s true that there is a great deal of “noise” that contributes to that confusion. A search on the internet shows that the word detox has been associated with everything from shampoos to footpads. On the other hand, it’s not that hard to separate the wheat from the chaff — if one wants to. Certainly there’s a whole lot of chaff in the medical community that must be ignored: hormone replacement therapy, angioplasties, and Tamiflu to name just a few.
That said, the principle of the liver/gallbladder flush is simple. You deprive the body of all fats and oils for a period of time to allow bile and cholesterol to build up in the liver and gallbladder. You then consume a drink containing a large amount of olive oil, which requires the liver and gallbladder to purge all of their bile in an attempt to digest this sudden intake of fat. This produces a figurative “wringing” action on both the liver and gallbladder causing them to empty. In addition to the purging of bile and cholesterol, a good flush will also help the liver purge accumulated fats and toxins. There are several cautions when doing a liver/gallbladder flush.
- You will want to have done an intestinal cleanse before doing the liver flush. Why? Because when the liver and gallbladder purge, they dump into the duodenum. If the intestinal tract is not flowing smoothly, the purged bile and toxins can either back up into the bloodstream through the liver or be reabsorbed into the bloodstream through the intestinal tract. This can lead to a cleansing reaction.
- You will want to soften any gallstones before doing the flush. Otherwise, if the stones are large and hard, it will be quite painful (possibly even harmful) when the hard rough stones are squeezed through the bile ducts. At one time I used to recommend products such as Phosfood Liquid, Super Phos 30, and liquid extracts of chanca piedra. And they work. In the end, I designed my own softening formula that works far better and faster than these other alternatives — often in a matter of one to two hours. But more importantly this formula helps with all kinds of stones including kidney, gallbladder, and pancreatic. In any case, you will want to do one of these programs before doing a liver detox to soften the stones.
One day versus five day liver cleanses
If you search under liver flushes on the net, you will find two programs recommended — a five day program and a one day program. The principles of both programs are the same. The one day program is essentially the same as the last day of the five day program. I prefer the five day program for a number of reasons.
- You get to build the strength of the morning purge drink from one to five tablespoons of olive oil over five days. This not only provides a cumulative effect; it also allows the body to adapt, thus making the five tablespoon drink easier to handle.
- Whereas both programs will purge the gallbladder, the five day program does a much better job of purging the liver too.
- The five day program is accompanied by herbal teas and tinctures that contain:
- Lipotropics to help purge fats from the liver
- Antiparasitic herbs to help flush parasites from the liver
- Liver rebuilding herbs, such as milk thistle and Picrorhiza kurroa, that help regenerate liver function
- In addition, the five day program is accompanied by juice fasts that help the entire body rebuild and repair itself
What you can expect on the liver detox
If you are so inclined (and you should be), you should examine what you deposit in the toilet during the liver/gallbladder flush. Check for “stones” which may or may not be visible. The bile from the liver gives some stones their typical green color, but also look for black, red, and brown stones, as well as stones with blood inside them. During the course of the cleanse, some people will pass many. Be glad, because the more you pass, the healthier you become. You may also find untold numbers of tiny white cholesterol “crystals” mixed in with the waste. But do not be fooled. Oftentimes, the olive oil is converted into little “soap beads” in the intestinal tract, and many people confuse these little beads with actual stones. Also, keep in mind that if you are softening your stones before doing the flush, they will develop the consistency of toothpaste — thus they will be significantly elongated when “squeezed” out and not look very beadlike at all. And if you are taking psyllium during the program (which I recommend), most of the waste will be encased by the psyllium and not be visible at all.
If you don’t notice anything, though, it doesn’t mean the flush is not working. Also, many people don’t have gallstones. But they do have toxins and accumulated fat in the liver, and those are being purged. In the end, though, it’s not what you see, it’s how you feel. Wait for a few days after the cleanse and then evaluate. Did you lose weight? Do you feel lighter and cleaner? Did your senses come alive? Does food taste better? Are colors brighter? Is your breathing a little easier, less congested? These are the true evaluations of the liver detox.
Go to the liver detox site
I am not going to go into the details of how to do a liver flush in this newsletter. It’s too involved to cover in a single newsletter, and we’ve covered it in great detail at the Baseline of Health® Foundation website. Everything you need to know is there including things like:
- Everything you need to buy
- Exactly how to make the morning flush drinks
- What to eat and juice during the flush
- Daily walkthroughs and hour by hour schedules
- What to do if you’re diabetic
- What to expect
- Live Q and A sessions
Check it out at: Baseline of Health Foundation Liver Detox and Blood Cleanse
And that concludes our section on the pancreas, liver, and gallbladder. When next we return to our exploration of the anatomy and physiology of the intestinal tract from a natural health perspective, we will pick things up with the small intestine.
According to the Abrahamic religions, Aaron (אַהֲרֹן ’Ahărōn) was a prophet, high priest, and the elder brother of Moses. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran.
The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother’s spokesman (“prophet”) to the Pharaoh (Exodus 7:1). Part of the Law given to Moses at Sinai granted Aaron the priesthood for himself and his male descendants, and he became the first High Priest of the Israelites.
Aaron died before the Israelites crossed the Jordan River. According to the Book of Numbers, he died and was buried on Mount Hor; Deuteronomy, however, places these events at Moserah. Aaron is also mentioned in the New Testament of the Bible (Luke, Acts, and Hebrews).
According to the Book of Exodus, Aaron first functioned as Moses’ assistant. Because Moses complained that he could not speak well, God appointed Aaron as Moses’ “prophet” (Exodus 4:10-17; 7:1). At the command of Moses, he let his rod turn into a snake. Then he stretched out his rod in order to bring on the first three plagues. After that, Moses tended to act and speak for himself.
During the journey in the wilderness, Aaron was not always prominent or active. At the battle with Amalek, he was chosen with Hur to support the hand of Moses that held the “rod of God”. When the revelation was given to Moses at Mount Sinai, he headed the elders of Israel who accompanied Moses on the way to the summit. While Joshua went with Moses to the top, however, Aaron and Hur remained below to look after the people. From here on in Exodus, Leviticus, and Numbers, Joshua appears in the role of Moses’ assistant while Aaron functions instead as the first high priest.
The books of Exodus, Leviticus, and Numbers maintain that Aaron received from God a monopoly over the priesthood for himself and his male descendants. The family of Aaron had the exclusive right and responsibility to make offerings on the altar to Yahweh. The rest of his tribe, the Levites, were given subordinate responsibilities within the sanctuary. Moses anointed and consecrated Aaron and his sons to the priesthood, and arrayed them in the robes of office. He also related to them God’s detailed instructions for performing their duties while the rest of the Israelites listened. Aaron and his successors as high priests were given control over the Urim and Thummim by which the will of God could be determined. God commissioned the Aaronide priests to distinguish the holy from the common and the clean from the unclean and to teach the divine laws (the Torah) to the Israelites. The priests were also commissioned to bless the people. When Aaron completed the altar offerings for the first time and, with Moses, “blessed the people: and the glory of the LORD appeared unto all the people: And there came a fire out from before the LORD, and consumed upon the altar the burnt offering and the fat [which] when all the people saw, they shouted, and fell on their faces”. In this way, the institution of the Aaronide priesthood was established.
In later books of the Hebrew Bible, Aaron and his kin are not mentioned very often except in literature dating to the Babylonian captivity and later. The books of Judges, Samuel, and Kings mention priests and Levites but do not mention the Aaronides in particular. The Book of Ezekiel, which devotes much attention to priestly matters, calls the priestly upper class the Zadokites after one of King David’s priests. It does reflect a two-tier priesthood with the Levites in a subordinate position. A two-tier hierarchy of Aaronides and Levites appears in Ezra, Nehemiah, and Chronicles. As a result, many historians think that Aaronide families did not control the priesthood in pre-exilic Israel. What is clear is that high priests claiming Aaronide descent dominated the Second Temple period. Most scholars think the Torah reached its final form early in this period, which may account for Aaron’s prominence in Exodus, Leviticus, and Numbers.
Aaron plays a leading role in several stories of conflicts during Israel’s wilderness wanderings. During the prolonged absence of Moses on Mount Sinai, the people provoked Aaron to make a golden calf. This incident nearly caused God to destroy the Israelites. Moses successfully intervened, but then led the loyal Levites in executing many of the culprits; a plague afflicted those who were left. Aaron, however, escaped punishment for his role in the affair, because of the intercession of Moses, according to Deuteronomy 9:20. Later retellings of this story almost always excuse Aaron for his role. For example, in rabbinic sources and in the Quran, Aaron was not the idol-maker and upon Moses’ return begged his pardon because he felt mortally threatened by the Israelites.
On the day of Aaron’s consecration, his oldest sons, Nadab and Abihu, were burned up by divine fire because they offered “strange” incense. Most interpreters think this story reflects a conflict between priestly families some time in Israel’s past. Others argue that the story simply shows what can happen if the priests do not follow God’s instructions given through Moses.
The Torah generally depicts the siblings, Moses, Aaron, and Miriam, as the leaders of Israel after the Exodus, a view also reflected in the biblical Book of Micah. Numbers 12, however, reports that on one occasion, Aaron and Miriam complained about Moses’ exclusive claim to be the LORD’s prophet. Their presumption was rebuffed by God, who affirmed Moses’ uniqueness as the one with whom the LORD spoke face to face. Miriam was punished with a skin disease (tzaraath) that turned her skin white. Aaron pleaded with Moses to intercede for her, and Miriam, after seven days’ quarantine, was healed. Aaron once again escaped any retribution.
According to Numbers 16–17, a Levite named Korah led many in challenging Aaron’s exclusive claim to the priesthood. When the rebels were punished by being swallowed up by the earth, Eleazar, the son of Aaron, was commissioned to take charge of the censers of the dead priests. And when a plague broke out among the people who had sympathized with the rebels, Aaron, at the command of Moses, took his censer and stood between the living and the dead till the plague abated (Numbers 16:36, 17:1).
To emphasize the validity of the Levites’ claim to the offerings and tithes of the Israelites, Moses collected a rod from the leaders of each tribe in Israel and laid the twelve rods overnight in the tent of meeting. The next morning, Aaron’s rod was found to have budded and blossomed and produced ripe almonds. The following chapter then details the distinction between Aaron’s family and the rest of the Levites: while all the Levites (and only Levites) were devoted to the care of the sanctuary, charge of its interior and the altar was committed to the Aaronites alone.
Aaron, like Moses, was not permitted to enter Canaan with the Israelites because the two brothers showed impatience at Meribah (Kadesh) in the last year of the desert pilgrimage when Moses brought water out of a rock to quench the people’s thirst. Although they had been commanded to speak to the rock, Moses struck it with the staff twice, which was construed as displaying a lack of deference to the LORD.
There are two accounts of the death of Aaron in the Torah. The Book of Numbers says that soon after the incident at Meribah, Aaron with his son Eleazar and Moses ascended Mount Hor. There Moses stripped Aaron of his priestly garments and transferred them to Eleazar. Aaron died on the summit of the mountain, and the people mourned for him for thirty days. The other account is found in Deuteronomy 10:6, where Aaron died at Moserah and was buried. There is a significant amount of travel between these two points, as the itinerary in Numbers 33:31–37 records seven stages between Moseroth (Mosera) and Mount Hor. Aaron died on the 1st of Av and was 123 years old at the time of his death.
Aaron married Elisheba, daughter of Amminadab and sister of Nahshon of the tribe of Judah. The sons of Aaron were Nadab, Abihu, Eleazar, and Itamar; only the latter two had progeny. A descendant of Aaron is an Aaronite, or Kohen, meaning Priest. Any non-Aaronic Levite—i.e., descended from Levi but not from Aaron—assisted the Levitical priests of the family of Aaron in the care of the tabernacle; later of the temple.
The Gospel of Luke records that both Zechariah and Elizabeth and therefore their son John the Baptist were descendants of Aaron.
In religious traditions
Jewish rabbinic literature
The older prophets and prophetical writers beheld in their priests the representatives of a religious form inferior to the prophetic truth; men without the spirit of God and lacking the will-power requisite to resist the multitude in its idolatrous proclivities. Thus Aaron, the first priest, ranks below Moses: he is his mouthpiece, and the executor of the will of God revealed through Moses, although it is pointed out that it is said fifteen times in the Torah that “the Lord spoke to Moses and Aaron.”
Under the influence of the priesthood that shaped the destinies of the nation under Persian rule, a different ideal of the priest was formed, according to Malachi 2:4-7, and the prevailing tendency was to place Aaron on a footing equal with Moses. “At times Aaron, and at other times Moses, is mentioned first in Scripture—this is to show that they were of equal rank,” says the Mekhilta of Rabbi Ishmael, which strongly implies this when introducing in its record of renowned men the glowing description of Aaron’s ministration.
In fulfillment of the promise of peaceful life, symbolized by the pouring of oil upon his head, Aaron’s death, as described in the Aggadah, was of wonderful tranquility. Accompanied by Moses, his brother, and by Eleazar, his son, Aaron went to the summit of Mount Hor, where the rock suddenly opened before him and a beautiful cave lit by a lamp presented itself to his view. Moses said, “Take off your priestly raiment and place it upon your son Eleazar! and then follow me.” Aaron did as commanded; and they entered the cave, where was prepared a bed around which angels stood. “Go lie down upon thy bed, my brother,” Moses continued; and Aaron obeyed without a murmur. Then his soul departed as if by a kiss from God. The cave closed behind Moses as he left; and he went down the hill with Eleazar, with garments rent, and crying: “Alas, Aaron, my brother! thou, the pillar of the supplication of Israel!” When the Israelites cried in bewilderment, “Where is Aaron?” angels were seen carrying Aaron’s bier through the air. A voice was then heard saying: “The law of truth was in his mouth, and iniquity was not found on his lips: he walked with me in righteousness, and brought many back from sin” He died on the first of Av. The pillar of cloud which proceeded in front of Israel’s camp disappeared at Aaron’s death. The seeming contradiction between Numbers 20:22 et seq. and Deuteronomy 10:6 is solved by the rabbis in the following manner: Aaron’s death on Mount Hor was marked by the defeat of the people in a war with the king of Arad, in consequence of which the Israelites fled, marching seven stations backward to Mosera, where they performed the rites of mourning for Aaron; wherefore it is said: “There [at Mosera] died, Aaron.”
The rabbis particularly praise the brotherly sentiment between Aaron and Moses. When Moses was appointed ruler and Aaron high priest, neither betrayed any jealousy; instead, they rejoiced in each other’s greatness. When Moses at first declined to go to Pharaoh, saying: “O my Lord, send, I pray, by the hand of him whom you will send”, he was unwilling to deprive Aaron of the high position the latter had held for so many years; but the Lord reassured him, saying: “Behold, when he sees you, he will be glad in his heart.” Indeed, Aaron was to find his reward, says Shimon bar Yochai; for that heart which had leaped with joy over his younger brother’s rise to glory greater than his was decorated with the Urim and Thummim, which were to “be upon Aaron’s heart when he goeth in before the Lord”. Moses and Aaron met in the gladness of heart, kissing each other as true brothers, and of them, it is written: “Behold how good and how pleasant [it is] for brethren to dwell together in unity!” Of them it is said: “Mercy and truth are met together; righteousness and peace have kissed [each other]”; for Moses stood for righteousness and Aaron for peace. Again, mercy was personified in Aaron, according to Deuteronomy 33:8, and truth in Moses, according to Numbers 12:7.
When Moses poured the oil of anointment upon the head of Aaron, Aaron modestly shrank back and said: “Who knows whether I have not cast some blemish upon this sacred oil so as to forfeit this high office.” Then the Shekhinah spoke the words: “Behold the precious ointment upon the head, that ran down upon the beard of Aaron, that even went down to the skirts of his garment, is as pure as the dew of Hermon.”
According to Tanhuma, Aaron’s activity as a prophet began earlier than that of Moses. Hillel held Aaron up as an example, saying: “Be of the disciples of Aaron, loving peace and pursuing peace; love your fellow creatures and draw them nigh unto the Law!” This is further illustrated by the tradition that Aaron was an ideal priest of the people, far more beloved for his kindly ways than was Moses. While Moses was stern and uncompromising, brooking no wrong, Aaron went about as a peacemaker, reconciling man and wife when he saw them estranged, or a man with his neighbor when they quarreled and won evil-doers back into the right way by his friendly intercourse. As a result, Aaron’s death was more intensely mourned than Moses’: when Aaron died the whole house of Israel wept, including the women, while Moses was bewailed by “the sons of Israel” only. Even in the making of the Golden Calf, the rabbis find extenuating circumstances for Aaron. His fortitude and silent submission to the will of God on the loss of his two sons are referred to as an excellent example to men of how to glorify God in the midst of great affliction. Especially significant are the words represented as being spoken by God after the princes of the Twelve Tribes had brought their dedication offerings into the newly reared Tabernacle: “Say to thy brother Aaron: Greater than the gifts of the princes is thy gift; for thou art called upon to kindle the light, and, while the sacrifices shall last only as long as the Temple lasts, thy light shall last forever.”
In the Eastern Orthodox and Maronite churches, Aaron is venerated as a saint whose feast day is shared with his brother Moses and celebrated on September 4. (Those churches that follow the traditional Julian calendar celebrate this day on September 17 of the modern Gregorian calendar). Aaron is also commemorated with other Old Testament saints on the Sunday of the Holy Fathers, the Sunday before Christmas.
Aaron is commemorated as one of the Holy Forefathers in the Calendar of Saints of the Armenian Apostolic Church on July 30. He is commemorated on July 1 in the modern Latin calendar and in the Syriac Calendar.
In The Church of Jesus Christ of Latter-day Saints, the Aaronic priesthood is the lesser order of priesthood under the higher order of the Melchizedek priesthood. Those ordained to this priesthood have the authority to act in God’s name in certain responsibilities in the church such as the administration of the sacrament and baptism.
In the Community of Christ, the Aaronic order of priesthood is regarded as an appendage to the Melchisedec order and consists of the priesthood offices of deacon, teacher, and priest. While differing in responsibilities, these offices, along with those of the Melchisidec order, are regarded as equal before God.
Aaron (هارون, Hārūn) is mentioned in the Quran as a prophet of God. The Quran praises Aaron repeatedly, calling him a “believing servant” as well as one who was “guided” and one of the “victors”. Aaron is important in Islam for his role in the events of the Exodus, in which, according to the Quran and Islamic belief, he preached with his elder brother, Moses to the Pharaoh of the Exodus.
Aaron’s significance in Islam, however, is not limited to his role as the helper of Moses. Islamic tradition also accords Aaron the role of a patriarch, as tradition records that the priestly descent came through Aaron’s lineage, which included the entire House of Amran.
In the Baháʼí Faith, although his father is described as both an apostle and a prophet, Aaron is merely described as a prophet. The Kitáb-i-Íqán describes Imran as his father.
Aaron appears paired with Moses frequently in Jewish and Christian art, especially in the illustrations of manuscripts and printed Bibles. He can usually be distinguished by his priestly vestments, especially his turban or miter and jeweled breastplate. He frequently holds a censer or, sometimes, his flowering rod. Aaron also appears in scenes depicting the wilderness Tabernacle and its altar, as already in the third-century frescos in the synagogue at Dura-Europos in Syria. An eleventh-century portable silver altar from Fulda, Germany, depicts Aaron with his censer and is located in the Musée de Cluny in Paris. This is also how he appears in the frontispieces of early printed Passover Haggadot and occasionally in church sculptures. Aaron has rarely been the subject of portraits, such as those by Anton Kern [1710–1747] and by Pier Francesco Mola [c. 1650]. Christian artists sometimes portray Aaron as a prophet holding a scroll, as in a twelfth-century sculpture from the Cathedral of Noyon in the Metropolitan Museum of Art, New York, and often in Eastern Orthodox icons. Illustrations of the Golden Calf story usually include him as well – most notably in Nicolas Poussin’s The Adoration of the Golden Calf (ca. 1633–34, National Gallery, London). Finally, some artists interested in validating later priesthoods have painted the ordination of Aaron and his sons (Leviticus 8). Harry Anderson’s realistic portrayal is often reproduced in the literature of the Latter-day Saints.
Adapted from Wikipedia, the free encyclopedia
Dialectics of Nature. Frederick Engels 1883
Source: Dialectics of Nature, pp. 243-256;
First Published: by Progress Publishers, 1934, 6th printing 1974;
Translated: from the German by Clemens Dutt;
Transcribed: by Andy Blunden, 2006.
Causa finalis – matter and its inherent motion. This matter is no abstraction. Even in the sun the different substances are dissociated and without distinction in their action. But in the gaseous sphere of the nebula all substances, although separately present, become merged in pure matter as such, acting only as matter, not according to their specific properties.
(Moreover already in Hegel the antithesis of causa efficiens and causa finalis is sublated in reciprocal action.)
“The conception of matter as original and pre-existent, and as naturally formless, is a very ancient one; it meets us even among the Greeks, at first in the mythical shape of chaos, which is supposed to represent the unformed substratum of the existing world.” (Hegel, Enzyklopädie, I, p. 258.)
We find this chaos again in Laplace, and approximately in the nebula which also has only the beginning of form. Differentiation comes afterwards.
Gravity as the most general determination of materiality is commonly accepted. That is to say, attraction is a necessary property of matter, but not repulsion. But attraction and repulsion are as inseparable as positive and negative, and hence from dialectics itself it can already be predicted that the true theory of matter must assign as important a place to repulsion as to attraction, and that a theory of matter based on mere attraction is false, inadequate, and one-sided. In fact sufficient phenomena occur that demonstrate this in advance. If only on account of light, the ether is not to be dispensed with. Is the ether of material nature? If it exists at all, it must be of material nature, it must come under the concept of matter. But it is not affected by gravity. The tail of a comet is granted to be of material nature. It shows a powerful repulsion. Heat in a gas produces repulsion, etc.
Attraction and gravitation. The whole theory of gravitation rests on saying that attraction is the essence of matter. This is necessarily false. Where there is attraction, it must be complemented by repulsion. Hence already Hegel was quite right in saying that the essence of matter is attraction and repulsion. And in fact we are more and more becoming forced to recognise that the dissipation of matter has a limit where attraction is transformed into repulsion, and conversely the condensation of the repelled matter has a limit where it becomes attraction.
The transformation of attraction into repulsion and vice versa is mystical in Hegel, but in substance he anticipated by it the scientific discovery that came later. Even in a gas there is repulsion of the molecules, still more so in more finely-divided matter, for instance in the tail of a comet, where it even operates with enormous force. Hegel shows his genius even in the fact that he derives attraction as something secondary from repulsion as something preceding it: a solar system is only formed by the gradual preponderance of attraction over the originally prevailing repulsion. – Expansion by heat=repulsion. The kinetic theory of gases.
The divisibility of matter. For science the question is in practice a matter of indifference. We know that in chemistry there is a definite limit to divisibility, beyond which bodies can no longer act chemically – the atom; and that several atoms are always in combination – the molecule. Ditto in physics we are driven to the acceptance of certain – for physical analysis – smallest particles, the arrangement of which determines the form and cohesion of bodies, their vibrations becoming evident as heat, etc. But whether the physical and chemical molecules are identical or different, we do not yet know.
Hegel very easily gets over this question of divisibility by saying that matter is both divisible and continuous, and at the same time neither of the two, which is no answer but is now almost proved (see sheet 5,3 below: Clausius).
Divisibility. The mammal is indivisible, the reptile can regrow a foot. – Ether waves, divisible and measurable to the infinitesimally small. – Every body divisible, in practice, within certain limits, e.g., in chemistry.
“Its essence (of motion) is to be the immediate unity of space and time ... to motion belong space and time; velocity, the quantum of motion, is space in relation to a definite time that has elapsed.” ([Hegel,] Naturphilosophie, S. 65.) “... Space and time are filled with matter.... Just as there is no motion without matter, so there is no matter without motion.” (p. 67.)
The indestructibility of motion in Descartes’ principle that the universe always contains the same quantity of motion. Natural scientists express this imperfectly as the “indestructibility of force.” The merely quantitative expression of Descartes is likewise inadequate: motion as such, as essential activity, the mode of existence of matter, is indestructible as the latter itself, this formulation includes the quantitative element. So here again the philosopher has been confirmed by the natural scientist after 200 years.
The indestructibility of motion. A pretty passage in Grove – p. 20 et seq.
Motion and equilibrium. Equilibrium is inseparable from motion. [In margin: “Equilibrium=predominance of attraction over repulsion.”] In the motion of the heavenly bodies there is motion in equilibrium and equilibrium in motion (relative). But all specifically relative motion, i.e., here all separate motion of individual bodies on one of the heavenly bodies in motion, is an effort to establish relative rest, equilibrium. The possibility of bodies being at relative rest, the possibility of temporary states of equilibrium, is the essential condition for the differentiation of matter and hence for life. On the sun there is no equilibrium of the various substances, only of the mass as a whole, or at any rate only a very restricted one, determined by considerable differences of density; on the surface there is eternal motion and unrest, dissociation. On the moon, equilibrium appears to prevail exclusively, without any relative motion – death (moon=negativity). On the earth motion has become differentiated into interchange of motion and equilibrium: the individual motion strives towards equilibrium, the motion as a whole once more destroys the individual equilibrium. The rock comes to rest, but weathering, the action of the ocean surf, of rivers and glacier ice continually destroy the equilibrium. Evaporation and rain, wind, heat, electric and magnetic phenomena offer the same spectacle. Finally, in the living organism we see continual motion of all the smallest particles as well as of the larger organs, resulting in the continual equilibrium of the total organism during the normal period of life, which yet always remains in motion, the living unity of motion and equilibrium.
All equilibrium is only relative and temporary.
(1) Motion of the heavenly bodies. Approximate equilibrium of attraction and repulsion in motion.
(2) – Motion on one heavenly body. Mass. In so far as this motion comes from pure mechanical causes, here also there is equilibrium. The masses are at rest on their foundation. On the moon this is apparently complete. Mechanical attraction has overcome mechanical repulsion. From the standpoint of pure mechanics, we do not know what has become of the repulsion, and pure mechanics just as little explains whence come the “forces,” by which nevertheless masses on the earth, for example, are set in motion against gravity. It takes the fact for granted. Here therefore there is simple communication of repelling, displacing motion from mass to mass, with equality of attraction and repulsion.
(3) The overwhelming majority of all terrestrial motions, however, are made up of the conversion of one form of motion into another – mechanical motion into heat, electricity, chemical motion – and of each form into any other; hence either the transformation of attraction into repulsion – mechanical motion into heat, electricity, chemical decomposition (the transformation is the conversion of the original lifting mechanical motion into heat, not of the falling motion, which is only the semblance) [ – or transformation of repulsion into attraction].
(4) All energy now active on the earth is transformed heat from the sun.
Mechanical motion. Among natural scientists motion is always as a matter of course taken to mean mechanical motion, change of place. This has been handed down from the pre-chemical eighteenth century and makes a clear conception of the processes much more difficult. Motion, as applied to matter, is change in general. From the same misunderstanding is derived also the craze to reduce everything to mechanical motion – even Grove is
“strongly inclined to believe that the other affections of matter ... are, and will ultimately be resolved into, modes of motion,” p. 16 –
which obliterates the specific character of the other forms of motion. This is not to say that each of the higher forms of motion is not always necessarily connected with some real mechanical (external or molecular) motion, just as the higher forms of motion simultaneously also produce other forms, and just as chemical action is not possible without change of temperature and electric changes, organic life without mechanical, molecular, chemical, thermal, electric, etc., changes. But the presence of these subsidiary forms does not exhaust the essence of the main form in each case. One day we shall certainly “reduce” thought experimentally to molecular and chemical motions in the brain; but does that exhaust the essence of thought?
Dialectics of natural science: Subject-matter – matter in motion. The different forms and varieties of matter itself can likewise only be known through motion, only in this are the properties of bodies exhibited; of a body that does not move there is nothing to be said. Hence the nature of bodies in motion results from the forms of motion.
1. The first, simplest form of motion is the mechanical form, pure change of place:
(a) Motion of a single body does not exist – [it can be spoken of] only in a relative sense – falling.
(b) The motion of separated bodies: trajectory, astronomy – apparent equilibrium – the end always contact.
(c) The motion of bodies in contact in relation to one another – pressure. Statics. Hydrostatics and gases. The lever and other forms of mechanics proper – which all in their simplest form of contact amount to friction or impact, which are different only in degree. But friction and impact, in fact contact, have also other consequences never pointed out here by natural scientists: they produce, according to circumstances, sound, heat, light, electricity, magnetism.
2. These different forces (with the exception of sound) – physics of heavenly bodies –
(a) pass into one another and mutually replace one another, and
(b) on a certain quantitative development of each force, different for each body, applied to the bodies, whether they are chemically compound or several chemically simple bodies, chemical changes take place, and we enter the realm of chemistry. Chemistry of heavenly bodies. Crystallography – part of chemistry.
3. Physics had to leave out of consideration the living organic body, or could do so; chemistry finds only in the investigation of organic compounds the real key to the true nature of the most important bodies, and, on the other hand, it synthesises bodies which only occur in organic nature. Here chemistry leads to organic life, and it has gone far enough to assure us that it alone will explain to us the dialectical transition to the organism.
4. The real transition, however, is in history – of the solar system, the earth; the real pre-condition for organic nature.
5. Organic nature.
Classification of the sciences, each of which analyses a single form of motion, or a series of forms of motion that belong together and pass into one another, is therefore the classification, the arrangement, of these forms of motion themselves according to their inherent sequence, and herein lies its importance.
At the end of the last (18th) century, after the French materialists, who were predominantly mechanical, the need became evident for an encyclopedic summing up of the entire natural science of the old Newton-Linnaeus school, and two men of the greatest genius undertook this, Saint-Simon (uncompleted) and Hegel. Today, when the new outlook on nature is complete in its basic features, the same need makes itself felt, and attempts are being made in this direction. But since the general evolutionary connection in nature has now been demonstrated, an external side by side arrangement is as inadequate as Hegel’s artificially constructed dialectical transitions. The transitions must make themselves, they must be natural. Just as one form of motion develops out of another, so their reflections, the various sciences, must arise necessarily out of one another.
How little Comte can have been the author of his encyclopaedic arrangement of the natural sciences, which he copied from Saint-Simon, is already evident from the fact that it only serves him for the purpose of arranging the means of instruction and course of instruction, and so leads to the crazy enseignement intégral, where one science is always exhausted before another is even broached, where a basically correct idea is pushed to a mathematical absurdity.
Hegel’s division (the original one) into mechanics, chemics, and organics, fully adequate for the time. Mechanics: the movement of masses. Chemics: molecular (for physics is also included in this and, indeed, both – physics as well as chemistry – belong to the same order) motion and atomic motion. Organics: the motion of bodies in which the two are inseparable. For the organism is certainly the higher unity which within itself unites mechanics, physics, and chemistry into a whole where the trinity can no longer be separated. In the organism, mechanical motion is effected directly by physical and chemical change, in the form of nutrition, respiration, secretion, etc., just as much as pure muscular movement.
Each group in turn is twofold. Mechanics: (1) celestial, (2) terrestrial.
Molecular motion: (1) physics, (2) chemistry.
Organics: (1) plant, (2) animal.
Physiography. After the transition from chemistry to life has been made, then in the first place it is necessary to analyse the conditions in which life has been produced and continues to exist, i.e., first of all geology, meteorology, and the rest. Then the various forms of life themselves, which indeed without this are incomprehensible.
Since the above article appeared (Vorwärts, Feb. 9, 1877), Kekulé (Die wissenschaftlichen Ziele und Leistungen der Chemie) has defined mechanics, physics, and chemistry in a quite similar way:
“If this idea of the nature of matter is made the basis, one could define chemistry as the science of atoms and physics as the science of molecules, and then it would be natural to separate that part of modern physics which deals with masses as a special science, reserving for it the name of mechanics. Thus mechanics appears as the basic science of physics and chemistry, in so far as in certain aspects and especially in certain calculations both of these have to treat their molecules or atoms as masses.”
It will be seen that this formulation differs from that in the text and in the previous note only by being rather less definite. But when an English journal (Nature) put the above statement of Kekulé in the form that mechanics is the statics and dynamics of masses, physics the statics and dynamics of molecules, and chemistry the statics and dynamics of atoms, then it seems to me that this unconditional reduction of even chemical processes to merely mechanical ones unduly restricts the field, at least of chemistry. And yet it is so much the fashion that, for instance, Haeckel continually uses “mechanical” and “monistic” as having the same meaning, and in his opinion
“modern physiology ... in its field allows only of the operation of physico-chemical – or in the wider sense, mechanical – forces.” (Perigenesis.)
If I term physics the mechanics of molecules, chemistry the physics of atoms, and furthermore biology the chemistry of proteins, I wish thereby to express the passing of each of these sciences into another, hence both the connection, the continuity, and the distinction, the discrete separation, between the two of them. To go further and to define chemistry as likewise a kind of mechanics seems to me inadmissible. Mechanics – in the wider or narrower sense knows only quantities, it calculates with velocities and masses, and at most with volumes. Where the quality of bodies comes across its path, as in hydrostatics and aerostatics, it cannot achieve anything without going into molecular states and molecular motions, it is itself only an auxiliary science, the prerequisite for physics. In physics, however, and still more in chemistry, not only does continual qualitative change take place in consequence of quantitative change, the transformation of quantity into quality, but there are also many qualitative changes to be taken into account whose dependence on quantitative change is by no means proven. That the present tendency of science goes in this direction can be readily granted, but does not prove that this direction is the exclusively correct one, that the pursuit of this tendency will exhaust the whole of physics and chemistry. All motion includes mechanical motion, change of place of the largest or smallest portions of matter, and the first task of science, but only the first, is to obtain knowledge of this motion. But this mechanical motion does not exhaust motion as a whole. Motion is not merely change of place, in fields higher than mechanics it is also change of quality. The discovery that heat is a molecular motion was epoch-making. But if I have nothing more to say of heat than that it is a certain displacement of molecules, I should best be silent. Chemistry seems to be well on the way to explaining a number of chemical and physical properties of elements from the ratio of the atomic volumes to the atomic weights. But no chemist would assert that all the properties of an element are exhaustively expressed by its position in the Lothar Meyer curve, that it will ever be possible by this alone to explain, for instance, the peculiar constitution of carbon that makes it the essential bearer of organic life, or the necessity for phosphorus in the brain. Yet the “mechanical” conception amounts to nothing else. It explains all change from change of place, all qualitative differences from quantitative ones, and overlooks that the relation of quality and quantity is reciprocal, that quality can become transformed into quantity just as much as quantity into quality, that, in fact, reciprocal action takes place. If all differences and changes of quality are to be reduced to quantitative differences and changes, to mechanical displacement, then we inevitably arrive at the proposition that all matter consists of identical smallest particles, and that all qualitative differences of the chemical elements of matter are caused by quantitative differences in number and by the spatial grouping of those smallest particles to form atoms. But we have not got so far yet.
It is our modern natural scientists’ lack of acquaintance with any other philosophy than the most mediocre vulgar philosophy, like that now rampant in the German universities, which allows them to use expressions like “mechanical” in this way, without taking into account, or even suspecting, the consequences with which they thereby necessarily burden themselves. The theory of the absolute qualitative identity of matter has its supporters – empirically it is equally impossible to refute it or to prove it. But if one asks these people who want to explain everything “mechanically” whether they are conscious of this consequence and accept the identity of matter, what a variety of answers will be heard!
The most comical part about it is that to make “materialist” equivalent to “mechanical” derives from Hegel, who wanted to throw contempt on materialism by the addition “mechanical.” Now the materialism criticised by Hegel – the French materialism of the eighteenth century was in fact exclusively mechanical, and indeed for the very natural reason that at that time physics, chemistry, and biology were still in their infancy, and were very far from being able to offer the basis for a general outlook on nature. Similarly Haeckel takes from Hegel the translation: causae efficientes = “mechanically acting causes,” and causae finales = “purposively acting causes”; where Hegel, therefore, puts “mechanical” as equivalent to blindly acting, unconsciously acting, and not as equivalent to mechanical in Haeckel’s sense of the word. But this whole antithesis is for Hegel himself so much a superseded standpoint that he does not even mention it in either of his two expositions of causality in his Logic – but only in his History of Philosophy, in the place where it comes historically (hence a sheer misunderstanding on Haeckel’s part due to superficiality!) and quite incidentally in dealing with teleology (Logik, III, ii, 3) where he mentions it as the form in which the old metaphysics conceived the antithesis of mechanism and teleology, but otherwise treating it as a long superseded standpoint. Hence Haeckel copied incorrectly in his joy at finding a confirmation of his “mechanical” conception and so arrived at the beautiful result that if a particular change is produced in an animal or plant by natural selection it has been effected by a causa efficiens, but if the same change arises by artificial selection then it has been effected by a causa finalis! The breeder a causa finalis! Of course a dialectician of Hegel’s calibre could not be caught in the vicious circle of the narrow antithesis of causa efficiens and causa finalis. And for the modern standpoint the whole hopeless rubbish about this antithesis is put an end to because we know from experience and from theory that both matter and its mode of existence, motion, are uncreatable and are, therefore, their own final cause; while to give the name effective causes to the individual causes which momentarily and locally become isolated in the mutual interaction of the motion of the universe, or which are isolated by our reflecting mind, adds absolutely no new determination but only a confusing element. A cause that is not effective is no cause.
N. B. Matter as such is a pure creation of thought and an abstraction. We leave out of account the qualitativative differences of things in lumping them together as corporeally existing things under the concept matter. Hence matter as such, as distinct from definite existing pieces of matter, is not anything sensuously existing. When natural science directs its efforts to seeking out uniform matter as such, to reducing qualitative differences to merely quantitative differences in combining identical smallest particles, it is doing the same thing as demanding to see fruit as such instead of cherries, pears, apples, or the mammal as such instead of cats, dogs, sheep, etc., gas as such, metal, stone, chemical compound as such, motion as such. The Darwinian theory demands such a primordial mammal, Haeckel’s pro-mammal, but, at the same time, it has to admit that if this pro-mammal contained within itself in germ all future and existing mammals, it was in reality lower in rank than all existing mammals and primitively crude, hence more transitory than any of them. As Hegel has already shown (Enzyklopädie, I, S. 199), this view, this “one-sided mathematical view,” according to which matter must be looked upon as having only quantitative determination, but, qualitatively, as identical originally, is “no other standpoint than that” of the French materialism of the eighteenth century. It is even a retreat to Pythagoras, who regarded number, quantitative determination as the essence of things.
In the first place, Kekulé. Then: the systematising of natural science, which is now becoming more and more necessary, cannot be found in any other way than in the inter-connections of phenomena themselves. Thus the mechanical motion of small masses on any heavenly body ends in the contact of two bodies, which has two forms, differing only in degree, viz., friction and impact. So we investigate first of all the mechanical effect of friction and impact. But we find that the effect is not thereby exhausted: friction produces heat, light, and electricity, impact produces heat and light if not electricity also – hence conversion of motion of masses into molecular motion. We enter the realm of molecular motion, physics, and investigate further. But here too we find that molecular motion does not represent the conclusion of the investigation. Electricity passes into and arises from chemical transformation. Heat and light, ditto. Molecular motion becomes transformed into motion of atoms – chemistry. The investigation of chemical processes is confronted by the organic world as a field for research, that is to say, a world in which chemical processes take place, although under different conditions, according to the same laws as in the inorganic world, for the explanation of which chemistry suffices. In the organic world, on the other hand, all chemical investigations lead back in the last resort to a body – protein – which, while being the result of ordinary chemical processes, is distinguished from all others by being a self-acting, permanent chemical process. If chemistry succeeds in preparing this protein, in the specific form in which if obviously arose, that of a so-called protoplasm, a specificity, or rather absence of specificity, such that it contains potentially within itself all other forms of protein (though it is not necessary to assume that there is only one kind of protoplasm), then the dialectical transition will have been proved in reality, hence completely proved. Until then, it remains a matter of thought, alias of hypothesis. When chemistry produces protein, the chemical process will reach out beyond itself, as in the case of the mechanical process above, that is, it will come into a more comprehensive realm, that of the organism. Physiology is, of course, the physics and especially the chemistry of the living body, but with that it. ceases to be specially chemistry: on the one hand its domain becomes restricted but, on the other hand, inside this domain it becomes raised to a higher power.
193. Hegel, Encyclopaedia of the Philosophical Sciences, § 128, Addendum.
194. Op. cit., §98, Addendum 1: “...attraction, is as essential a part of matter as repulsion.”
195. See Hegel, Science of Logic, Book 1, Section 11, Chapter 1, Observation on Kant’s antinomy of the indivisibility and infinite divisibility of time, space and matter.
196. Hegel, Naturphilosophie (Philosophy of Nature), § 261, Addendum.
197. The idea of the preservation of the quantity of motion was expressed by Descartes in his Le Traite de la Lumiere (Treatise on Light), first part of the work Le Monde (The World), written in 1630-33 and published posthumously in 1664, and in his letter to Debeaune dated April 30, 1639. This proposition is given in its most complete form in R. Des-Cartes, Principia Philosophiae (Principles of Philosophy), Amstelodami, 1644, Pars secunda, XXXVI.
198. Grove, The Correlation of Physical Forces (see Note 16). On pp. 20-29 Grove speaks of the “indestructibility of force” when mechanical motion is converted into a “state of tension” and into heat.
199. This note was written on the same sheet as “Outline of Part of the Plan” and is a conspectus of ideas developed by Engels in the chapter “Basic Forms of Motion” (see this edition, pp. 19 and 69-86).
200. Grove, The Correlation of Physical Forces (see Note 16). By “affections of matter” Grove means “heat, light, electricity, magnetism, chemical affinity, and motion” (p. 15) and by “motion” he means mechanical motion or displacement.
201. This outline was written on the first sheet of the first folder of Dialectics of Nature. As regards its contents, it coincides with Engels’s letter to Marx dated May 30. 1873. This letter begins with the words: “This morning in bed the following dialectical ideas about natural science came into my head.” The exposition of these ideas is more definite in the letter than in the present outline. It may be inferred that the outline was written before the letter, on the same day, May 30, 1873. Not counting the fragment on Buchner (see this edition, pp. 202-07), which was written shortly before this outline, all the other chapters and fragments of Dialectics of Nature were written later, i.e., after May 30, 1873.
202. A. Comte set out this system of classification of the sciences in his main work A Course of Positive Philosophy, first published in Paris in 1830-42. The question of classification of the sciences is specially dealt with in the second lecture, in Volume I of the book, headed “An Exposition of the Plan of This Course, or General Considerations Concerning the Hierarchy of the Positive Sciences.” See A. Comte, Cours de philosophic positive, t. I, Paris 1830.
203. Engels is referring to the third part of Hegel’s Science of Logic, first published in 1816. In his Philosophy of Nature, Hegel denotes these three main divisions of natural science by the terms “mechanics,” “physics” and “organics.”
204. This note is one of those three larger notes (Noten) which Engels put in the second folder of materials for Dialectics of Nature (the smaller notes were put in the first and fourth folders). Two of these notes – “On the Prototypes of the Mathematical Infinite in the Real World” and “On the ‘Mechanical’ Conception of Nature” are Notes or Addenda to [Anti]-Dühring, in which Engels elaborates some very important ideas that were only outlined, or stated in brief, in various parts of [Anti]-Dühring. The third note, “Nageli’s Inability to Cognise the Infinite,” has nothing to do with [Anti]-Dühring. The first two notes were in all probability written in 1885. In any case, they cannot date from earlier than mid-April 1884, when Engels decided to prepare for the press a second, enlarged edition of (Anti)- Dühring, or later than late September 1885, when Engels finished and sent to the publisher his Preface to the second edition of the book. Engels’s letters to Bernstein and Kautsky in 1884 and to Schluter in 1885 indicate that he planned to write a series of Addenda and Appendices of a natural-scientific character to various passages in [Anti]-Dühring, with a view to giving them at the end of the second edition of the book. But owing to being extremely busy with other matters (above all with his work on the second and third volumes of Marx’s Capital); Engels was prevented from carrying out his intention. He only managed to make a rough outline of two “notes” or “addenda,” to pp. 17-18 and p. 46 of the text of the first edition of [Anti] -Dühring. The present notice is the second of these “notes.”
The heading “On the ‘Mechanical’ Conception of Nature” was given by Engels in his list of contents of the second folder of Dialectics of Nature. The sub-heading “Note 2 to p. 46”: “the various forms of motion and the sciences dealing with them” occurs at the beginning of this notice.
205. A. Kekulé, Die wissenschaftlichen Ziele und Leistungen der Chemie, Bonn, 1878, S. 12.
206. This refers to an item in Nature No. 420, November 15, 1877, summarising A. Kekulé’s speech on October 18, 1877, when he took the office of rector at the University of Bonn. In 1878 the speech was published in pamphlet form, under the title The Scientific Aims and Achievements of Chemistry.
207. E. Haeckel, Die Perigenesis der Plastidule oder die Wellenzeugung der Lebensteuchen. Ein Versuche zur mechanischen Erkldrung der elementaren Entwickelungs-Vorgange, Berlin, 1876, S. 13.
208. The Lothar Meyer curve shows the relation between the atomic weights of the elements and their atomic volumes. It was constructed by L. Meyer who dealt with it in his article “Die Natur der chemischen Elemente als Funktion ihrer Atomgewichte,” which appeared in 1870 in the journal Annalen der Chemie und Pharmacie. The discovery of the correlation between the atomic weights of the elements and their physical and chemical properties was made by the great Russian scientist D. I. Mendeleyev, who was the first to formulate the periodic law of the chemical elements in his article “The Correlation of the Properties of the Elements and Their Atomic Weights,” published in March 1869, i.e., a year prior to L. Meyer’s article, in the Journal of Russian Chemical Society. Meyer, too, was close to establishing the periodic law when he learned about Mendeleyev’s discovery. The curve made by him graphically illustrated the law discovered by Mendeleyev, except that it expressed the law in external and, unlike Mendeleyev, one sided terms. Mendeleyev went much farther than Meyer in his conclusions. On the basis of the periodic law discovered by him, Mendeleyev predicted the existence and specific properties of chemical elements still unknown at that time; whereas L. Meyer in his subsequent works revealed a lack of understanding of the nature of the periodic law.
209. See Note 183.
210. E. Haeckel, Naturliche Schopfungsgeschichte, 4. Aufl., Berlin, 1873, S. 538, 543, 588; Anthropogenie, Leipzig, 1874, S. 460, 465, 492.
211. Hegel, Encyclopaedia of the Philosophical Sciences, § 99, Addendum.
212. This fragment was written on a separate sheet marked Noten (Notes). It may be an original outline of the Second Note to [Anti]-Dühring headed “On the ‘Mechanical’ Conception of Nature.” | <urn:uuid:f14f09fc-d139-47bf-8756-ff7cc893a8b3> | CC-MAIN-2022-33 | https://connexions.org/CxArchive/MIA/marx/works/1883/don/ch07d.htm | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00297.warc.gz | en | 0.946019 | 7,500 | 3.03125 | 3 |
CORBIS, MIKE KEMP/RUBBERBALL
What can happen in a femtosecond? One millionth of a billionth (10-15) of a second—it’s a time scale that’s almost impossible to grasp. In 1 femtosecond light travels a distance much less than the thickness of a human hair, even less than the diameter of a bacterium. Give it an entire second, and light travels from the Earth to the Moon. Yet much of the chemistry vital for life punches a femtosecond time clock. The making and breaking of atomic bonds during every reaction in chemistry and biology passes through a transition state, which can be thought of as the moment at which the bond decides if it will break or reform. Movement of the surrounding atomic environment over a few femtoseconds has an equal probability of nudging atoms to form new products or of returning them to their original configuration. Because it is...
In the last decade, using tools that allow us to sketch the shape that atoms adopt in an enzyme’s active site during this minuscule span of time, we have begun to understand how to freeze an enzymatic reaction. By replicating a shape that exists for the femtosecond lifetime of the transition state, and creating a chemical copy of that shape, we can completely halt the action of the enzyme. These synthetic mimics are called “transition-state analogs” and are as powerful as any enzyme inhibitors ever created. Because they are exquisitely specific and required in extremely small doses, transition-state analogs provide an approach that could revolutionize how drugs are developed.
Finding ways to capture the behavior of atoms during an infinitesimally small and difficult-to-observe time span was no easy task.
Although designing transition-state analogs for enzymes is a relatively recent development, the idea has been around for quite some time. Two-time Nobel Prize–winning chemist Linus Pauling was an early proponent of the idea that enzymes recognize and bind tightly to their substrates during the transition state.[1. L. Pauling, “Chemical achievement and hope for the future,” American Scientist, 36:51-58, 1948.] Born in 1901, Pauling studied quantum mechanics with Niels Bohr in Copenhagen and Erwin Schrödinger in Zurich before returning to the States to apply what he had learned to chemistry. He saw the problem as similar to an antibody binding to its antigen and proposed that enzymes were designed to recognize the activated state of reactants, the transition state, with precision. The tight binding would sequester the activated reactants from solution and increase the reaction rate.
In 1972, Richard Wolfenden gave mathematical form to Pauling’s proposal by solving simple equilibrium equations between enzymes and their transition states.[2. R. Wolfenden, “Transition state analog inhibitors and enzyme catalysis,” Ann Rev Biophys Bioeng, 5:271-306, 1976.] Wolfenden’s equations predicted that the conformational changes in the shape of the enzyme’s active site and its substrate at the moment when the transition state forms increased binding strength by a factor of 1010 to 1020 .
This suggested a powerful idea. If scientists could produce chemically stable analogs of actual transition-state reactants, they would bind to the enzyme just as tightly during that brief femtosecond window of the enzymatic reaction and block its action. These proposals were supported by observations that natural-product antibiotics, which have features similar to transition-state analogs, were unusually powerful enzyme inhibitors. But these insightful hypotheses were made before the existence of experimental and computational approaches for observing and predicting the structures of enzymatic transition states. Indeed, finding enzymatic transition-state analogs is challenging, because a typical enzyme-reactant or enzyme transition-state complex often has more than 10,000 atoms whose positions would have to be determined.
Today, those insights have helped us shape the field of transition-state analog chemistry, and are leading to new approaches in drug development. But finding ways to capture the behavior of atoms during an infinitesimally small and difficult-to-observe time span was no easy task.
Catching a molecular moment
As more computational power became available in the 1980s, it was possible to imagine that enzymatic transition-state structure could be solved by a combination of experimental and computational quantum chemistry. To get at the problem, we combined a method often used in physical organic chemistry with computational chemistry that originated from the Manhattan Project.
At the end of the Manhattan Project, Jacob Bigeleisen, one of the program’s alums, released his theories, which were later reduced to computer algorithms and made available to the academic community through the Quantum Chemistry Exchange Program, which provided free access to early computational chemistry. When we started our work using this resource, the lab’s original goal was purely academic: to see if the transition-state structure of AMP nucleosidase, an enzyme involved in purine metabolism in E. coli, would be the same if the reaction was catalyzed by an acid rather than the enzyme. The transition states differed. Although the study did not have great significance biologically, it forced us to develop tools that have become essential to solving the transition-state structure of purine nucleoside phosphorylase (PNP), a known target for T-cell cancers. But figuring out the method took some doing.
Only two features, albeit complex ones, are needed to describe molecular interactions in biology: geometric shape and electrostatic charge. We applied these guiding principles to work out the atomic structures of transition states at the catalytic sites of particular enzymes. The shape of the electron cloud surrounding the atoms, known as the van der Waals surface, predicts how a molecule may occupy space and interact with partner molecules. Electron distribution at the van der Waals surfaces determines whether atomic neighbors will be attracted, like opposite poles of a magnet, or repulsed, like similar poles.
Two approaches are needed to solve these features of enzymatic transition states: the measurement of kinetic isotope effects and computational quantum chemistry. Measuring kinetic isotope effects gives information about both the geometry and electrostatic charge of the transition state. In these experiments we replace the common atoms of nature—hydrogen, carbon and nitrogen—with their heavy-isotope counterparts. For example, deuterium (mass 2) replaces hydrogen (mass 1) in the reactants for the enzyme of interest. Each atomic replacement in the kinetic isotope effect experiment alters the femtosecond bond vibrations of the reactant in the transition state. Thus, by replacing atoms isotope by isotope and then monitoring how those changes affect reaction times, we can collect enough information to deduce the atomic structure of the transition state.[3. V. L. Schramm, “Enzymatic transition states, transition-state analogs, dynamics, thermodynamics, and lifetimes,” Annu Rev Biochem, 80:703-32, 2011.]
Although it was possible to determine kinetic isotope effects as early as the 1960s, it wasn’t until the 1980s that computer-based quantum-chemical approaches became refined enough to begin to interpret the results. Computational quantum chemistry is used to search through thousands of possible transition states to find the one that matches the experimental observations from kinetic isotope studies. This structure is then analyzed using Schrödinger’s equation to obtain the wavefunction, which contains information about both geometry and electrostatic charge, and in fact is the most complete description that can be given of a transition state. This information provides enough of a picture, a virtual blueprint, to allow us to design analogs that mimic its geometry and electrostatic features.
Transition-state analogs are recognized by the enzyme, and the forces that would be applied to bond-breaking in the normal reaction are instead converted into binding energy; those forces are considerable, binding the analogs millions of times more tightly than the normal reactants. The transition-state analog for PNP binds 4,300,000 times tighter to its parent enzyme than does the normal reactant. This means that only tiny amounts of a drug that mimics the normal reactant’s transition-state geometry need be delivered to the target enzyme; and by binding tightly to the catalytic site, such an analog acts to inhibit the enzyme by preventing the normal reactants from binding. Transition-state analogs are now beginning to show results in early-stage trials as therapies for a wide range of unrelated diseases, demonstrating their promise as a better means of hitting a therapeutic target.
A future for transition state drug design?
This new approach to drug design differs from more time-honored methods such as synthesizing a chemical entity that is tailored to fit its target—an approach called structure-based drug design—or chemical-library screening and natural-products chemistry, the random search through millions of chemical compounds in the hopes of chancing on one that inhibits the target enzyme. Each of these methods is universal, that is, they can be used against most targets of interest to the pharmaceutical industry, including receptor molecules, ion channels, and enzymes. Transition-state analysis, by contrast, is limited to enzyme targets. Although a small slice of the pharmaceutical-target pie, enzymes are ubiquitous and essential to life, and thus an area ripe for new drug development.
Even though drug design from enzymatic transition-state analysis is in its infancy, it has already produced a family of potential drugs for an array of biological targets. With many already in clinical trials, we will soon learn the details of their biological impact.
A plethora of applications
Chemical analogs for targeting transition state offer a potent method for inhibiting enzymes involved in disease. Today, transition-state analog inhibitors designed for specific targets are beginning to wend their way into preclinical and clinical trials. These drugs have the potential to produce fewer side effects than other enzyme inhibitors, because their binding is so specific and strong.
We designed an early proof-of-concept molecule, immucillin-H, as one of the first transition-state analogs. The original goal was to design a mimic that would bind to the transition state of bovine purine nucleoside phosphorylase (PNP). Blocking PNP is known to kill rapidly dividing T cells found in T-cell leukemia patients and in patients with autoimmune disorders where the T cells are attacking normal host tissues. Immucillin-H was chemically synthesized for us by Peter Tyler at Industrial Research Ltd. in New Zealand. It proved to be a powerful transition-state analog inhibitor of human PNP, and is now in worldwide clinical trials for several different types of leukemia under the name forodesine.[4. K. Balakrishnan et al., “Phase 2 and pharmacodynamic study of oral forodesine in patients with advanced, fludarabine-treated chronic lymphocytic leukemia,” Blood, 116:886-92, 2010.]
Because the first-generation transition-state analog was difficult to synthesize, and was designed for the bovine enzyme, we created a second-generation PNP inhibitor designed specifically to match the transition state of human PNP. It is called BCX4208 and is in clinical trials for treatment of gout. Gout is caused by an excess of uric acid in the blood, leading to painful and destructive crystal formation in joints. More than 15 million people suffer from gout in North America, Europe, and Japan, and the disease is not easily controlled by current drugs. In humans, uric acid formation requires the action of PNP; hence, blocking PNP with BCX4208 may eventually lead to a unique and effective approach to gout management by using the analog at low levels such that normal T cells are not depleted.[5. S. Bantia et al., “Potent orally bioavailable purine nucleoside phosphorylase inhibitor BCX-4208 induces apoptosis in B- and T-lymphocytes—a novel treatment approach for autoimmune diseases, organ transplantation and hematologic malignancies,” Int Immunopharmacol, 10:784-90, 2010.]
An encouraging feature of these powerful PNP inhibitors is their tight binding to the PNP target. They bind so tightly that after a few hours most of the unbound drug is gone from the blood, while the inhibitor stays bound to the target enzyme for the lifetime of the cell.[6. A. Lewandowicz et al., “Achieving the ultimate physiological goal in transition state analogue inhibitors for purine nucleoside phosphorylase,” J Biol Chem, 278:31465-68, 2003.] This is important because most drugs require careful dosing to maintain sufficient amounts in the blood to allow for constant interaction with the target. This requirement for excess circulating drug exposes all cells and increases the risk that off-target interactions will cause side effects. With specific and long-lasting binding to the target, and no drug in the circulation, side effects would in theory be minimized.
PNP transition-state analogs may also find application in combating malaria. Over the course of evolution, Plasmodium falciparum, the most lethal malaria parasite, lost the ability to make its own purines from amino acids and sugars because those of the host are so readily available, and it now relies on host purines, specifically hypoxanthine, to make its RNA and DNA. Both in humans and in P. falciparum, the only way to make hypoxanthine from purine nucleosides is by the catalytic action of PNPs. The challenging aspect of creating an inhibiting compound for malaria was to find a molecule that mimics the transition-state structures of both human and parasitic PNPs.
To overcome this problem, we compared the transition states of human and P. falciparum PNPs. We discovered that a single transition-state inhibitor called DADMe-immucillin-G was similar to both human and parasite PNP transition states and might act as a powerful inhibitor of both. DADMe-immucillin-G was synthesized by Gary Evans at Industrial Research Ltd., and blocked both the human and parasite enzymes at picomolar concentrations. As P. falciparum can only infect primates, we infected Aotus monkeys with the parasite and then treated them with oral doses of DADMe-immucillin-G twice a day for 7 days. The primates’ blood was cleared of parasites between the 4th and 7th days, and no parasites were detected for up to 9 days post-treatment. Although infections returned after the treatment ended, meaning that not every parasite had been killed, slower parasitic growth was observed. As in other applications of PNP inhibitors, no toxicity was detected. P. falciparum infections in the Aotus test are more virulent than in humans; thus there is hope that similar treatment might be even more effective in humans, but clinical trials are needed to test this hypothesis.[7. M. B. Cassera et al., “Plasmodium falciparum parasites are killed by a transition state analogue of purine nucleoside phosphorylase in a primate animal model,” PLoS One, 6:e26916, 2011.]
Targeting pathways important in rapidly dividing cancer cells is a time-honored approach to designing anticancer agents. But most agents also damage normal cells, causing the well-known side effects of cancer chemotherapy. We designed and synthesized transition-state analogs to disrupt the polyamine pathway, which provides essential counterions necessary for quick DNA strand separation in a rapidly dividing cancer cell. The transition-state analogs block human methylthioadenosine phosphorylase (MTAP), resulting in the cellular accumulation of 5’-methylthioadenosine, a metabolite that inhibits polyamine biosynthesis and halts cancer cell division. MTAP inhibitors have shown remarkable efficacy in blocking the growth of or eradicating human lung cancers and head and neck cancers grown in immune-deficient mice.[8. I. Basu et al., “A transition state analogue of 5’-methylthioadenosine phosphorylase induces apoptosis in head and neck cancers,” J Biol Chem, 282:21477-86, 2007.],[9. I. Basu et al., “Growth and metastases of human lung cancer are inhibited in mouse xenografts by a transition state analogue of 5’-methylthioadenosine phosphorylase,” J Biol Chem, 286:4902-11, 2011.] The MTAP inhibitors are administered orally, can be given once a day, and mice that are fed large quantities of the drug remain healthy as judged by weight, blood chemistry, and tissue histology. These unique properties of the MTAP transition-state analogs in mouse models offer the intriguing possibility of controlling certain human cancers with a once-a-day pill that has few side effects. Although the vast majority of drugs that succeed in the mouse model fail in humans, we are encouraged by the specificity and the low toxicity we observed.
Bacterial antibiotic resistance remains another of the world’s major health challenges. The problem stems from the fact that antibiotics kill all bacteria save those rare individuals that develop resistance. Under continued antibiotic selection pressure, the resistant individuals give rise to a new population. So it has been since the advent of penicillin and with all subsequent bacterial antibiotics. But there are better targets that could control the detrimental aspects of infection without killing bacteria and thereby introducing selection pressure. Children are immunized with the DPT (diphtheria, pertussis, tetanus) vaccine to prevent these bacterial diseases, but the combined vaccine does not necessarily create immune reactions against the bacteria. Rather, the D and T of DPT represent inactivated diphtheria and tetanus toxoids produced by the infecting bacteria. Immunization with toxoids creates antibodies against the toxins, not the bacteria. But it is the toxins that cause damage to human tissue.
Many bacterial toxins are produced under the genetic control of quorum-sensing molecules that detect when the numbers of bacteria grow to a critical threshold—a quorum. Our group hypothesized that blocking the quorum-sensing pathway would cut the telegraph wires and prevent transmission of the “make toxin!” attack message. Without production of pathogenicity factors like toxins, biofilms, and human-cell attachment factors, otherwise harmful bacteria would be disarmed but would continue to live, thereby removing the selection pressure for the development of resistant strains.
We targeted a bacterial enzyme called MTAN that regulates the production of quorum-sensing molecules in gram-negative bacteria such as Pseudomonas, Vibrio, and Escherichia species. Our transition-state analog designed to inhibit bacterial MTAN has led to the synthesis of some of the most powerful inhibitors ever described for enzymes. The best of these bind to MTAN 91 million times tighter than the normal enzyme reactants. At nanomolar concentrations, MTAN inhibitors block quorum-sensing molecules in a virulent strain of Vibrio cholera, the causative agent of cholera, without appearing to cause resistance. In fact, when Vibrio bacteria are grown for 26 generations (a 226 increase in cell population) in the presence of a large excess of MTAN inhibitors, subsequent bacterial generations are equally sensitive to inhibition of the quorum-sensing pathway.[10. J. A. Gutierrez et al., “Transition state analogs of 5’-methylthioadenosine nucleosidase disrupt quorum sensing,” Nat Chem Biol, 5:251-57, 2009.] Now it remains to be seen whether transition-state analogs can translate into new and novel antibiotics. A generation of antibiotics that could prevent disease without causing resistance would indeed be a boon to medical treatment.
Vern L. Schramm is Professor and Chair of Biochemistry and the Ruth Mearns Chair in Biochemistry at Albert Einstein College of Medicine of Yeshiva University, located in the Bronx, New York. | <urn:uuid:c58530a5-bcb2-4502-b2c4-25a61c6a9f9a> | CC-MAIN-2022-33 | https://www.the-scientist.com/features/freezing-time-41066 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00097.warc.gz | en | 0.937587 | 4,233 | 3.90625 | 4 |
The following describes the data and analysis of the Climate Indicators.
The Climate Indicators are split into two main parts: one on single indicators and one on thematic areas covering several indicators. To read more about the Climate Indicators, see the 'About the Climate Indicators' page.
Many of the datasets used here are freely available, for example from the Climate Data Store (CDS); however, they may come with a license that restricts their use.
- Temperature indicator
ERA5 1979–2021: Data | Documentation
ERA5 (preliminary) 1950–1978: Data | Documentation
JRA-55: Data | Documentation
GISTEMPv4: Data | Documentation
HadCRUT5: Data | Documentation
NOAAGlobalTempv5: Data | Documentation
Berkeley Earth: Data | Documentation, Appendix
The temperature indicator is based on the latest versions of six datasets available in January 2022. One dataset, ERA5, is produced by C3S (ECMWF) and available in the Climate Data Store. The other five datasets are produced by and available from other institutions: JRA-55 produced by the Japan Meteorological Agency (JMA), GISTEMPv4 produced by the US National Aeronautics and Space Administration (NASA), HadCRUT5 produced by the Met Office Hadley Centre in collaboration with the Climatic Research Unit of the University of East Anglia, NOAAGlobalTempv5 produced by the US National Oceanic and Atmospheric Administration (NOAA) and Berkeley Earth, produced by the organisation of the same name. Two of the datasets are reanalyses (ERA5 and JRA-55) and the others are gridded datasets derived from in situ observations.
The data have been accessed and processed as described in a peer-reviewed publication (Simmons et al., 2017) and a more recent ECMWF Technical Memorandum. These publications explain the choice and nature of the datasets, the horizontal resolutions that are used and how anomalies are adjusted to a common reference period. They also provide discussion of observational coverage and the differences between the resulting data products.
The ERA5 dataset, including its preliminary back extension, starts in 1950. However, the observations it uses over land are sparse over the tropics and Southern Hemisphere prior to 1958. Global and all-land data from ERA5 are thus shown only from 1958 onwards (Figures 1 and 2). Observational coverage is better over Europe and the Arctic, for which ERA5 data are shown from 1950 onwards (Figures 3 and 4). The back extension is currently not used in the operational monthly reporting and as such is used only sparingly in ESOTC 2021.
Each dataset shown in the graphs is aligned to have the same average temperature as ERA5 for 1991–2020. For JRA-55, this involves an adjustment that reduces global averages by 0.13°C and European averages by 0.03°C. Uncertainty is larger for the all-land averages. This is because of a relatively large contribution from Antarctica, which is quite sparsely observed and has extreme temperatures that are challenging to model. A number of the datasets used have sparse coverage of Antarctica. ERA5 and JRA-55 provide spatially complete estimates for the continent, but JRA-55 is warmer than ERA5 by some 2°C on average. The average temperature over all land from JRA-55 is about 0.4°C higher than that from ERA5.
The four other datasets (i.e. all bar ERA5 and JRA-55) were originally defined only as values relative to thirty-year reference periods. HadCRUT5 is an ensemble of 200 possible realisations. The ensemble mean and range of the ensemble are plotted. The ensemble does not sample the uncertainty associated with limited geographical coverage, which is largest for the earliest decades. HadCRUT5 comes in two variants; the one that is extended spatially to provide more complete coverage is used here. NOAAGlobalTempv5 has the most limited spatial coverage and is the only one of the six datasets used that does not provide virtually complete coverage of the Arctic since the 1950s.
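As an illustration of this kind of alignment (a minimal sketch, not the operational processing code), the example below shifts a dataset so that its 1991–2020 mean matches that of ERA5; the variable names and the use of simple annual series are assumptions.

```python
import numpy as np

def align_to_reference(years, series, ref_series, start=1991, end=2020):
    """Shift `series` so that its mean over start-end equals that of `ref_series`."""
    mask = (years >= start) & (years <= end)
    offset = np.nanmean(ref_series[mask]) - np.nanmean(series[mask])
    return series + offset

# Hypothetical example: a JRA-55-like series aligned to an ERA5-like series,
# reproducing the kind of adjustment described above (about -0.13°C globally).
years = np.arange(1958, 2022)
era5 = np.zeros(years.size)        # placeholder anomalies
jra55 = era5 + 0.13
jra55_aligned = align_to_reference(years, jra55, era5)
```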
1991–2020 reference period
1991–2020 is the latest 30-year reference period defined by the World Meteorological Organization (WMO) for calculating climatological averages. It is the current standard reference period for general use, although 1961–1990 is retained as a reference for long-term climate change assessments.
1981–2010 is the first 30-year reference period for which satellite observations of key variables, including sea surface temperature and sea ice cover, are available to support globally complete meteorological reanalyses such as ERA5. It was the main reference period used for ESOTC 2020 and earlier ESOTC reports.
Estimating change since 1850–1900
The IPCC report on ‘Global warming of 1.5°C’ adopted a then-current estimate of the global average temperature for 1850–1900 as an approximation for the pre-industrial temperature level. The increase from this reference level to the temperature of the 20-year period 1986–2005 was estimated to be ‘0.63°C (± 0.06°C 5–95% range based on observational uncertainties alone)’. The annual mean temperature difference between the periods 1981–2010 and 1986–2005 was insignificant for all the then-current versions of the global datasets presented here (-0.005°C to +0.004°C). On this basis, the rise in global average temperature over the industrial era was taken to be 0.63°C larger than the (less uncertain) rise in temperature above the 1981–2010 level in ESOTC 2018 and 2019.
The datasets used in ESOTC 2020 were mostly newer versions of those available when the Paris Agreement was made and when the IPCC report on ‘Global warming of 1.5°C’ was prepared. The estimate of the global average temperature for 1850–1900 provided by the datasets in ESOTC 2020 was 0.05°C below that quoted in the IPCC report, i.e. 0.68°C below the 1981–2010 average. This value, rather than 0.63°C, was used in ESOTC 2020.
ESOTC 2021 is based on defining the 1850–1900 global average to be 0.88°C below the 1991–2020 average. This is based on a 0.19°C difference between ERA5 averages for 1991–2020 and 1981–2010, and an adjustment of 0.01°C for consistency with new estimates documented last year in the Sixth IPCC Assessment Report. These estimates were based on a different mix of datasets than used here.
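Making the arithmetic behind the 0.88°C figure explicit (a reconstruction of the numbers quoted above, not an independent estimate): the 0.68°C offset of 1850–1900 below the 1981–2010 average carried over from ESOTC 2020, the 0.19°C ERA5 difference between the 1991–2020 and 1981–2010 averages, and the 0.01°C consistency adjustment combine as

$$
0.68\,^{\circ}\mathrm{C} + 0.19\,^{\circ}\mathrm{C} + 0.01\,^{\circ}\mathrm{C} = 0.88\,^{\circ}\mathrm{C}.
$$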
IPCC estimates are not available for the all-land, European and Arctic regions for which C3S provides temperature indicators. For these cases, the changes from the 1850–1900 average shown in the right-hand axis of each graph are based on estimates of the differences between 1991–2020 and 1850–1900 averages derived from the Berkeley Earth, GISTEMPv4, HadCRUT5 and NOAAGlobalTempv5 datasets.
- Sea surface temperature indicator
Anomalies are calculated relative to a 1991–2020 average. For the satellite data, the anomalies are calculated daily based on a daily climatology computed from a five-day mean centred on each day. For the in situ datasets, anomalies are calculated on a monthly basis by subtracting the 1991–2020 mean anomaly (relative to the original baseline used by the dataset) for each month. Daily anomalies were aggregated to monthly anomalies, and the monthly anomalies were aggregated to annual anomalies giving each month an equal weight.
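The five-day centred climatology can be sketched as follows (a simplified illustration applied to a single time series rather than to gridded fields, with leap days ignored; the function and variable names are assumptions):

```python
import numpy as np

def daily_climatology(doy, values, window=5):
    """Climatology for each calendar day from a centred `window`-day mean.

    doy    : day of year (1-365) for each daily value in the 1991-2020 reference period
    values : the corresponding daily values (e.g. SST at one grid cell)
    """
    half = window // 2
    clim = np.full(365, np.nan)
    for d in range(1, 366):
        # calendar days within +/- half days of day d, wrapping around the year end
        window_days = (np.arange(d - half, d + half + 1) - 1) % 365 + 1
        clim[d - 1] = np.nanmean(values[np.isin(doy, window_days)])
    return clim

# Daily anomalies: subtract the climatological value for the matching day of year
# anomalies = daily_values - daily_climatology(ref_doy, ref_values)[daily_doy - 1]
```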
Area-averaged anomalies were calculated using an area-weighted average of non-missing grid cells within the chosen region. A grid cell was assumed to be within a region if its centre was within the region. Ocean area in the in situ products was estimated based on the high-resolution C3S satellite product, assigning 100% ocean area to grid cells populated in the satellite product and 100% land area to grid cells that are missing in that product.
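A minimal sketch of such an area-weighted regional average is given below, assuming a regular latitude–longitude grid and using the cosine of latitude as a proxy for grid-cell area; the operational processing may use exact cell areas together with the satellite-derived ocean mask described above.

```python
import numpy as np

def area_weighted_mean(field, lats, lons, lat_bounds, lon_bounds):
    """Area-weighted mean of `field` over a lat/lon box, ignoring missing (NaN) cells.

    field      : 2-D array (nlat, nlon) of anomalies, NaN where no data
    lats, lons : 1-D arrays of grid-cell centre coordinates in degrees
    """
    lat2d, lon2d = np.meshgrid(lats, lons, indexing="ij")
    in_box = ((lat2d >= lat_bounds[0]) & (lat2d <= lat_bounds[1]) &
              (lon2d >= lon_bounds[0]) & (lon2d <= lon_bounds[1]))
    weights = np.cos(np.deg2rad(lat2d))          # cell area scales with cos(latitude)
    valid = in_box & ~np.isnan(field)
    return np.sum(field[valid] * weights[valid]) / np.sum(weights[valid])

# e.g. the Baltic Sea box listed further down: lat_bounds=(52.5, 67.5), lon_bounds=(8.5, 30.5)
```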
The satellite data series extends back to 1983, but here data from 1993 onwards are used for consistency with the CMEMS SST indicator. ERSSTv5 extends back to 1854; however, for the global graphic (Figure 1), data are used from 1880 onward, taken from a pre-calculated time series released by NOAA. This choice reflects the note in the product documentation that ‘Data sparsity in early records (before 1880) creates a damping effect that affects the analyzed signal, but its strength and consistency improves over time.’ (see documentation). For European seas, which can be considered better observed, the full ERSSTv5 time period is used.
Uncertainty information in large scale aggregates is not available for the L4 satellite data, so uncertainties in this product were not computed. Uncertainties in the HadSST4 product were calculated following Kennedy et al. (2019). Correlated uncertainties were assumed to be correlated within a year and uncorrelated between years where appropriate. Uncertainties were not calculated for the ERSSTv5 and HadISST1 datasets. No uncertainty information is available for HadISST1. Pre-computed uncertainties in the ERSSTv5 data are available for the global mean but not for the European seas and other regional SST averages. Uncertainty in ERSSTv5 is represented by an ensemble, but the ensemble is not regularly updated.
The trend map was calculated using ordinary least squares (OLS) as implemented in the NumPy routine polyfit. A simple straight-line fit was performed assuming uncorrelated residuals. Significance was not computed.
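The per-grid-cell trend calculation can be illustrated as follows (a sketch assuming a complete, gap-free array of annual mean anomalies; cells with missing data would need to be excluded beforehand):

```python
import numpy as np

def trend_map(annual_fields, years):
    """Least-squares linear trend per grid cell, converted to units per decade.

    annual_fields : 3-D array (nyears, nlat, nlon) of annual mean anomalies
    years         : 1-D array of length nyears
    """
    nyears, nlat, nlon = annual_fields.shape
    flat = annual_fields.reshape(nyears, -1)
    # np.polyfit fits all grid cells at once when y is 2-D (one column per cell);
    # the first row of coefficients is the slope in units per year
    slopes = np.polyfit(years, flat, deg=1)[0]
    return slopes.reshape(nlat, nlon) * 10.0     # per year -> per decade
```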
The areas used for the regional seas are the following:
- Europe: 35°–70°N, 25°W–40°E
- Baltic Sea: 52.5°–67.5°N, 8.5°–30.5°E
- Black Sea: 39.5°–48.5°N, 27.5°–42.5°E
- Mediterranean: 30.5°–46.5°N, 6.5°W–38.5°E
- North Sea: 50.5°–60.5°N, 5.5°W–9.5°E
- Cryosphere thematic section
The cryosphere thematic section gives an overview of the components of the cryosphere and their role in a changing climate. The references are listed in the main section.
Figure 1 is a schematic representation of the components of the cryosphere. Figure 2 builds on the same datasets as presented in the ice sheet, glacier and sea ice indicators; see details further down.
In addition, it presents information on sea ice thickness:
This dataset provides monthly gridded data of sea ice thickness for the Arctic region based on satellite radar altimetry observations from October 2002 onward. Measurements from the Envisat satellite mission are used from October 2002 to October 2010; measurements from the CryoSat-2 mission are used from November 2010 onward. This dataset is currently limited spatially to the Arctic region and temporally to the winter months of October through April because of challenges in estimating sea ice thickness from space during the melt season.
- Glacier indicator
The cumulative mass balance estimates considered here are based on long-term in situ observations, which are compiled by the World Glacier Monitoring Service (WGMS) in annual calls-for-data from a scientific collaboration network across more than 40 countries worldwide. The estimates given here are from a subset of global and European reference glaciers (WGMS 2021, updated and earlier reports).
Figure 1, with global glacier mass change, shows the estimated cumulative annual mass balance for a set of global reference glaciers with more than 30 continuous years of observation within the time period from 1957 to 2021. For Greenland, Iceland and the Pyrenees, time series started between the mid-1980s and the mid-1990s. The values are given relative to 1997. This date was chosen for two reasons – it was the year the Kyoto Protocol was adopted, and the latest start year of mass balance observations for the considered glaciers is 1996 (Mittivakkat on Greenland). Global values are calculated using only one single value (averaged) for each region with glaciers to avoid a bias to well-observed regions. Regional values are calculated as arithmetic averages.
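The two-step averaging can be sketched as follows (the regions and values are illustrative placeholders, not WGMS data):

```python
import numpy as np

# Hypothetical cumulative mass balances (m w.e.) per reference glacier, by region
regions = {
    "Alps":     [-28.1, -27.3, -27.7],
    "Iceland":  [-19.0, -20.2],
    "Svalbard": [-14.7],
}

# Regional values: arithmetic average of the glaciers within each region
regional_means = {name: float(np.mean(vals)) for name, vals in regions.items()}

# Global value: average of one value per region, so that regions with many
# observed glaciers do not dominate the global estimate
global_mean = float(np.mean(list(regional_means.values())))
```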
The glaciers considered for the calculation of European glacier mass change (Figure 2) are: Mittivakkat in Greenland; Austre Broeggerbreen and Midtre Lovenbreen in Svalbard; Bruarjökull, Eyjabakkajökull, Hofsjökull (E, N, SW), Koldukvislarjökull, Langjökull and Tungnaarjökull in Iceland; Engabreen and Storglaciären in northern Scandinavia; Ålfotbreen, Nigardsbreen and Rembesdalskåka in southwestern Scandinavia; Gråsubreen, Hellstugubreen and Storbreen in southeastern Scandinavia; Hintereisferner, Kesselwandferner, Vernagtferner, Allalin, Giètro, Gries, Silvretta, Argentière, Saint Sorlin, Sarennes and Carèser in the Alps; Maladeta in the Pyrenees; and Djankuat and Garabashi in the Caucasus.
The glacier distribution map (Figure 3) is based on the Randolph Glacier Inventory (RGI 2017), which consists of a global compilation of glacier outlines that have mainly been derived from satellite images acquired between the years 2000 and 2010. The map shows the distribution of glaciers on the European continent, Svalbard, Iceland and the peripheral glaciers of Greenland, together with the locations of glaciers with long-term mass change measurements. The total area of the glaciers in Europe is 51,250 km2, excluding peripheral glaciers in Greenland.
The corresponding regional glacier areas and the mean cumulative mass changes since 1997 are given in the table below. Negative values indicate a loss of glacier mass.
| Region | Cumulative mass change in m w.e. | Glacier area in km2 | Total mass change in km3 |
|---|---|---|---|
| Greenland | -23.3 | 89,717 (1999–2002) | -2,093 |
| Iceland | -19.6 | 11,059 (1999–2004) | -217 |
| Svalbard | -14.7 | 33,959 (2007–2008) | -498 |
| Scandinavia North | -7.9 | 1,431 (1999–2006) | -11 |
| Scandinavia South-west | -10.9 | 1,215 (1999–2006) | -13 |
| Scandinavia South-east | -18.3 | 302 (1999–2006) | -6 |
| Alps | -27.7 | 2,089 (2003) | -58 |
| Pyrenees and Apennines | -22.2 | 3 | -0.1 |
| Greater Caucasus | -14.9 | 1,193 (2014) | -18 |
| Mean/Total/Total | -17.7 | 140,968 | -2,914 |
- Ice sheet indicator
The cumulative mass balance estimates considered here are compiled by the Ice Sheet Mass Balance Inter-comparison Exercise (IMBIE) from a scientific collaboration network across more than 40 countries. The dataset presented is a reconciled estimate of mass balance estimates from three independent satellite techniques – gravimetry, altimetry and input-output method – and their associated uncertainty. For Greenland, 26 different surveys were used to produce this single community estimate; 24 were used for Antarctica. Common spatial and temporal domains were used when processing the satellite data, to support the aggregation of the individual datasets. For each of the three independent satellite techniques (gravimetry, altimetry, input-output method), a time series was formed by taking the error-weighted average of individual rates of ice sheet mass change computed using the same technique. These error-weighted averages of the three satellite techniques were then combined into a single reconciled estimate of ice sheet mass change using error weighting.
The graphs for Greenland (Figure 1) and Antarctica (Figure 2) mass change show the estimated cumulative balance over more than 25 years of continued observation. The mass change is also converted to an equivalent sea level contribution by assuming that 360 Gt of ice is equivalent to 1 mm of sea level rise.
For Greenland, the total mass change time series covers the period 1992–2020. For Antarctica, the mass change time series is computed for the whole continent, but also for the East Antarctic Ice Sheet, Western Antarctic Ice Sheet and the Antarctic Peninsula Ice Sheet from 1992 to 2020.
- Sea ice indicator
The gridded sea ice concentration data used in Figures 2 and 5 come from the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF) Global Sea Ice Concentration Climate Data Record v2.0. This is a daily product derived from satellite passive microwave observations from the series of SMMR, SSM/I and SSMIS sensors, and covering the period from January 1979 to present.
The sea ice edge shown in Figures 2 and 5 is based on the Sea Ice Edge Climate Data Record v2.0 produced by C3S. This is a daily product derived from the same SSMR, SSM/I and SSMIS observations as those used for the OSI SAF Sea Ice Concentration product and covering the same period (Oct 1978–present).
The time series of Arctic sea ice type shown in Figure 3 are based on the Sea Ice Type Climate Data Record v2.0 produced by C3S. This is a daily gridded product also derived from SSMR, SSM/I and SSMIS observations and covering the periods from October 1978 to present. In this product, ocean grid points are classified as open water, first-year ice, or multiyear ice.
The monthly time series of sea ice extent used in Figures 1 and 4 are based on the EUMETSAT OSI SAF Sea Ice Index v2.1 (OSI-420) product. This product is itself derived from the OSI SAF Sea Ice Concentration v2.0 product mentioned above and covers the same period (1979–present).
Monthly mean sea ice concentrations. During the period when observations from the SMMR sensor are used, data are only available every other day. To generate monthly mean sea ice concentrations, the gridded data are first linearly interpolated in time. This gap filling is only applied to the months shown in Figures 2 and 5, that is March and September for the Northern Hemisphere, and February and September for the Southern Hemisphere. Then, least-square linear regression is applied to the monthly mean data to produce the trend maps.
Median sea ice edge. The median sea ice edge shown in the trend maps is defined as the contour line along which grid cells have a 50% probability of being classified as open water or open ice in the daily gridded sea ice edge product. Note that this product uses a threshold of 30% ice concentration to distinguish between open water and open ice.
Median sea ice type. To generate the time series of Arctic average sea ice types shown in Figure 3, the total daily sea ice area associated with each sea ice type is first calculated and then used to compute monthly averages and 3-month averages for January–March.
Trend calculation. The trend values quoted in the text and in note of the ‘Sea ice indicator’ are derived from least-squares linear regression applied to the monthly sea ice extent data. For the Arctic, the trend values for March and September are both statistically significant at the 99% confidence level. For the Antarctic, neither trend value is statistically significant. Similarly, the trend maps shown in Figures 2 and 5 are based on least-squares linear regression applied to the monthly sea ice concentration data.
Why are ERA5 sea ice data not used?
For its monthly Climate Bulletins, C3S relies on sea ice concentration data from ERA5. The ERA5 sea ice record is itself a combination of different sea ice products and was found to have less temporal consistency than the OSI SAF Sea Ice Index v2.1 product used here. However, at present, the OSI SAF product is only available with a latency of 16 days, which is too long for it to be used for the Climate Bulletins and the focus of the monthly bulletin is to provide an overview of last month within the long-term context, rather than focus on the long-term context itself.
- Greenhouse gas thematic section
The greenhouse gas (GHG) thematic section gives an overview of greenhouse gas sources and sinks, their role in the climate system, as well as the type of data provided by CAMS and C3S. There are no data directly used in this section.
- Greenhouse gas concentrations indicator
C3S climate data record XCO2: Data | Documentation
C3S climate data record XCH4: Data | Documentation
CAMS near real-time data record XCO2: Data & Documentation
CAMS near real-time data record XCH4: Data | Documentation
The C3S XCO2 and XCH4 satellite-derived data products (v4.3), including documentation, (e.g. Buchwitz et al., 2021) are available from the Climate Data Store. They have been generated using retrieval algorithms developed by University of Bremen (Germany), SRON (The Netherlands), University of Leicester (UK), NIES (Japan) and NASA (USA) using radiance spectra as measured by the satellite instruments SCIAMACHY on Envisat, TANSO-FTS onboard the Japanese GOSAT satellite and NASA’s OCO-2 satellite mission. For details please see Reuter et al., 2020.
The CAMS XCO2 data product has been retrieved in near real-time (NRT) from radiances as measured by the GOSAT satellite using the BESD algorithm (Heymann et al., 2015) developed at University of Bremen, Germany. This data product is available from the GOSAT/BESD website. The CAMS XCH4 data product has also been retrieved in NRT from GOSAT radiances but using the RemoTeC algorithm (Butz et al., 2011; Guerlet et al., 2013; Schepers et al., 2012) developed at SRON, The Netherlands. This data product is available via ftp from SRON.
The time series of satellite retrievals shown start at the beginning of 2003. The C3S datasets do not extend until near real-time, as that requires extra processing steps. The C3S data record is extended once a year, by one additional year. To cover the time between the C3S dataset until present, the CAMS dataset is used. The figures have been generated by first computing monthly averages. However, the C3S and CAMS satellite data products are also available for each individual satellite footprint along with detailed information such as time and location of each observation.
The global and hemispheric averages have been obtained by simply averaging all available data without specific adjustments due to missing data. The only adjustment that has been made is ‘area weighting’ to consider the fact that the area of a given latitude band decreases for latitudes closer to the poles.
The XCO2 and XCH4 growth rates shown in Figures 1 and 3 have been computed using the method of Buchwitz et al. (2018). The quoted uncertainty in brackets is the (1 sigma, i.e., 68%) confidence interval.
Note on values over ocean: water is a bad reflector in the short-wave infrared spectral region and requires a specific observation mode which GOSAT (bottom right in Figure 2 and 4) has but SCIAMACHY (top left in Figures 2 and 4) has not.
- Greenhouse gas fluxes indicator
The CAMS greenhouse gas product describes the variations, in space and in time, of the surface sources and sinks (fluxes) of the three major greenhouse gases that are directly affected by human activities: carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O). The variations provide information on the underlying emissions and absorption processes of these gases.
For CO2, the product distinguishes between natural and anthropogenic contributions. For CH4, the product also distinguishes between four emission types (rice cultivation, natural wetlands, biomass burning and other sources).
This product primarily exploits high-quality measurements of air samples collected at tens of sites around the world by various laboratories (159 sites for CO2, 31 sites for CH4 and 123 sites for N2O), in combination with a numerical model of atmospheric tracer transport (Chevallier et al. 2010, Bergamaschi et al. 2013, Thompson et al. 2014).
The selected air sample measurements themselves have negligible uncertainty and are representative of large areas. The product uncertainty mainly comes from the limited coverage of the measurement network and from errors in atmospheric transport modelling. Expressed in relative terms, the uncertainty can reach over 100% for some years and some regions if the estimated flux is small, and much less for some other years.
The flux data and the associated atmospheric fields are available to download. CAMS also provides daily forecasts of atmospheric concentrations of CO2 and CH4 globally with a horizontal resolution of about 9 km by 9 km.
Observations have kindly been provided by many laboratories around the world, including NOAA, CSIRO, ECCC and ICOS-ATC.
The length of the data flux record is 1979-onwards for CO2, 1996-onwards for N2O and 1990-onwards for CH4. The fluxes are defined so that a positive value indicates a net flux into the atmosphere.
The net annual fluxes of Figure 1 are averaged over the globe. The conversion factor of 2.086 PgC/ppm is taken from Prather (2012) and accounts for the lag between CO2 variations in the troposphere and in the stratosphere.
The regions used in Figures 2 and 3 are from the ‘Transcom’ mask. For CO2 the net flux is related to natural processes only, while for CH4 and N2O the flux shown includes all sources and sinks. All values are expressed as the fraction of the total global mean net flux into the atmosphere.
The uncertainty estimate given in Figure 4 is the 68% uncertainty envelope (one standard deviation) calculated with a robust Monte Carlo approach.
The analysis of the change in the European sink for CO2 given here is derived from Bastos et al. (2016).
- Sea level indicator
The sea level information presented here was originally prepared for the Copernicus Marine Environment Monitoring Service (CMEMS) Ocean Monitoring Indicators and subsequently discussed in the CMEMS Ocean State Reports (Legeais et al., ‘Sea level’ in von Schuckmann et al, 2018, 2020).
The sea level dataset used here is based on the sea level Ocean Monitoring Indicators produced by CMEMS and for which C3S products are used as input data. These C3S products are derived from the DUACS delayed-time altimeter gridded maps of sea level anomalies based on a stable number of altimeters (two) in the satellite constellation. Up-to-date altimeter standards are used to estimate the sea level anomalies. Contrary to near real-time sea level products, the stability and accuracy of the delayed-time products make them more suitable for climate applications and ocean monitoring indicators. The timeliness of the sea level products reaches about five months due to the timeliness of the input data, the centred processing temporal window and the validation process.
The sea level anomalies are computed with respect to the 1993–2012 reference period. This means that the inter-annual physical content is referenced to this period. A convention has then been applied for the whole time series so that the averaged global mean sea level during the year 1993 is set to zero. The use of the TOPEX-A instrumental drift correction then includes an additional offset, which has no impact on the inter-annual physical content nor on the sea level trends. The data shown here cover the altimeter era from January 1993 until a few months from present time.
The Earth’s crust is slowly moving upwards due to post-glacial rebound, which affects the sea level observed by altimetry. The global mean sea level trend is corrected for this Glacial Isostatic Adjustment (GIA) using the ICE5G-VM2 GIA model (Peltier, 2004) to take into account the associated volume changes of the ocean.
TOPEX-A instrumental drift
Between 1993 and 1998, the global mean sea level is known to have been affected by a TOPEX-A instrumental drift (WCRP Sea Level Budget Group, 2018; Legeais et al., 2020). This anomaly has been well characterised (see e.g., Valladeau et al., 2012; Watson et al., 2015, Dieng et al. (2017), Beckley et al., 2017). This instrumental drift led to overestimating the trend of the global mean sea level during the first six years of the altimetry era (3.3 mm/yr reduced to 3.0 mm/yr after correction) and the corrected time series shows a clear acceleration over 1993 to present time. However, there is not yet consensus on the best approach to estimate the drift correction at global and regional scales. The global mean sea level evolution presented here (Figure 1) has been corrected for this drift, based on comparisons between altimeter and tide gauges measurements (WCRP Sea Level Budget Group, 2018). Currently, this empirical correction is not applied to the altimeter sea level datasets distributed to users, waiting for the ongoing TOPEX reprocessing by CNES and NASA/JPL.
The altimeter regional sea level trends (Figure 2) have not been corrected for the GIA effect nor for the TOPEX-A instrumental drift.
The uncertainty in the global mean sea level trend since 1993 is estimated to be ± 0.4 mm/yr with a confidence interval of 90% (1.65 sigma) (Ablain et al., 2019). On a regional scale, the averaged local sea level trend uncertainty during 1993–2019 is 0.86 mm/yr with local values ranging from 0.8 to 1.3 mm/yr, and up to 0.9 mm/yr in the European area (Prandi et al., 2021). Note that the uncertainties derived from these studies only consider the errors of the instrumental altimeter measurement system: the uncertainty related to the internal ocean variability is not included. | <urn:uuid:5571fcb2-5176-4bed-8e65-e71ae3220b67> | CC-MAIN-2022-33 | https://climate.copernicus.eu/climate-indicators/about-data | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00494.warc.gz | en | 0.914404 | 6,449 | 2.953125 | 3 |
The Hebrew sha·maʹyim (always in the plural), which is rendered “heaven(s),” seems to have the basic sense of that which is high or lofty. (Ps 103:11; Pr 25:3; Isa 55:9) The etymology of the Greek word for heaven (ou·ra·nosʹ) is uncertain.
Physical Heavens. The full scope of the physical heavens is embraced by the original-language term. The context usually provides sufficient information to determine which area of the physical heavens is meant.
Heavens of earth’s atmosphere. “The heaven(s)” may apply to the full range of earth’s atmosphere in which dew and frost form (Ge 27:28; Job 38:29), the birds fly (De 4:17; Pr 30:19; Mt 6:26), the winds blow (Ps 78:26), lightning flashes (Lu 17:24), and the clouds float and drop their rain, snow, or hailstones (Jos 10:11; 1Ki 18:45; Isa 55:10; Ac 14:17). “The sky” is sometimes meant, that is, the apparent or visual dome or vault arching over the earth.—Mt 16:1-3; Ac 1:10, 11.
This atmospheric region corresponds generally to the “expanse [Heb., ra·qiʹaʽ]” formed during the second creative period, described at Genesis 1:6-8. It is evidently to this ‘heaven’ that Genesis 2:4; Exodus 20:11; 31:17 refer in speaking of the creation of “the heavens and the earth.”—See EXPANSE.
When the expanse of atmosphere was formed, earth’s surface waters were separated from other waters above the expanse. This explains the expression used with regard to the global Flood of Noah’s day, that “all the springs of the vast watery deep were broken open and the floodgates of the heavens were opened.” (Ge 7:11; compare Pr 8:27, 28.) At the Flood, the waters suspended above the expanse apparently descended as if by certain channels, as well as in rainfall. When this vast reservoir had emptied itself, such “floodgates of the heavens” were, in effect, “stopped up.”—Ge 8:2.
Outer space. The physical “heavens” extend through earth’s atmosphere and beyond to the regions of outer space with their stellar bodies, “all the army of the heavens”—sun, moon, stars, and constellations. (De 4:19; Isa 13:10; 1Co 15:40, 41; Heb 11:12) The first verse of the Bible describes the creation of such starry heavens prior to the development of earth for human habitation. (Ge 1:1) These heavens show forth God’s glory, even as does the expanse of atmosphere, being the work of God’s “fingers.” (Ps 8:3; 19:1-6) The divinely appointed “statutes of the heavens” control all such celestial bodies. Astronomers, despite their modern equipment and advanced mathematical knowledge, are still unable to comprehend these statutes fully. (Job 38:33; Jer 33:25) Their findings, however, confirm the impossibility of man’s placing a measurement upon such heavens or of counting the stellar bodies. (Jer 31:37; 33:22; see STAR.) Yet they are numbered and named by God.—Ps 147:4; Isa 40:26.
“Midheaven” and ‘extremities of heavens.’ The expression “midheaven” applies to the region within earth’s expanse of atmosphere where birds, such as the eagle, fly. (Re 8:13; 14:6; 19:17; De 4:11 [Heb., “heart of the heavens”]) Somewhat similar is the expression “between the earth and the heavens.” (1Ch 21:16; 2Sa 18:9) The advance of Babylon’s attackers from “the extremity of the heavens” evidently means their coming to her from the distant horizon (where earth and sky appear to meet and the sun appears to rise and set). (Isa 13:5; compare Ps 19:4-6.) Similarly “from the four extremities of the heavens” apparently refers to four points of the compass, thus indicating a coverage of the four quarters of the earth. (Jer 49:36; compare Da 8:8; 11:4; Mt 24:31; Mr 13:27.) As the heavens surround the earth on all sides, Jehovah’s vision of everything “under the whole heavens” embraces all the globe.—Job 28:24.
The cloudy skies. Another term, the Hebrew shaʹchaq, is also used to refer to the “skies” or their clouds. (De 33:26; Pr 3:20; Isa 45:8) This word has the root meaning of something beaten fine or pulverized, as the “film of dust” (shaʹchaq) at Isaiah 40:15. There is a definite appropriateness in this meaning, inasmuch as clouds form when warm air, rising from the earth, becomes cooled to what is known as the dewpoint, and the water vapor in it condenses into minute particles sometimes called water dust. (Compare Job 36:27, 28; see CLOUD.) Adding to the appropriateness, the visual effect of the blue dome of the sky is caused by the diffusion of the rays of the sun by gas molecules and other particles (including dust) composing the atmosphere. By God’s formation of such atmosphere, he has, in effect, ‘beaten out the skies hard like a molten mirror,’ giving a definite limit, or clear demarcation, to the atmospheric blue vault above man.—Job 37:18.
“Heavens of the heavens.” The expression “heavens of the heavens” is considered to refer to the highest heavens and would embrace the complete extent of the physical heavens, however vast, since the heavens extend out from the earth in all directions.—De 10:14; Ne 9:6.
Solomon, the constructor of the temple at Jerusalem, stated that the “heavens, yes, the heaven of the heavens” cannot contain God. (1Ki 8:27) As the Creator of the heavens, Jehovah’s position is far above them all, and “his name alone is unreachably high. His dignity is above earth and heaven.” (Ps 148:13) Jehovah measures the physical heavens as easily as a man would measure an object by spreading his fingers so that the object lies between the tips of the thumb and the little finger. (Isa 40:12) Solomon’s statement does not mean that God has no specific place of residence. Nor does it mean that he is omnipresent in the sense of being literally everywhere and in everything. This can be seen from the fact that Solomon also spoke of Jehovah as hearing “from the heavens, your established place of dwelling,” that is, the heavens of the spirit realm.—1Ki 8:30, 39.
Thus, in the physical sense, the term “heavens” covers a wide range. While it may refer to the farthest reaches of universal space, it may also refer to something that is simply high, or lofty, to a degree beyond the ordinary. Thus, those aboard storm-tossed ships are said to “go up to the heavens, . . . down to the bottoms.” (Ps 107:26) So, too, the builders of the Tower of Babel intended to put up a structure with its “top in the heavens,” a “skyscraper,” as it were. (Ge 11:4; compare Jer 51:53.) And the prophecy at Amos 9:2 speaks of men as ‘going up to the heavens’ in a vain effort to elude Jehovah’s judgments, evidently meaning that they would try to find escape in the high mountainous regions.
Spiritual Heavens. The same original-language words used for the physical heavens are also applied to the spiritual heavens. As has been seen, Jehovah God does not reside in the physical heavens, being a Spirit. However, since he is “the High and Lofty One” who resides in “the height” (Isa 57:15), the basic sense of that which is “lifted up” or “lofty” expressed in the Hebrew-language word makes it appropriate to describe God’s “lofty abode of holiness and beauty.” (Isa 63:15; Ps 33:13, 14; 115:3) As the Maker of the physical heavens (Ge 14:19; Ps 33:6), Jehovah is also their Owner. (Ps 115:15, 16) Whatever is his pleasure to do in them, he does, including miraculous acts.—Ps 135:6.
In many texts, therefore, the “heavens” stand for God himself and his sovereign position. His throne is in the heavens, that is, in the spirit realm over which he also rules. (Ps 103:19-21; 2Ch 20:6; Mt 23:22; Ac 7:49) From his supreme or ultimate position, Jehovah, in effect, ‘looks down’ upon the physical heavens and earth (Ps 14:2; 102:19; 113:6), and from this lofty position also speaks, answers petitions, and renders judgment. (1Ki 8:49; Ps 2:4-6; 76:8; Mt 3:17) So we read that Hezekiah and Isaiah, in the face of a grave threat, “kept praying . . . and crying to the heavens for aid.” (2Ch 32:20; compare 2Ch 30:27.) Jesus, too, used the heavens as representing God when asking the religious leaders whether the source of John’s baptism was “from heaven or from men.” (Mt 21:25; compare Joh 3:27.) The prodigal son confessed to having sinned “against heaven” as well as against his own father. (Lu 15:18, 21) “The kingdom of the heavens,” then, means not merely that it is based in and rules from the spiritual heavens but also that it is “the kingdom of God.”—Da 2:44; Mt 4:17; 21:43; 2Ti 4:18.
Also because of God’s heavenly position, both men and angels raised hands or faces toward the heavens in calling upon him to act (Ex 9:22, 23; 10:21, 22), in swearing to an oath (Da 12:7), and in prayer (1Ki 8:22, 23; La 3:41; Mt 14:19; Joh 17:1). At Deuteronomy 32:40 Jehovah speaks of himself as ‘raising his hand to heaven in an oath.’ In harmony with Hebrews 6:13, this evidently means that Jehovah swears by himself.—Compare Isa 45:23.
Angelic dwelling place. The spiritual heavens are also the “proper dwelling place” of God’s spirit sons. (Jude 6; Ge 28:12, 13; Mt 18:10; 24:36) The expression “army of the heavens,” often applied to the stellar creation, sometimes describes these angelic sons of God. (1Ki 22:19; compare Ps 103:20, 21; Da 7:10; Lu 2:13; Re 19:14.) So, too, “the heavens” are personified as representing the angels, “the congregation of the holy ones.”—Ps 89:5-7; compare Lu 15:7, 10; Re 12:12.
Representing Rulership. We have seen that the heavens can refer to Jehovah God in his sovereign position. Thus, when Daniel told Nebuchadnezzar that the experience the Babylonian emperor was due to have would make him “know that the heavens are ruling,” it meant the same as knowing “that the Most High is Ruler in the kingdom of mankind.”—Da 4:25, 26.
However, aside from its reference to the Supreme Sovereign, the term “heavens” can also refer to other ruling powers that are exalted or lifted up above their subject peoples. The very dynasty of Babylonian kings that Nebuchadnezzar represented is described at Isaiah 14:12 as being starlike, a “shining one, son of the dawn.” By the conquest of Jerusalem in 607 B.C.E., that Babylonian dynasty lifted its throne “above the stars of God,” these “stars” evidently referring to the Davidic line of Judean kings (even as the Heir to the Davidic throne, Christ Jesus, is called “the bright morning star” at Re 22:16; compare Nu 24:17). By its overthrow of the divinely authorized Davidic throne, the Babylonian dynasty, in effect, exalted itself heaven high. (Isa 14:13, 14) This lofty grandeur and far-reaching dominion were also represented in Nebuchadnezzar’s dream by a symbolic tree with its height ‘reaching the heavens.’—Da 4:20-22.
New heavens and new earth. The connection of the “heavens” with ruling power aids in understanding the meaning of the expression “new heavens and a new earth” found at Isaiah 65:17; 66:22 and quoted by the apostle Peter at 2 Peter 3:13. Observing such relationship, M’Clintock and Strong’s Cyclopaedia (1891, Vol. IV, p. 122) comments: “In Isa. lxv, 17, a new heaven and a new earth signify a new government, new kingdom, new people.”
Even as the “earth” can refer to a society of people (Ps 96:1; see EARTH), so, too, “heavens” can symbolize the superior ruling power or government over such “earth.” The prophecy presenting the promise of “new heavens and a new earth,” given through Isaiah, was one dealing initially with the restoration of Israel from Babylonian exile. Upon the Israelites’ return to their homeland, they entered into a new system of things. Cyrus the Great was used prominently by God in bringing about that restoration. Back in Jerusalem, Zerubbabel (a descendant of David) served as governor, and Joshua as high priest. In harmony with Jehovah’s purpose, this new governmental arrangement, or “new heavens,” directed and supervised the subject people. (2Ch 36:23; Hag 1:1, 14) Thereby, as verse 18 of Isaiah chapter 65 foretold, Jerusalem became “a cause for joyfulness and her people a cause for exultation.”
Peter’s quotation, however, shows that a future fulfillment was to be anticipated, on the basis of God’s promise. (2Pe 3:13) Since God’s promise in this case relates to the presence of Christ Jesus, as shown at verse 4, the “new heavens and a new earth” must relate to God’s Messianic Kingdom and its rule over obedient subjects. By his resurrection and ascension to God’s right hand, Christ Jesus became “higher than the heavens” (Heb 7:26) in that he was thereby placed “far above every government and authority and power and lordship . . . not only in this system of things, but also in that to come.”—Eph 1:19-21; Mt 28:18.
Christian followers of Jesus, as “partakers of the heavenly calling” (Heb 3:1), are assigned by God as “heirs” in union with Christ, through whom God purposed “to gather all things together again.” “The things in the heavens,” that is, those called to heavenly life, are the first to be thus gathered into unity with God through Christ. (Eph 1:8-11) Their inheritance is “reserved in the heavens.” (1Pe 1:3, 4; Col 1:5; compare Joh 14:2, 3.) They are “enrolled” and have their “citizenship” in the heavens. (Heb 12:20-23; Php 3:20) They form the “New Jerusalem” seen in John’s vision as “coming down out of heaven from God.” (Re 21:2, 9, 10; compare Eph 5:24-27.) Since this vision is initially stated to be of “a new heaven and a new earth” (Re 21:1), it follows that both are represented in what is thereafter described. Hence the “new heaven” must correspond to Christ together with his “bride,” the “New Jerusalem,” and the “new earth” is seen in the ‘peoples of mankind’ who are their subjects and who receive the blessings of their rule, as depicted in verses 3 and 4.
Third heaven. At 2 Corinthians 12:2-4 the apostle Paul describes one who was “caught away . . . to the third heaven” and “into paradise.” Since there is no mention in the Scriptures of any other person having had such an experience, it seems likely that this was the apostle’s own experience. Whereas some have endeavored to relate Paul’s reference to the third heaven to the early rabbinic view that there were stages of heaven, even a total of “seven heavens,” this view finds no support in the Scriptures. As we have seen, the heavens are not referred to specifically as if divided into platforms or stages, but, rather, the context must be relied upon to determine whether reference is to the heavens within earth’s atmospheric expanse, the heavens of outer space, the spiritual heavens, or something else. It therefore appears that the reference to “the third heaven” likely indicates the superlative form of rulership of the Messianic Kingdom. Note the way words and expressions are repeated three times at Isaiah 6:3; Ezekiel 21:27; John 21:15-17; Revelation 4:8, evidently for the purpose of expressing intensification.
Passing away of former heaven and earth. John’s vision refers to the passing away of “the former heaven and the former earth.” (Re 21:1; compare 20:11.) In the Christian Greek Scriptures, earthly governments and their peoples are shown to be subject to Satanic rule. (Mt 4:8, 9; Joh 12:31; 2Co 4:3, 4; Re 12:9; 16:13, 14) The apostle Paul referred to “the wicked spirit forces in the heavenly places,” with their governments, authorities, and world rulers. (Eph 6:12) So the passing away of “the former heaven” indicates the end of political governments influenced by Satan and his demons. This harmonizes with what is recorded at 2 Peter 3:7-12 regarding the destruction as by fire of “the heavens . . . that are now.” Similarly, Revelation 19:17-21 describes the annihilation of a global political system with its supporters; it says that the symbolic wild beast is “hurled into the fiery lake that burns with sulphur.” (Compare Re 13:1, 2.) As for the Devil himself, Revelation 20:1-3 shows that he is hurled “into the abyss” for a thousand years and then “let loose for a little while.”
Abasement of That Which Is Exalted. Because the heavens represent that which is elevated, the abasement of those things that are exalted is at times represented by the overthrow or the ‘rocking’ or ‘agitating’ of the heavens. Jehovah is said to have “thrown down from heaven to earth the beauty of Israel” at the time of its desolation. That beauty included its kingdom and princely rulers and their power, and such beauty was devoured as by fire. (La 2:1-3) But Israel’s conqueror, Babylon, later experienced an agitation of her own “heaven” and a rocking of her “earth” when the Medes and Persians overthrew Babylon and her heavenly gods proved false and unable to save her from the loss of her dominion over the land.—Isa 13:1, 10-13.
Similarly, it was prophesied that the heaven-high position of Edom would not save her from destruction and that Jehovah’s sword of judgment would be drenched in her heights, or “heavens,” with no help for her from any heavenly, or exalted, source. (Isa 34:4-7; compare Ob 1-4, 8.) Those making great boasts, wickedly speaking in an elevated style as if to “put their mouth in the very heavens,” are certain to fall to ruin. (Ps 73:8, 9, 18; compare Re 13:5, 6.) The city of Capernaum had reason to feel highly favored because of the attention it received by Jesus and his ministry. However, since it failed to respond to his powerful works, Jesus asked, “Will you perhaps be exalted to heaven?” and foretold instead, “Down to Hades you will come.”—Mt 11:23.
Darkening of the Heavens. The darkening of the heavens or of the stellar bodies is often used to represent the removal of prosperous, favorable conditions, and their being replaced by foreboding, gloomy prospects and conditions, like a time when dark clouds blot out all light day and night. (Compare Isa 50:2, 3, 10.) This use of the physical heavens in connection with the mental outlook of humans is somewhat similar to the old Arabic expression, “His heaven has fallen to the earth,” meaning that one’s superiority or prosperity is greatly diminished. At times, of course, in expressing divine wrath, God has employed celestial phenomena, some of which have literally darkened the heavens.—Ex 10:21-23; Jos 10:12-14; Lu 23:44, 45.
Upon Judah such a day of darkness came in fulfillment of Jehovah’s judgment through his prophet Joel, and it reached its culmination in Judah’s desolation by Babylon. (Joe 2:1, 2, 10, 30, 31; compare Jer 4:23, 28.) Any hope of help from a heavenly source seemed to have dried up, and as foretold at Deuteronomy 28:65-67, they came into “dread night and day,” with no relief or hope by sunlit morning or by moonlit evening. Yet, by the same prophet, Joel, Jehovah warned enemies of Judah that they would experience the same situation when he executed judgment upon them. (Joe 3:12-16) Ezekiel and Isaiah used this same figurative picture in foretelling God’s judgment on Egypt and on Babylon respectively.—Eze 32:7, 8, 12; Isa 13:1, 10, 11.
The apostle Peter quoted Joel’s prophecy on the day of Pentecost when urging a crowd of listeners to “get saved from this crooked generation.” (Ac 2:1, 16-21, 40) The unheeding ones of that generation saw a time of grave darkness when the Romans besieged and eventually ravaged Jerusalem less than 40 years later. Prior to Pentecost, however, Jesus had made a similar prophecy and showed it would have a fulfillment at the time of his presence.—Mt 24:29-31; Lu 21:25-27; compare Re 6:12-17.
Permanence of Physical Heavens. Eliphaz the Temanite said of God: “Look! In his holy ones he has no faith, and the heavens themselves are actually not clean in his eyes.” However, Jehovah said to Eliphaz that he and his two companions had “not spoken concerning me what is truthful as has my servant Job.” (Job 15:1, 15; 42:7) By contrast, Exodus 24:10 refers to the heavens as representing purity. Thus there is no cause stated in the Bible for God’s destroying the physical heavens.
That the physical heavens are permanent is shown by the fact that they are used in similes relating to things that are everlasting, such as the peaceful, righteous results of the Davidic kingdom inherited by God’s Son. (Ps 72:5-7; Lu 1:32, 33) Thus, texts such as Psalm 102:25, 26 that speak of the heavens as ‘perishing’ and as ‘being replaced like a worn-out garment’ are not to be understood in a literal sense.
At Luke 21:33, Jesus says that “heaven and earth will pass away, but my words will by no means pass away.” Other scriptures show that “heaven and earth” will endure forever. (Ge 9:16; Ps 104:5; Ec 1:4) So the “heaven and earth” here may well be symbolic, as are the “former heaven and the former earth” at Revelation 21:1; compare Matthew 24:35.
Psalm 102:25-27 stresses God’s eternity and imperishability, whereas his physical creation of heavens and earth is perishable, that is, it could be destroyed—if such were God’s purpose. Unlike God’s eternal existence, the permanence of any part of his physical creation is not independent. As seen in the earth, the physical creation must undergo a continual renewing process if it is to endure or retain its existing form. That the physical heavens are dependent on God’s will and sustaining power is indicated at Psalm 148, where, after referring to sun, moon, and stars, along with other parts of God’s creation, verse 6 states that God “keeps them standing forever, to time indefinite. A regulation he has given, and it will not pass away.”
The words of Psalm 102:25, 26 apply to Jehovah God, but the apostle Paul quotes them with reference to Jesus Christ. This is because God’s only-begotten Son was God’s personal Agent employed in creating the physical universe. Paul contrasts the Son’s permanence with that of the physical creation, which God, if he so designed, could ‘wrap up just as a cloak’ and set aside.—Heb 1:1, 2, 8, 10-12; compare 1Pe 2:3, ftn.
Various Poetic and Figurative Expressions. Because the physical heavens play a vital part in sustaining and prospering life on earth—by sunshine, rain, dew, refreshing winds, and other atmospheric benefits—they are spoken of poetically as Jehovah’s “good storehouse.” (De 28:11, 12; 33:13, 14) Jehovah opens its “doors” to bless his servants, as when causing manna, “the grain of heaven,” to descend upon the ground. (Ps 78:23, 24; Joh 6:31) The clouds are like “water jars” in the upper chambers of that storehouse, and the rain pours forth as by “sluices,” certain factors, such as mountains or even God’s miraculous intervention, causing water condensation and subsequent rainfall in specific regions. (Job 38:37; Jer 10:12, 13; 1Ki 18:41-45) On the other hand, the withdrawal of God’s blessing at times resulted in the heavens over the land of Canaan being “shut up,” becoming, in appearance, as hard and as nonporous as iron and having a copper-colored metallic brightness, with a dust-filled, rainless atmosphere.—Le 26:19; De 11:16, 17; 28:23, 24; 1Ki 8:35, 36.
This aids one in understanding the picture presented at Hosea 2:21-23. Having foretold the devastating results of Israel’s unfaithfulness, Jehovah now tells of the time of her restoration and the resulting blessings. In that day, he says, “I shall answer the heavens, and they, for their part, will answer the earth; and the earth, for its part, will answer the grain and the sweet wine and the oil; and they, for their part, will answer Jezreel.” Evidently this represents Israel’s petition for Jehovah’s blessing through the chain of things of Jehovah’s creation here named. For that reason these things are viewed as personified, hence, as if able to make a request, or petition. Israel asks for grain, wine, and oil; these products, in turn, seek their plant food and water from the earth; the earth, in order to supply this need, requires (or figuratively calls for) sun, rain, and dew from the heavens; and the heavens (till now “shut up” because of the withdrawal of God’s blessing) can respond only if God accepts the petition and restores his favor to the nation, thereby putting the productive cycle in motion. The prophecy gives the assurance that he will do so.
At 2 Samuel 22:8-15, David apparently uses the figure of a tremendous storm to represent the effect of God’s intervention on David’s behalf, freeing him from his enemies. The fierceness of this symbolic storm agitates the foundation of the heavens, and they ‘bend down’ with dark low-lying clouds. Compare the literal storm conditions described at Exodus 19:16-18; also the poetic expressions at Isaiah 64:1, 2.
Jehovah, “the Father of the celestial lights” (Jas 1:17), is frequently spoken of as having ‘stretched out the heavens,’ just as one would a tent cloth. (Ps 104:1, 2; Isa 45:12) The heavens, both the expanse of atmosphere by day and the starry heavens by night, have the appearance of an immense domed canopy from the standpoint of humans on earth. At Isaiah 40:22 the simile is that of stretching out “fine gauze,” rather than the coarser tent cloth. This expresses the delicate finery of such heavenly canopy. On a clear night the thousands of stars do, indeed, form a lacy web stretched over the black velvet background of space. It may also be noted that even the enormous galaxy known as the Via Lactea, or Milky Way, in which our solar system is located, has a filmy gauzelike appearance from earth’s viewpoint.
It can be seen from the foregoing that the context must always be considered in determining the sense of these figurative expressions. Thus, when Moses called on “the heavens and the earth” to serve as witnesses to the things that he declared to Israel, it is obvious that he did not mean the inanimate creation but, rather, the intelligent residents inhabiting the heavens and the earth. (De 4:25, 26; 30:19; compare Eph 1:9, 10; Php 2:9, 10; Re 13:6.) This is also true of the rejoicing by the heavens and earth over Babylon’s fall, at Jeremiah 51:48. (Compare Re 18:5; 19:1-3.) Likewise it must be the spiritual heavens that “trickle with righteousness,” as described at Isaiah 45:8. In other cases the literal heavens are meant but are figuratively described as rejoicing or shouting out loud. At Jehovah’s coming to judge the earth, as described at Psalm 96:11-13, the heavens, along with the earth, sea, and the field, take on a gladsome appearance. (Compare Isa 44:23.) The physical heavens also praise their Creator, in the same way that a beautifully designed product brings praise to the craftsman producing it. In effect, they speak of Jehovah’s power, wisdom, and majesty.—Ps 19:1-4; 69:34.
Ascension to Heaven. At 2 Kings 2:11, 12 the prophet Elijah is described as “ascending in the windstorm to the heavens.” The heavens here referred to are the atmospheric heavens in which windstorms occur, not the spiritual heavens of God’s presence. Elijah did not die at the time of such ascension, but he continued to live for a number of years after his heavenly transportation away from his successor Elisha. Nor did Elijah upon death ascend to the spiritual heavens, since Jesus, while on earth, clearly stated that “no man has ascended into heaven.” (Joh 3:13; see ELIJAH No. 1 (Elisha Succeeds Him).) At Pentecost, Peter likewise said of David that he “did not ascend to the heavens.” (Ac 2:34) In reality, there is nothing in the Scriptures to show that a heavenly hope was held out to God’s servants prior to the coming of Christ Jesus. Such hope first appears in Jesus’ expressions to his disciples (Mt 19:21, 23-28; Lu 12:32; Joh 14:2, 3) and was fully comprehended by them only after Pentecost of 33 C.E.—Ac 1:6-8; 2:1-4, 29-36; Ro 8:16, 17.
The Scriptures show that Christ Jesus was the first one to ascend from earth to the heavens of God’s presence. (1Co 15:20; Heb 9:24) By such ascension and his presentation of his ransom sacrifice there, he ‘opened the way’ for those who would follow—the spirit-begotten members of his congregation. (Joh 14:2, 3; Heb 6:19, 20; 10:19, 20) In their resurrection these must bear “the image of the heavenly one,” Christ Jesus, in order to ascend to the heavens of the spirit plane, for “flesh and blood” cannot inherit that heavenly Kingdom.—1Co 15:42-50.
How can persons in “heavenly places” still be on earth?
The apostle Paul in his letter to the Ephesians speaks of Christians then living on earth as though they were already enjoying a heavenly position, being raised up and “seated . . . together in the heavenly places in union with Christ Jesus.” (Eph 1:3; 2:6) The context shows that anointed Christians are so viewed by God because of his having ‘assigned them as heirs’ with his Son in the heavenly inheritance. While yet on earth, they have been exalted, or ‘lifted up,’ by such assignment. (Eph 1:11, 18-20; 2:4-7, 22) These points also shed light on the symbolic vision at Revelation 11:12. Likewise it provides a key for understanding the prophetic picture contained at Daniel 8:9-12, where what has previously been shown to represent a political power is spoken of as “getting greater all the way to the army of the heavens,” and even causing some of that army and of the stars to fall to the earth. At Daniel 12:3, those servants of God on earth at the foretold time of the end are spoken of as shining “like the stars to time indefinite.” Note, too, the symbolic use of stars in the book of Revelation, chapters 1 through 3, where the context shows that such “stars” refer to persons who are obviously living on earth and undergoing earthly experiences and temptations, these “stars” being responsible for congregations under their care.—Re 1:20; 2:1, 8, 12, 18; 3:1, 7, 14.
The way to heavenly life. The way to heavenly life involves more than just faith in Christ’s ransom sacrifice and works of faith in obedience to God’s instructions. The inspired writings of the apostles and disciples show that there must also be a calling and choosing of one by God through his Son. (2Ti 1:9, 10; Mt 22:14; 1Pe 2:9) This invitation involves a number of steps, or actions, taken to qualify such a one for the heavenly inheritance; many of such steps are taken by God, others by the one called. Among such steps, or actions, are the declaring righteous of the called Christian (Ro 3:23, 24, 28; 8:33, 34); bringing him forth (‘begetting him’) with holy spirit (Joh 1:12, 13; 3:3-6; Jas 1:18); his being baptized into Christ’s death (Ro 6:3, 4; Php 3:8-11); anointing him (2Co 1:21; 1Jo 2:20, 27); sanctifying him (Joh 17:17). The called one must maintain integrity until death (2Ti 2:11-13; Re 2:10), and after he has proved faithful in his calling and selection (Re 17:14), he is finally resurrected to spirit life.—Joh 6:39, 40; Ro 6:5; 1Co 15:42-49; see ANOINTED, ANOINTING; DECLARE RIGHTEOUS; RESURRECTION; SANCTIFICATION. | <urn:uuid:51142530-c71f-4403-b697-01847ccd2ffc> | CC-MAIN-2022-33 | https://wol.jw.org/en/wol/d/r1/lp-e/1200001949#h=43:0-44:0 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572304.13/warc/CC-MAIN-20220816120802-20220816150802-00497.warc.gz | en | 0.950854 | 8,163 | 3.328125 | 3 |
Noah Webster’s 1828 Dictionary
DISCOURTESY — DISENCUMBERED
DISCOURTESY, n. Discurtesy. [dis and courtesy.] Incivility; rudeness of behavior or language; ill manners; act of disrespect.
Be calm in arguing; for fierceness makes error a fault, and truth discourtesy.
DISCOURTSHIP, n. Want of respect.
DISCOUS, a. [L.] Broad; flat; wide; used of the middle plain and flat part of some flowers.
DISCOVER, v.t.
1. Literally, to uncover; to remove a covering. Isaiah 22:8.
2. To lay open to the view; to disclose; to show; to make visible; to expose to view something before unseen or concealed.
Go, draw aside the curtains and discover the several caskets to this noble prince.
He discovereth deep things out of darkness. Job 12:22.
Law can discover sin, but not remove.
3. To reveal; to make known.
We will discover ourselves to them. 1 Samuel 14:8.
Discover not a secret to another. Proverbs 25:9.
4. To espy; to have the first sight of; as, a man at mast-head discovered land.
When we had discovered Cyprus, we left it on the left hand. Acts 21:3.
5. To find out; to obtain the first knowledge of; to come to the knowledge of something sought or before unknown. Columbus discovered the variation of the magnetic needle. We often discover our mistakes, when too late to prevent their evil effects.
6. To detect; as, we discovered the artifice; the thief, finding himself discovered, attempted to escape.
Discover differs from invent. We discover what before existed, though to us unknown; we invent what did not before exist.
DISCOVERABLE, a.
1. That may be discovered; that may be brought to light, or exposed to view.
2. That may be seen; as, many minute animals are discoverable only by the help of the microscope.
3. That may be found out, or made known; as, the scriptures reveal many things not discoverable by the light of reason.
4. Apparent; visible; exposed to view.
Nothing discoverable in the lunar surface is ever covered.
DISCOVERED, pp. Uncovered; disclosed to view; laid open; revealed; espied or first seen; found out; detected.
DISCOVERER, n.
1. One who discovers; one who first sees or espies; one who finds out, or first comes to the knowledge of something.
2. A scout; an explorer.
DISCOVERING, ppr. Uncovering; disclosing to view; laying open; revealing; making known; espying; finding out; detecting.
DISCOVERTURE, n. A state of being released from coverture; freedom of a woman from the coverture of a husband.
DISCOVERY, n.
1. The action of disclosing to view, or bringing to light; as, by the discovery of a plot, the public peace is preserved.
2. Disclosure; a making known; as, a bankrupt is bound to make a full discovery of his estate and effects.
3. The action of finding something hidden; as the discovery of lead or silver in the earth.
4. The act of finding out, or coming to the knowledge of; as the discovery of truth; the discovery of magnetism.
5. The act of espying; first sight of; as the discovery of America by Columbus, or of the Continent by Cabot.
6. That which is discovered, found out or revealed; that which is first brought to light, seen or known. The properties of the magnet were an important discovery. Redemption from sin was a discovery beyond the power of human philosophy.
7. In dramatic poetry, the unraveling of a plot, or the manner of unfolding the plot or fable of a comedy or tragedy.
DISCREDIT, n. [See the Verb.]
1. Want of credit or good reputation; some degree of disgrace or reproach; disesteem; applied to persons or things. Frauds in manufactures bring them into discredit.
It is the duty of every Christian to be concerned for the reputation or discredit his life may bring on his profession.
2. Want of belief, trust or confidence; disbelief; as, later accounts have brought the story into discredit.
DISCREDIT, v.t.
1. To disbelieve; to give no credit to; not to credit or believe; as, the report is discredited.
2. To deprive of credit or good reputation; to make less reputable or honorable; to bring into disesteem; to bring into some degree of disgrace, or into disrepute.
He least discredits his travels, who returns the same man he went.
Our virtues will be often discredited with the appearance of evil.
3. To deprive of credibility.
DISCREDITABLE, a. Tending to injure credit; injurious to reputation; disgraceful; disreputable.
DISCREDITED, pp. Disbelieved; brought into disrepute; disgraced.
DISCREDITING, ppr. Disbelieving; not trusting to; depriving of credit; disgracing.
DISCREET, a. [L., Gr. It is sometimes written discrete; the distinction between discreet and discrete is arbitrary, but perhaps not entirely useless. The literal sense is, separate, reserved, wary, hence discerning.]
1. Prudent; wise in avoiding errors or evil, and in selecting the best means to accomplish a purpose; circumspect; cautious; wary; not rash.
It is the discreet man, not the witty, nor the learned, nor the brave, who guides the conversation, and gives measures to society.
Let Pharaoh look out a man discreet and wise. Genesis 41:33.
DISCREETLY, adv. Prudently; circumspectly; cautiously; with nice judgment of what is best to be done or omitted.
DISCREETNESS, n. The quality of being discreet; discretion.
DISCREPANCE, DISCREPANCY, n. [L., to give a different sound, to vary, to jar; to creak. See Crepitate.] Difference; disagreement; contrariety; applicable to facts or opinions.
There is no real discrepancy between these two genealogies.
DISCREPANT, a. Different; disagreeing; contrary.
DISCRETE, a.
1. Separate; distinct; disjunct. Discrete proportion is when the ratio of two or more pairs of numbers or quantities is the same, but there is not the same proportion between all the numbers; as 3:6::8:16, 3 bearing the same proportion to 6, as 8 does to 16. But 3 is not to 6 as 6 is to 8. It is thus opposed to continued or continual proportion, as 3:6::12:24.
2. Disjunctive; as, I resign my life, but not my honor, is a discrete proposition.
DISCRETE, v.t. To separate; to discontinue. [Not used.]
DISCRETION, n. [L, a separating. See Discreet.]
1. Prudence, or knowledge and prudence; that discernment which enables a person to judge critically of what is correct and proper, united with caution; nice discernment and judgment, directed by circumspection, and primarily regarding one's own conduct.
A good man--will guide his affairs with discretion. Psalm 112:5.
My son, keep sound wisdom and discretion. Proverbs 3:21.
2. Liberty or power of acting without other control than one's own judgment; as, the management of affairs was left to the discretion of the prince; he is left to his own discretion. Hence,
To surrender at discretion, is to surrender without stipulation or terms, and commit one's self entirely to the power of the conqueror.
3. Disjunction; separation. [Not much used.]
DISCRETIONARY, DISCRETIONAL, a. Left to discretion; unrestrained except by discretion or judgment; that is to be directed or managed by discretion only. Thus, the President of the United States is, in certain cases, invested with discretionary powers, to act according to circumstances.
DISCRETIONARILY, DISCRETIONALLY, adv. At discretion; according to discretion.
DISCRETIVE, a. [See Discreet and Discrete.]
1. Disjunctive; noting separation or opposition. In logic, a discretive proposition expresses some distinction, opposition or variety, by means of but, though, yet, etc.; as, travelers change their climate, but not their temper; Job was patient, though his grief was great.
2. In grammar, discretive distinctions are such as imply opposition or difference; as, not a man, but a beast.
3. Separate; distinct.
DISCRETIVELY, adv. In a discretive manner.
DISCRIMINABLE, a. That may be discriminated.
DISCRIMINATE, v.t. [L., difference, distinction; differently applied; Gr., L.]
1. To distinguish; to observe the difference between; as, we may usually discriminate true from false modesty.
2. To separate; to select from others; to make a distinction between; as, in the last judgment, the righteous will be discriminated from the wicked.
3. To mark with notes of difference; to distinguish by some note or mark. We discriminate animals by names, as nature has discriminated them by different shapes and habits.
DISCRIMINATE, v.i.
1. To make a difference or distinction; as, in the application of law, and the punishment of crimes, the judge should discriminate between degrees of guilt.
2. To observe or note a difference; to distinguish; as, in judging of evidence, we should be careful to discriminate between probability and slight presumption.
DISCRIMINATE, a. Distinguished; having the difference marked.
DISCRIMINATED, pp. Separated; distinguished.
DISCRIMINATELY, adv. Distinctly; with minute distinction; particularly.
DISCRIMINATENESS, n. Distinctness; marked difference.
DISCRIMINATING, ppr.
1. Separating; distinguishing; marking with notes of difference.
2. a. Distinguishing; peculiar; characterized by peculiar differences; as the discriminating doctrines of the gospel.
3. a. That discriminates; able to make nice distinctions; as a discriminating mind.
DISCRIMINATION, n.
1. The act of distinguishing; the act of making or observing a difference; distinction; as the discrimination between right and wrong.
2. The state of being distinguished.
3. Mark of distinction.
DISCRIMINATIVE, a.
1. That makes the mark of distinction; that constitutes the mark of difference; characteristic; as the discriminative features of men.
2. That observes distinction; as discriminative providence.
DISCRIMINATIVELY, adv. With discrimination or distinction.
DISCRIMINOUS, a. Hazardous. [Not used.]
DISCUBITORY, a. [L., to lie down or lean.] Leaning; inclining; or fitted to a leaning posture.
DISCULPATE, v.t. [L., a fault.] To free from blame or fault; to exculpate; to excuse.
Neither does this effect of the independence of nations disculpate the author of an unjust war.
DISCULPATED, pp. Cleared from blame; exculpated.
DISCULPATING, ppr. Freeing from blame; excusing.
DISCUMBENCY, n. [L. See Discubitory.] The act of leaning at meat, according to the manner of the ancients.
DISCUMBER, v.t. [dis and cumber.] To unburden; to throw off any thing cumbersome; to disengage from any troublesome weight, or impediment; to disencumber. [The latter is generally used.]
DISCURE, v.t. To discover; to reveal. [Not used.]
DISCURRENT, a. Not current. [Not used.]
DISCURSION, n. [L., to run.] A running or rambling about.
DISCURSIST, n. [See Discourse.] A disputer. [Not in use.]
DISCURSIVE, a. [L., supra.]
1. Moving or roving about; desultory.
2. Argumentative; reasoning; proceeding regularly from premises to consequences; sometimes written discoursive. Whether brutes have a kind of discursive faculty.
DISCURSIVELY, adv. Argumentatively; in the form of reasoning or argument.
DISCURSIVENESS, n. Range or gradation of argument.
DISCURSORY, a. Argumental; rational.
DISCUS, n. [L.]
1. A quoit; a piece of iron, copper or stone, to be thrown in play; used by the ancients.
2. In botany, the middle plain part of a radiated compound flower, generally consisting of small florets, with a hollow regular petal, as in the marigold and daisy.
3. The face or surface of the sun or moon. [See Disk.]
DISCUSS, v.t. [L.] Literally, to drive; to beat or to shake in pieces; to separate by beating or shaking.
1. To disperse; to scatter; to dissolve; to repel; as, to discuss a tumor; a medical use of the word.
2. To debate; to agitate by argument; to clear of objections and difficulties, with a view to find or illustrate truth; to sift; to examine by disputation; to ventilate; to reason on, for the purpose of separating truth from falsehood. We discuss a subject, a point, a problem, a question, the propriety, expedience or justice of a measure, etc.
3. To break in pieces. [The primary sense, but not used.]
4. To shake off. [Not in use.]
DISCUSSED, pp. Dispersed; dissipated; debated; agitated; argued.
DISCUSSER, n. One who discusses; one who sifts or examines.
DISCUSSING, ppr. Dispersing; resolving; scattering; debating; agitating; examining by argument.
DISCUSSING, n. Discussion; examination.
DISCUSSION, n.
1. In surgery, resolution; the dispersion of a tumor or any coagulated matter.
2. Debate; disquisition; the agitation of a point or subject with a view to elicit truth; the treating of a subject by argument, to clear it of difficulties, and separate truth from falsehood.
DISCUSSIVE, a. Having the power to discuss, resolve or disperse tumors or coagulated matter.
DISCUSSIVE, n. A medicine that discusses; a discutient.
DISCUTIENT, a. [L.] Discussing; dispersing morbid matter.
DISCUTIENT, n. A medicine or application which disperses a tumor or any coagulated fluid in the body; sometimes it is equivalent to carminative.
DISDAIN, v.t. [L., to think worthy; worthy. See Dignity.] To think unworthy; to deem worthless; to consider to be unworthy of notice, care, regard, esteem, or unworthy of ones character; to scorn; to contemn. The man of elevated mind disdains a mean action; he disdains the society of profligate, worthless men; he disdains to corrupt the innocent, or insult the weak. Goliath disdained David.
Whose fathers I would have disdained to set with the dogs of my flock. Job 30:1.
DISDAIN, n. Contempt; scorn; a passion excited in noble minds, by the hatred or detestation of what is mean and dishonorable, and implying a consciousness of superiority of mind, or a supposed superiority of mind, or a supposed superiority. In ignoble minds, disdain may spring from unwarrantable pride or haughtiness, and be directed toward objects of worth. It implies hatred, and sometimes anger.
How my soul is moved with just disdain.
DISDAINED, pp. Despised; contemned; scorned.
1. Full of disdain; as disdainful soul.
2. Expressing disdain; as a disdainful look.
3. Contemptuous; scornful; haughty; indignant.
DISDAINFULLY, adv. Contemptuously; with scorn; in a haughty manner.
DISDAINFULNESS, n. Contempt; contemptuousness; haughty scorn.
DISDAINING, ppr. Contemning; scorning.
DISDAINING, n. Contempt; scorn.
DISDIACLASTIC, a. An epithet given by Bartholine and others to a substance supposed to be crystal, but which is a fine pellucid spar, called also Iceland crystal, and by Dr. Hill, from its shape, parallelopipedum.
DISDIAPASON, BISDIAPASON, n. [See Diapason.] In music, a compound concord in the quadruple ratio of 4:1 or 8:2.
Disdiapason diapente, a cocord in a sectuple ratio of 1:6.
Disdiapason semi-diapente, a compound concord in the proportion of 16:3.
Disdiapason ditone, a compound consontance in the proportion of 10:2.
Disdiapason semi-ditone, a compound concord in the proportion of 24:5.
DISEASE, n. Dizeze. [dis and ease.]
1. In its primary sense, pain, uneasiness, distress, and so used by Spenser; but in this sense, obsolete.
2. The cause of pain or uneasiness; distemper; malady; sickness; disorder; any state of a living body in which the natural functions of the organs are interrupted or disturbed, either by defective or preternatural action, without a disrupture of parts by violence, which is called a wound. The first effect of disease is uneasiness or pain, and the ultimate effect is death. A disease may affect the whole body, or a particular limb or part of the body. We say a diseased limb; a disease in the head or stomach; and such partial affection of the body is called a local or topical disease. The word is also applied to the disorders of other animals, as well as to those of man; and to any derangement of the vegetative functions of plants.
The shafts of disease shoot across our path in such a variety of courses, that the atmosphere of human life is darkened by their number, and the escape of an individual becomes almost miraculous.
3. A disordered state of the mind or intellect, by which the reason is impaired.
4. In society, vice; corrupt state of morals. Vices are called moral diseases.
A wise man converses with the wicked, as a physician with the sick, not to catch the disease, but to cure it.
5. Political or civil disorder, or vices in a state; any practice which tends to disturb the peace of society, or impede or prevent the regular administration of government.
The instability, injustice and confusion introduced into the public councils have, in truth, been the mortal diseases under which popular governments have every where perished.
DISEASE, v.t. dizeze.
1. To interrupt or impair any or all the natural and regular functions of the several organs of a living body; to afflict with pain or sickness to make morbid; used chiefly in the passive participle, as a diseased body, a diseased stomach; but diseased may here be considered as an adjective.
2. To interrupt or render imperfect the regular functions of the brain, or of the intellect; to disorder; to derange.
3. To infect; to communicate disease to, by contagion.
4. To pain; to make uneasy.
DISEASED, pp. or a. Dizezed. Disordered; distempered; sick.
DISEASEDNESS, n. Dizezedness. The state of being diseased; a morbid state; sickness.
DISEASEFUL, a. Dizezeful.
1. Abounding with disease; producing diseases; as diseaseful climate.
2. Occasioning uneasiness.
DISEASEMENT, n. Dizezement. Uneasiness; inconvenience.
DISEDGED, a. [dis and edge.] Blunted; made dull.
DISEMBARK, v.t. [dis and embark.] To land; to debark; to remove from on board a ship to the land; to put on shore; applied particularly to the landing of troops and military apparatus; as, the general disembarked the troops at sun-rise.
DISEMBARK, v.i. To land; to debark; to quit a ship for residence or action on shore; as, the light infantry and calvary disembarked, and marched to meet the enemy.
DISEMBARKED, pp. Landed; put on shore.
DISEMBARKING, ppr. Landing; removing from on board a ship to land.
DISEMBARKMENT, n. The act of disembarking.
DISEMBARRASS, v.t. [dis and embarrass.] To free from embarrassment or perplexity; to clear; to extricate.
DISEMBARRASSED, pp. Freed from embarrassment; extricated from difficulty.
DISEMBARRASSING, ppr. Freeing from embarrassment or perplexity; extricating.
DISEMBARRASSMENT, n. The act of extricating from perplexity.
DISEMBAY, v.t. To clear from a bay.
DISEMBITTER, v.t. [dis and embitter.] To free from bitterness; to clear from acrimony; to render sweet or pleasant.
DISEMBODIED, a. [dis and embodied.]
1. Divested of the body; as disembodied spirits or souls.
2. Separated; discharged from keeping in a body.
1. To divest of body; to free from flesh.
2. To discharge from military array.
DISEMBOGUE, v.t. [See Voice.] To pour out or discharge at the mouth, as a stream; to vent; to discharge into the ocean or a lake.
Rolling down, the steep Timavus raves, and through nine channels disembogues his waves.
1. To flow out at the mouth, as a river; to discharge waters into the ocean, or into a lake. Innumerable rivers disembogue into the ocean.
2. To pass out of a gulf or bay.
DISEMBOGUEMENT, n. Discharge of waters into the ocean or a lake.
DISEMBOSOM, v.t. To separate from the bosom.
DISEMBOWEL, v.t. [dis and embowel.] To take out the bowels; to take or draw from the bowels, as the web of a spider.
DISEMBOWELED, pp. Taken or drawn from the bowels.
DISEMBOWELING, ppr. Taking or drawing from the bowels.
DISEMBRANGLE, v.t. To free from litigation. [Not used.]
DISEMBROIL, v.t. [dis and embroil.] To disentangle; to free from perplexity; to extricate from confusion.
DISEMBROILED, pp. Disentangled; cleared from perplexity or confusion.
DISEMBROILING, ppr. Disentangling; freeing from confusion.
DISENABLE, v.t. [dis and enable.] To deprive of power, natural or moral; to disable; to deprive of ability or means. A man may be disenabled to walk by lameness; and by poverty he is disenabled to support his family.
DISENABLED, pp. Deprived of power, ability or means.
DISENABLING, ppr. Depriving of power, ability or means.
DISENCHANT, v.t. [dis and enchant.] To free from enchantment; to deliver from the power of charms or spells.
Haste to thy work; a noble stroke or two ends all the charms, and disenchants the grove.
DISENCHANTED, pp. Delivered from enchantment, or the power of charms.
DISENCHANTING, ppr. Freeing from enchantment, or the influence of charms.
DISENCUMBER, v.t. [dis and encumber.]
1. To free from encumbrance; to deliver from clogs and impediments; to disburden; as, to disencumber troops of their baggage; to disencumber the soul of its body of clay; to disencumber the mind of its cares and griefs.
2. To free from any obstruction; to free from any thing heavy or unnecessary; as a disencumbered building. | <urn:uuid:74d16525-72f2-406f-9581-842b7c9c5263> | CC-MAIN-2022-33 | https://m.egwwritings.org/en/book/1843.536526#36605 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572908.71/warc/CC-MAIN-20220817122626-20220817152626-00096.warc.gz | en | 0.892407 | 5,538 | 3.0625 | 3 |
The tourism industry in Greenland is not focused on mass tourism but rather on adventure tourism, which poses interesting design challenges. How do we design for the few, in a sustainable way, in Greenland’s fragile nature?
Tourists from all over the world want to experience the raw and untouched nature of the Arctic. As a result, Greenland has started preparing for rapid and significant growth in tourism in the coming years, partly by upgrading the flight infrastructure with the addition of two new airports. Currently relying on two former US military airports for international flights, Kangerlussuaq and Narsarsuaq, Greenland is constructing two new modern airports: one in Ilulissat in the north of Greenland and another in Nuuk, the capital city. A third regional airport is being planned in Qaqortoq in the south of Greenland.
One thing is to get tourists safe and sound to Greenland, but another is to prepare an entire industry. To accommodate the tourism growth, there is a need to further develop infrastructure and services in the sector, such as accommodation, restaurants, and guided tours introducing visitors to the magnificent scenery, culture, and history. All of this must be developed in close collaboration with the local communities in Greenland, with the utmost respect for the country’s cultural and natural heritage.
Greenland is the world’s largest island, with 2,166,086 km2 of land. The Greenland ice sheet is the largest in the northern hemisphere, covering around 80% of the country. Greenland’s coastline is an impressive 44,000km. There are 18 cities in Greenland and 58 hamlets, all located along the coast. The total population is approximately 56,500 people. Around 19,000 people live in Nuuk, the capital, whereas some of the smallest hamlets, such as Isertoq near Tasiilaq on the east coast, have less than 60 inhabitants. Greenland is truly a land of contrast.
Greenland is home to the world’s largest national park, covering an area of 972,000 m2. It includes the entire northeastern part of Greenland, north of Ittoqqortoormiit, including the world’s northernmost land area. Its coastline extends over 18,000 km.
However, the Northeast Greenland National Park is somewhat different from most national parks in other countries. The only people who have access to the area on a regular basis are local hunters and fishermen from Ittoqqortoormiit, and there are no organised tours for visitors and tourists. Apart from the personnel at meteorological and monitoring stations and the Danish Sirius Patrol, around 40 people in total, no one lives in the national park.
Infrastructure in the Arctic is a challenge, and Greenland is no exception. The distances are vast, nature is rough, and the weather can be extreme.
There are no roads between the inhabited places, so all traffic is by helicopter, aeroplane, boat or skidoos, and dog sleighs during winter. All the cities and hamlets therefore operate like small, isolated islands.
Travelling to and within Greenland can therefore be a challenge. The international airports in Greenland are old American Air Force bases, placed in isolated locations for strategic reasons. They are not placed near any of the major settlements, meaning that most travellers must also fly domestically to reach their destination.
But things are changing. The Greenlandic Government is planning three new airports in the regional cities of Ilulissat in the north, Nuuk in the middle, and Qaqortoq in the south. The airport in Nuuk is set to open in 2024, the one in Ilulissat the year after, while construction of the regional airport in Qaqortoq commences in summer 2022. These new airports will change the way people travel to Greenland and transform domestic travel in the country.
In the coming years, the cities will have to upgrade local infrastructure, build accommodation, and familiarise people with the tourist industry. This preparation phase should be seen as an opportunity to develop design strategies with a local touch. It provides an opportunity to offer future guests a unique Greenlandic design experience; of infrastructure, architecture and carefully executed design solutions in nature, such as boardwalks, information signs, and simple harbour solutions in the fjords of Greenland.
With 5,700 inhabitants, Sisimiut is the second-largest city in Greenland. Located only 200 km from the international airport in Kangerlussuaq, the city is not part of the current airport expansion plans. To improve connectivity, the municipality of Qeqqata has therefore initiated an ambitious road project, the Arctic Circle Road, connecting Sisimiut and Kangerlussuaq.
As the infrastructure is changing with new international airports in Ilulissat and Nuuk, Greenland’s second-largest city, Sisimiut (Municipality of Qeqqata), is taking matters into its own hands. The municipality has started the construction of a road connecting Sisimiut with Kangerlussuaq and Kangerlussuaq Airport.
Kangerlussuaq airport is a vital infrastructure hub in Greenland. This is where Air Greenland connects the country with the outside world – that is, until the new airports open in Ilulissat and Nuuk.
A road connection between Kangerlussuaq and Sisimiut has been discussed since the early 1960s and is now finally taking shape. The project’s first step is to create an ATV track connecting the two towns, which will later be upgraded to a dirt road. Promoting tourism and creating new experiences along the new route is an important priority, including in the Aasivissuit-Nipissat area, which was listed as a UNESCO World Heritage Site in 2018.
Sisimiut is known for its spectacular nature. It offers off-piste skiing and snowmobiling and hosts the famous 160-kilometre cross-country race, “The Arctic Circle Race”. During summer, the area provides sublime trekking, ATV experiences and fishing, and visitors can explore the unique cultural landscape of the UNESCO World Heritage Site: Inuit hunting grounds between ice and sea.
Arctic Circle Road is a game-changer for tourism for three main reasons:
1. It provides low-cost, flexible, and independent transport between the two cities.
2. It provides access to a massive land area that was previously out of reach.
3. It secures the basis for hotel investments in Kangerlussuaq, which has had difficulties attracting private investors since the closure of the US military base.
This ambitious project is an opportunity to create new Greenlandic designs for roads, shelters, viewpoints, hotels etc. For this to happen, the project should start with a design strategy, which could later form the basis for more detailed guidelines and design manuals.
There are three UNESCO World Heritage sites in Greenland:
- Kujataa – Norse and Inuit Farming at the edge of the ice cap
- Aasivissuit-Nipisat – Inuit hunting grounds between ice and sea
- Ilulissat Kangia – the Ilulissat icefjord
The Ilulissat icefjord was the first site in Greenland to be inscribed on the UNESCO World Heritage List in 2004. The National Museum of Greenland and the municipalities where the sites are located are responsible for managing the UNESCO World Heritage Sites. All development projects within or close to a UNESCO-listed area must be assessed and approved by the National Museum and the municipality at hand, making sure that the project complies with the protection of the cultural heritage site.
The Ilulissat icefjord. The Ilulissat glacier is the fastest moving in the world, and the area is known for its colossal icebergs.
In 2018, the “Aasivissuit – Nipisat” was inscribed on the UNESCO World Heritage List. The area covers a traditional Inuit hunting ground, from the icecap near the city of Kangerlussuaq to the coastal town of Sisimiut. The history of this stretch of landscape goes back more than 4000 years.
Kujataa, Norse and Inuit Farming. This area shows the Norse and Inuit farming culture. The old ruins and structures go back more than 1000 years.
The Greenlandic Government has approved several cultural heritage protection laws and executive orders to promote and protect the future of these critical sites. These include the Heritage Protection Act on Cultural Heritage Protection and Conservation from 2010, an Executive Order on Cultural Heritage Protection (2016), the Museum Act (2015), and the Planning Act (2010).
The Heritage Protection Act protects ancient monuments, historic buildings, and historical areas. Laws addressing nature and landscape protection include the Nature Protection Act (2003) and the Acts on Preservation of Natural Amenities, Environmental Protection and Catchment and Hunting.
All physical changes to the UNESCO World Heritage Sites, like paths, roads, structures, and buildings must undergo a thorough approval process with the National Museum and the municipality in which the site is located. Furthermore, all mining activities are subject to strict legal requirements, and any disturbance or demolition of the sites is punishable by law.
The planning law in Greenland is unique, as there is no private ownership of land. Someone building a house will not own the building plot on which the house is built but is merely granted the right to build there. The house itself, however, can be privately owned.
When the municipalities in Greenland plan for new housing areas, they determine what type of public services and buildings are required, such as schools, kindergartens, and public housing. They can also plan for private functions, such as privately owned houses, buildings, and shops. People who wish to build a house must wait for the municipality to advertise available building plots, and only those who meet specific requirements can apply. If there is more than one applicant, a lot is drawn to determine who gets the building plot.
There are two levels of planning in Greenland:
- Overall physical and regional planning is the responsibility of the Greenland Home Rule Government
- Municipal and city planning is a municipal responsibility
Greenland comprises five municipalities: Avannaata, Kujalleq, Qeqertalik, Qeqqata, and Sermersooq. In addition, there are city boundaries and hamlet boundaries, while the remaining land is referred to as “the open land”, which is also a municipal responsibility.
Greenland has always been an attractive destination for explorers. Greenland expeditions started in the late 1800s as polar expeditions aiming to map various areas and study their geology, wildlife, fauna, climatic conditions, or traces of human presence. Today, Greenland expeditions are carried out mainly for the sake of adventure.
Knud Rasmussen (1879-1933) is arguably the most famous Polar explorer. Born in Ilulissat and the son of a Danish missionary, Knud Rasmussen did several expeditions in Greenland. The best known is the fifth Thule expedition in 1921–1924, which visited all the existing northern Inuit tribes. 2021 is the 100th anniversary of the 18,000 km polar expedition, covering Greenland, Canada, Alaska, and Siberia.
Tourism is the second biggest export in Greenland after the fishing industry. Before the Covid-pandemic hit in 2020, tourism was a prosperous business, and it will be so again.
Between 2015 and 2019, the number of international tourists in Greenland increased by 36.3%. Foreign overnight stays increased by 34% in the period, international flight passengers by 12%, and cruise line passengers by 86.2%
The National Tourism Strategy 2021–2023 is published by the Department of Commerce on behalf of the Greenland Home Rule Government. The overall objective of the strategy is to ensure that Greenland will be ready for the growth in tourism when the new airports open by the end of 2023. The key focus is to ensure that the new and existing airports create the best possible framework for the Greenland tourism sector.
The strategy has three focus areas:
Firstly, ensuring sufficient capacity to receive tourists (hotels, restaurants, tours etc.).
Second, ensuring a varied offering of unique tours and experiences. And thirdly, education and knowledge-building in tourism. One could argue that these three focus areas may not be sufficient. Alongside the commercial development of tourism in Greenland, a discussion must take place about how to design future accommodation, paths and tracks in nature, signage, and so forth. What is the urban Greenland we want to welcome the world with? How do we present the untouched nature of Greenland? These are some of the questions that must be addressed in the coming years.
Visit Greenland is an independent tourism agency, established and financed by the Greenland Home Rule Government.
Visit Greenland’s strategy for 2021-2024 defines four must-wins towards 2024
1. Increased demand from adventure tourists
2. All-year-round tourism in all of Greenland
3. Knowledge sharing and competence upgrade
4. Promoting favourable framework conditions
The all-inclusive concept includes all the essentials in the booking price. Besides accommodation, you can expect food, drinks, activities, and entertainment to be included, without having to pay extra.
Adventure tourism is defined as people exploring remote areas outside their comfort zone. This type of tourism is on the rise all over the world, and Greenland is no exception. As a result, adventure tourists are an increasingly important target group for tourism operators in Greenland.
Throughout Greenland’s different development stages, the architecture has changed from primitive tent structures using driftwood, sealskin, and other available materials to more permanent turf houses and houses built from rocks. These houses were built for one or more families. The permanent structures indicated a change in the nomadic Inuit lifestyle, and settlements started to emerge.
In 1952, the first housing benefit scheme was launched in Greenland, making it possible to get a loan to cover the cost of building materials. In most cases, however, people still had to build the houses themselves as skilled carpenters were hard to find outside the bigger cities and settlements. Through the scheme, the Ministry of Greenland made it more attractive to build in the bigger towns and cities, close to the main institutions. The housing benefit scheme evolved into the characteristic Greenlandic type houses, which can be seen along the coast all over Greenland.
All the houses were designed by the Greenlandic Commission’s Board of Architecture’s Office.
In the 1960s and 1970s, Greenland was influenced by modernism like the rest of the world. As a result, larger housing complexes became more common, especially in the larger cities.
In the 1980s and 1990s, 2-3 story wooden buildings became popular. The scale became more humane, and the small windows were a testament to an environmental approach and a focus on reducing energy consumption.
In the 2000s and 2010s, the architecture in Greenland can be described as a “fast architecture”, designed mainly to accommodate contractors and their economy. This was an unfortunate development, but again, things are changing for the better, bringing more focus to the quality of the architecture. For instance, the municipality of Sermersooq has approved an architectural policy – the first ever in Greenland.
There is no forest production in Greenland, so all building materials are shipped from abroad, mainly from Denmark. The only building material produced in Greenland is cement and concrete, which is not very sustainable. On top of this challenge, the climate is challenging, and the optimal time window for construction is short. Snow and sub-zero temperatures are a challenge on the construction site. Therefore, planning and precision are crucial aspects of the building process in Greenland, as any obstacles tend to increase costs.
With these obstacles in mind, and a tourism industry focusing on adventure tourism rather than mass tourism, minimalistic and temporary structures might be the way to go for Greenland when it comes to design in nature.
The municipality of Sermersooq has approved an architectural policy, the first of its kind in Greenland. The strategy is an important step to address the future of architecture and design in modern Greenland.
The policy describes four overall objectives, each addressing a different aspect of architecture in Sermersooq.
1. Identity-building architecture
The identity of the individual, the place, and the community.
2. Housing and quality of life
A house and a home must give something back and inspire.
3. Urban space and life in the city/hamlet
When designing the spaces between buildings, creating meeting places for the people, one must focus on materials and comfort.
4. Time and place
All design should involve using appropriate technology and beware of the context and surroundings. As an example, using high-end materials in a small hamlet might seem out of place.
Through their initiative Sermersooq Municipality is taking the lead in Greenland by defining the community’s architecture and design policy. This is an essential and welcomed contribution to the debate about how Greenland should design for tourism. Through their initiative Sermersooq Municipality is taking the lead in Greenland by defining the community’s architecture and design policy. This is an essential and welcomed contribution to the debate about how Greenland should design for tourism.
The population in most Greenlandic villages is decreasing – globalisation and urbanisation are also an issue in the remote Arctic areas. It will be interesting to see if tourism can create a new foundation for some of the small local communities in Greenland. Efforts are already underway to develop the tourism offering across the country.
In Ilulissat, the Ilimanaq Lodge opened in 2017. It consists of fifteen huts and a restaurant in a restored historical building – all in the village of Ilimanaq. Visitors get a unique nature experience and experience the culture and everyday life in an active Greenlandic village.
The Nuuk Icefiord Lodge is another example of a newly developed tourism concept located near the village of Kapisillit, 75 km from Nuuk.
Situated 3 km from the village of Kapisillit, the Lodge consists of 50 lodges and a restaurant overlooking the sea. A dirt road connecting the lodge with the village is part of the project.
Kapisillit only has 53 inhabitants, so the Lodge will accommodate more people than the entire population of the village. With full occupancy of the cabins, the village expects a significant increase in traffic and visitor numbers. According to the project owner, the cabins will create more life in the village, which is well in line with the municipality’s vision of developing Kapisillit as an attractive tourism destination and a green and sustainable settlement.
The architecture is modern and differs from the area’s architectural history. As Kapisillit is known for its small wooden houses with classic saddle roofs.
To strengthen tourism in Greenland, the Government is working on establishing several visitor centres throughout the country. The idea behind the project is to promote a strong, professional, and self-sustaining tourism sector to benefit a sustainable Greenlandic society. The visitor centres are expected to help spread the socio-economic benefits of tourism and ensure that the benefits from local tourism activities stay in the local communities.
Ilulissat Icefjord Centre provides knowledge about the ice and its significance for life and people in Greenland. The ice is a rich source of information about variations in temperature, climate change, and the effect of the ice on nature and the cycle of nature. As the first official visitor centre in Greenland, The Ilulissat Icefjord Centre opened in July 2021.
Nordboporten in Qaqortoq tells the story of the Vikings who settled in Greenland more than 1,000 years ago. Southern Greenland has hundreds of sites with ruins and remains from the time of the Norse settlers. The Arctic Farmers Visitor Center provides insights into life in Southern Greenland, from the Norse settlements and their disappearance until today, when warmer temperatures enable more crops to be grown and harvested in the area.
The Polar House in Tasiilaq tells the story of the East Greenlandic culture and the history of the great Arctic expeditions. The culture in East Greenland is formed by the harsh nature and inaccessible landscapes but also by the area’s rich wildlife and the proud Inuit hunting culture.
Nuuk Nature and Geopark tells the story of the earth’s formation and the first life on earth. The first colonial settlement in Greenland was in Nuuk, and the 10,000 km2 Nuuk Fjord system is home to spectacular wildlife both on land, sea, and in the air. The earliest signs of life on earth, 3,800 million year old microbes, are found in the Nuuk Fjord.
The Inuit Hunting Ground Centre in Sisimiut tells the story of the earliest migrations to Greenland, the traditional Inuit lifeforms, and hunting culture in the more ice-free tundra of Western Central Greenland. The area was declared a UNESCO World Heritage in 2018.
The visitor centres are linked to the three existing UNESCO World Heritage Sites in Greenland and sites that could potentially be added to the World Heritage List in the coming years.
Greenlandic architecture is inspired by Danish and Nordic architecture. Some of the most important buildings in Greenland, like Katuaq – The Culture House in Nuuk and the Ilulissat Icefjord Centre are designed by Danish architects. The buildings are functional and beautiful, but some find it unfortunate that they are not designed by local architects.
The new visitor centres will represent Greenland and its architecture for many years to come, so one could argue that the design should be in the hands of local architects and designers. These ambitious projects represent an opportunity to challenge what Greenland architecture is and should be in the future – from a local perspective. The future calls for further studies and application of local knowledge in terms of design in nature. Hence, a global reach out of Greenland should engage and empower the local design community and local inhabitants in addressing their competencies and sensibilities in dialogue with global expertise and know-how. | <urn:uuid:f9b7e3f8-ce35-4a74-b6bb-5940f803c71a> | CC-MAIN-2022-33 | https://natnorth.is/visions/greenland | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00095.warc.gz | en | 0.945648 | 4,586 | 2.625 | 3 |
The galaxy distribution includes a substantial number of bound systems dominated by two galaxies (hereinafter "pairs"). These simplest systems of galaxies are excellent laboratories for studying galaxy masses (since we understand two-body dynamics better than those in clusters, and since the range of separations in pairs extends much farther than individual galaxy rotation curves), and for study of the effects of perturbations on galaxies. Overall references on this topic include the proceedings of IAU Colloquium 124 and the book Double Galaxies by Karachentsev (in Russian; there is an English version hosted by NED).
Recognizing pairs: There are special difficulties in recognizing complete sets of pairs. Various criteria have been considered, relying on either projected separation or including radial-velocity information. Early works simply used a diameter vs. separation criterion. More recent ones have incorporated isolation criteria, as illustrated in this diagram following Fig. 1 of Karachentsev's book. In this instance, the ratio of distance to the nearest galaxy (with angular diameter greater than some fraction like 1/2 of the smaller pair member) must satisfy X1i/X12 > 5 ai/a1.
The particular cutoff ratios among the relevant distances are fixed either to give some "reasonable" number of pairs, or by rough physical arguments from the ratios of expected tidal influences if mass follows light. At most, such a criterion is statistically applicable. Van Albada, for example (see the dissertation from Groningen by Soares 1990) has used the local surface density of comparably bright galaxies to assess the probability that a candidate companion is physical, while Karachentsev has included radial-velocity information; more recent redshift surveys have allowed pairs in (position, velocity) space to be identified at separations close to 1 Mpc (Charlton and Salpeter 1991 ApJ 375, 517). However, a basic problem is that the velocity dispersions of groups are comparable to the relative velocities in pairs; that is, we have no way to distinguish true two-galaxy systems from sets of group members which we happen to view as close together. Dynamical distortions provide evidence of pairing, but the converse is not true - pairs will not necessarily produce distortions at a particular time, depending on mass ratio and orbital parameters. It is possible for a pair to look undistorted in typical optical images but show large tidal features in H I, because the atomic gas often starts in a more extended disk (see examples in John Hibbard's H I Rogues' Gallery).
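To make the selection procedure concrete, here is a minimal Python sketch of a Karachentsev-style isolation test; the function name, its arguments, and the example numbers are hypothetical (they are not taken from any published catalog's machinery), and only the factor of 5 and the comparable-diameter cut follow the criterion quoted above.

```python
# A minimal sketch of a Karachentsev-style isolation test for a candidate pair.
# Separations x and diameters a are in the same angular units (say arcmin).
def is_isolated_pair(x12, a1, a2, neighbors, factor=5.0):
    """Return True if no significant neighbor violates the isolation criterion.

    x12       : projected separation of the two pair members
    a1, a2    : angular diameters of the pair members
    neighbors : list of (x1i, ai) tuples for each neighboring galaxy with
                ai > 0.5 * a2 (the 'comparable diameter' cut)
    """
    for x1i, ai in neighbors:
        # Require X1i/X12 > 5 ai/a1 for every significant neighbor.
        if x1i / x12 <= factor * (ai / a1):
            return False
    return True

# Example: a 2' pair with one comparably sized neighbor 15' away passes the test.
print(is_isolated_pair(x12=2.0, a1=1.0, a2=0.8, neighbors=[(15.0, 0.9)]))  # True
```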
Some frequently used catalogs of galaxy pairs include:
Note that what you consider an appropriate, complete, or representative catalog depends on what you hope to do with it. A catalog pure enough for mass determinations will be dreadfully incomplete for interaction studies or population statistics. These catalogs are all more or less biased toward equal-luminosity pairs. Finding faint companions suffers from strong background confusion and incompleteness. These catalogs indicate that about 10% of luminous galaxies are in two-body systems, with numbers ranging from about 11% for ellipticals to 6% for later-type spirals and irregulars. The I0 or Irr II galaxies are found almost exclusively in pairs, leading to the notion that they are transient phases seen following a tidal disturbance. Compared to overall numbers, it appears that early-type (E,S0) galaxies are overrepresented in pairs (see Sulentic's 1990 review for the Sant' Agata meeting). This extension of the morphology-density relation is interesting for theories of how pairs originated.
Pairs are often used to estimate galaxy masses; if we can assume that the pair orbits in some catalog are seen at random orientations, and we understand any related selection effects, we can determine the mean M/L ratio of galaxies out to the radius of a typical companion orbit. If the pair orbit is inclined at an angle θ to our line of sight, and the companion motion makes an instantaneous angle φ to the line of sight, the system mass within the component separation R is given in terms of observables as follows (for a circular orbit, taking R⊥ = R sin θ and vr = v cos φ):

M(R) = vr² R⊥ / (G sin θ cos²φ)
where vr is the radial-velocity difference observed and R⊥ is the projected separation. The analogous equation in binary-star astronomy is used to determine the mass function m sin³ θ, which is a bit better since one can follow through cycles in φ. The major problem for binary galaxies is that we must integrate over pairs at different θ, φ, and catalogs will undersample some ranges of these as well as R. Trivially, a catalog will be more reliable against background contamination for smaller R⊥ and hence smaller R, while for noncircular orbits a bias in θ, φ will also be present - so you already have to know the answer to derive it!
This means that, unfortunately, the resulting masses depend critically on how the sample is selected and on how one excludes non-bound members (for example, either by using a cutoff in measured M/L or using only those pairs in very low-density regions). Karachentsev makes a strong case for halo masses not much larger than required by the rotation curves of pair members, while L. Schweizer (1987 ApJSuppl. 64,411; 64, 417; 64, 427) argues for global M/L ratios a few times larger; she finds that the dynamical properties of pairs show correlations suggesting that the mass is concentrated well inside typical orbital radii.
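As a purely numerical illustration of the estimator written above (not a substitute for the statistical treatments just cited), here is a short Python sketch; G is expressed in kpc (km/s)²/Msun, and the example velocity difference, separation, and angles are invented.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def pair_mass(v_r, R_perp, theta, phi):
    """Circular-orbit estimator M = vr^2 R_perp / (G sin(theta) cos^2(phi)).

    v_r    : observed radial-velocity difference (km/s)
    R_perp : projected separation (kpc)
    theta  : angle between the separation vector and the line of sight (rad)
    phi    : angle between the companion's velocity and the line of sight (rad)
    """
    return v_r**2 * R_perp / (G * np.sin(theta) * np.cos(phi)**2)

# For a single pair the angles are unknown, so one either quotes the minimum
# mass (sin(theta) = cos(phi) = 1) or averages a projection factor over random
# orientations for a whole catalog -- which is where the selection biases bite.
print(f"minimum mass: {pair_mass(150.0, 50.0, np.pi/2, 0.0):.2e} Msun")
```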
The pair population is of interest for theories of galaxy formation - they are an especially clean test of where galaxy angular momentum arises (for example by tidal torquing), while their survival is related to such items as the merger rate in the past, growth of galaxies by cannibalism, and the present number of dwarf binaries. There is some evidence that we can see the effects of orbital modification by dynamical drag effects, for example in preferentially depopulating direct orbits (Keel 1991 ApJLett 375, L5; Zaritsky et al. 1993 ApJ 405, 464).
Interactions: It has been clear since at least the analog work by Holmberg (1943 ApJ) that close encounters may transfer energy between orbital and internal motions, altering both the galaxies' internal dynamics and orbits. Detailed modelling of this process started with the Toomres' (1972 ApJ 178, 623) paper using point masses and test particles, which could already reproduce structures in M51 and the Mice very well. Lots of analytic detail is given in chapter 7 of Galactic Dynamics. Tidal distortions can be traced using the stars or gas; H I maps have proven strikingly effective in showing old and extensive tidal damage.
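To show how little machinery is needed to raise bridges and tails, here is a heavily simplified Python sketch in the spirit of the Toomres' restricted three-body approach: a cold disk of massless test particles around a point-mass primary, perturbed by a point-mass companion on a nearly parabolic prograde passage. The masses, orbit, particle setup, and time step are illustrative assumptions, the primary is held fixed purely for brevity, and this is emphatically not the Toomres' actual calculation.

```python
import numpy as np

# Restricted three-body sketch: a cold disk of massless test particles around a
# point-mass primary, perturbed by a point-mass companion. Units have G = 1.
G, M1, M2 = 1.0, 1.0, 0.25          # primary and companion masses (assumed)
dt, nsteps = 0.01, 6000

# Companion starts far out on a nearly parabolic, prograde passage (illustrative).
comp_pos = np.array([8.0, -12.0])
comp_vel = np.array([0.0, 0.35])

# Cold disk: six rings of test particles on circular orbits around the primary.
radii = np.repeat(np.linspace(1.0, 3.0, 6), 40)
angles = np.tile(np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False), 6)
pos = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
vcirc = np.sqrt(G * M1 / radii)
vel = np.column_stack([-vcirc * np.sin(angles), vcirc * np.cos(angles)])

def accel(p, comp):
    """Acceleration on test particles from the fixed primary (at the origin) and the companion."""
    d1 = np.linalg.norm(p, axis=1, keepdims=True)
    r2 = comp - p
    d2 = np.linalg.norm(r2, axis=1, keepdims=True)
    return -G * M1 * p / d1**3 + G * M2 * r2 / d2**3

for _ in range(nsteps):
    # Kick-drift-kick leapfrog for the test particles.
    vel += 0.5 * dt * accel(pos, comp_pos)
    pos += dt * vel
    # Simple symplectic-Euler step for the companion, which feels only the primary here.
    comp_vel += dt * (-G * M1 * comp_pos / np.linalg.norm(comp_pos)**3)
    comp_pos += dt * comp_vel
    vel += 0.5 * dt * accel(pos, comp_pos)

# After the passage, material is drawn into a bridge toward the companion and a
# tail on the far side -- the basic Toomre & Toomre morphology.
print("outermost test particle now at r =", np.linalg.norm(pos, axis=1).max())
```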
Catalogs of galaxies showing obvious interactions have been compiled from Sky Survey material by several workers. The statistics of galaxy pairs already show that interactions occur between members of bound systems, not as chance encounters of non-related galaxies; there are simply too many observed pairs to have been formed by capture (Chatterjee 1987 Astrophys. Space Sci. 137, 267). Some important catalogs of interacting pairs are:
Models show that the responses of galaxies to tidal perturbations depend on the galaxy type (spiral vs. elliptical, for example) and direction of companion orbit. Thus, one can often diagnose a system's history from its appearance and kinematics. The distortions seen in most pairs fall into a few basic categories, as noted by Karachentsev in his catalog. These include common envelopes, shells, bridges, and tails, which may consist of distorted spiral arms or separate features. These tell us about the victim galaxy's dynamics and the geometry of the tidal disturbance.
For spirals, the dynamically cold disk can form long bridges and tails, depending on relative velocity, direction, and our viewing angle. This can produce things like M51, the Mice, Antennae, and even the 300-kpc tails of the Superantennae (Melnick and Mirabel 1990 A&A 231, L19). By contrast, ellipticals can form only broad fanlike distortions, but these in concert with kinematic disturbances are frequently enough for an orbital reconstruction (see, for example, Borne 1988 ApJ 330, 28).
Star formation: The following is based on the review by Keel 1990 (IAU Symp. 146, Dynamics of Galaxies and their Molecular Cloud Distributions, p. 243).
It is by now part of the lore of galaxy research that galaxy interactions can, among other interesting effects, trigger bursts of star formation. This makes such systems useful laboratories for examining star formation in unusual environments, probing the behavior of a disturbed interstellar medium, and perhaps seeing processes that were important during galaxy formation. The sections below review the evidence for the presence and scope of enhanced star formation during interactions, and present several mechanisms that have been proposed to account for this excess.
Since we do not have "before and after" views of interacting galaxies, we are driven to perform statistical comparisons of large samples of interacting and non-interacting (sometimes called for brevity "isolated") galaxies. This offers the hope that we might measure shifts in the (already broad) distributions of properties tracing the SFR. Selection of both interacting and comparison samples can involve some subtlety, since the SFR we wish to measure is itself a function of galaxy type and luminosity. Furthermore, selecting program galaxies for obvious morphological signs of interaction biases the sample in favor of certain kinds of interactions seen at certain stages. Conclusions from such samples may not be generalizable to the whole population of encounters. Ideally, then, we should obtain comparable observations of samples of galaxies with the same distribution of Hubble type, luminosity (as measured before any alteration by the interactions), and environment (except, of course, for the presence of companions). Since interactions induce star formation and can therefore change the luminosity of a galaxy, and tidal disturbances can change the morphology, this sort of comparison cannot be attained in practice. However, the more closely matched the properties of interacting and control samples are, the greater the confidence one may have that any differences between the two are in fact associated with the interactions. Exactly how they are associated depends to some extent on the population of systems now seen undergoing interactions: galaxies that are only now undergoing their first mutual close approach should be more like isolated systems than those that have been in fairly close, slowly decaying circular orbits for most of the Hubble time (as discussed by Karachentsev 1988 , Dvoinye Galaktiki). Thus, dynamical understanding of the entire population of binary galaxies will be important in unravelling just how interactions influence galaxy evolution.
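As a sketch of what such a comparison looks like in practice, the snippet below applies a two-sample Kolmogorov-Smirnov test to Hα equivalent-width distributions for an "interacting" and a "control" sample. The numbers are simulated placeholders chosen only to mimic a modest average enhancement; they are not real measurements, and a real study would also match the samples in Hubble type and luminosity before making the comparison.

```python
import numpy as np
from scipy.stats import ks_2samp

# Simulated placeholder samples of Halpha equivalent widths (Angstroms).
rng = np.random.default_rng(0)
ew_control = rng.lognormal(mean=2.8, sigma=0.6, size=300)      # "isolated" spirals
ew_interacting = rng.lognormal(mean=3.1, sigma=0.8, size=150)  # modest average boost

res = ks_2samp(ew_interacting, ew_control)
print(f"median EW ratio: {np.median(ew_interacting) / np.median(ew_control):.2f}")
print(f"KS statistic = {res.statistic:.3f}, p-value = {res.pvalue:.2g}")
```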
Only for extreme "starburst" systems (loosely defined herein as those in which the SFR exceeds 4-5 times its preburst level) can we be sure that most of the star formation that we observe has been triggered by a companion, simply because only a tiny fraction of "isolated" galaxies show such a high SFR. Some of the highest values are found for apparently merging systems; more detailed interpretation of their role awaits identification of a statistically representative sample of merger candidates without recourse to quantities strongly affected by star formation (such as far-infrared luminosity). Observations of these systems can sidestep the statistical approach, since the star formation in these cases must be due predominantly to the interaction. In some cases, the SFR in these systems is so high that a global wind can be set up, thus sweeping the galaxy nearly free of gas and leaving a system that may eventually resemble an elliptical (Graham et al. 1984, Nature 310, 213).
There are several star-formation tracers for which sufficient survey material is available for statistical comparison. Further indicators (for example, in the X-ray band) should become available in the future. Note also that the discussion here is confined to luminous, gas-rich systems, which is to say spiral galaxies.
Optical colors. These reflect primarily stellar populations of age ~10⁹ years or less, as well as being sensitive to the strength of such populations relative to any underlying older (bulge) population. Galaxies in pairs display a correlation of color indices (the Holmberg effect) tighter than that expected from the known correlation of morphological type (Holmberg 1958 Medd. Lunds Astron. Obs. Ser. 2, No. 136; Demin et al. 1984 Astron. Zh. 61, 625; Madore 1986 in Spectral Evolution of Galaxies, 97). This provides evidence of similarly recent episodes of star formation. The distributions of color indices themselves were examined by Larson and Tinsley (1978 ApJ 219, 46) for systems in the Arp and Hubble atlases, showing that the strongly interacting systems in the Arp atlas show a large dispersion in colors that could be accounted for by bursts of star formation superimposed on a normal (older) component. Finally, samples, such as the Markarian galaxies, selected for their strong near-ultraviolet continua (that is, blue color), are rich in paired and interacting galaxies (Heidmann and Kalloghlian 1974 Astrof. 9, 71; Casini and Heidmann 1975 A&A 39, 127; Kazarian and Kazarian 1988 Astrof. 28, 487; Keel and van Soest 1992 A&A Suppl 94, 553). These extreme systems probe the tail of the SFR distribution in much the same way as far-infrared flux-limited samples.
Direct counts of stars and clusters. Statistics of supernovae show excesses of type II outbursts (and hence of young, massive progenitors) in interacting galaxies (Smirnov and Tsvetkov 1981 PAZh 7, 154; Kochhar 1990 in IAU Coll. 124). Some interacting systems have extraordinarily luminous individual H II complexes (Petrosian, Saakian, and Khachikian 1985 Astrof. 21, 57), while others have a normal H II region luminosity function even if the number of H II regions is unusually high (Keel, Frattare, and Laurikainen someday). There are indications that the spatial distribution of H II regions is more centrally concentrated in interacting systems than in normal spirals (Bushouse 1987 ApJ 320, 49, Kennicutt et al. 1987 AJ 93, 1011). Recently it has become clear (as outlined in the earlier section on starbursts) that many interacting and merging systems make stars in very luminous, perhaps massive clusters which can long outlast OB stars; probing their ages is a growth industry. This is a particular area where one is tempted to compare dynamic star-forming environments today with events on the early Universe.
Nuclear and integrated emission-line properties. Recombination lines trace the number of stars producing significant ionizing radiation (OB stars), with some sensitivity to reddening, obscuration, and mass function. For both nuclei and disks, several spectroscopic and imaging surveys have shown clear (statistical) excesses of emission in interacting systems (Keel et al. 1985 AJ 90, 708; Bushouse 1987; Kennicutt et al. 1987), with some tendency for the excess to be stronger for more disturbed systems. This is found in Hα luminosity, in equivalent width (normalized to optical luminosity), and in Hα surface brightness (normalized to disk area). Further, the Hα equivalent width can be combined with continuum color indices to form a 2-color diagram with a very long effective wavelength baseline, and this may be interpreted much as done by Larson and Tinsley (Kennicutt et al. 1987).
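For reference, here is a minimal sketch of how an Hα equivalent width might be measured from a one-dimensional spectrum; the wavelength windows, the flat-continuum estimate, and the synthetic demonstration spectrum are all simplifying assumptions rather than anyone's published procedure.

```python
import numpy as np

def equivalent_width(wave, flux, line=(6553.0, 6573.0),
                     cont_windows=((6480.0, 6530.0), (6600.0, 6650.0))):
    """EW = integral of (1 - F/F_continuum) dlambda over the line window (Angstroms)."""
    cont_mask = np.zeros_like(wave, dtype=bool)
    for lo, hi in cont_windows:
        cont_mask |= (wave >= lo) & (wave <= hi)
    cont = np.median(flux[cont_mask])          # crude flat-continuum estimate
    in_line = (wave >= line[0]) & (wave <= line[1])
    dlam = np.gradient(wave[in_line])
    # This sign convention gives negative EW for emission; quote |EW| for emission lines.
    return np.sum((1.0 - flux[in_line] / cont) * dlam)

# Synthetic demo: unit continuum plus a Gaussian emission line at Halpha.
wave = np.linspace(6450.0, 6700.0, 1000)
flux = 1.0 + 5.0 * np.exp(-0.5 * ((wave - 6563.0) / 4.0) ** 2)
print(f"EW(Halpha) = {abs(equivalent_width(wave, flux)):.1f} A")
```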
Detailed comparison of multiwavelength images shows that, for galactic nuclei, the role of obscuration is strong and complex; ionizing clusters can contribute to Hα emission while remaining completely unseen in the optical continuum. Thus, differential comparisons are likely to be more reliable than absolute measures. In particular, model comparisons yielding burst ages or IMF slopes must be regarded as highly suspect. Also, there are systems in which emission lines may be influenced by such processes as shock heating (Keel 1990 AJ 100, 356) or weak nuclear activity (Kennicutt, Keel, and Blaha 1989 AJ 97, 1022), so spectroscopic diagnostics are needed to be sure the luminosities we measure really reflect the SFR. For very dusty systems, it may not be clear how much the optical spectrum reflects the dominant energetics of the galaxy (compare the conclusions of Sanders et al. 1988 ApJ 325, 74 and Leech et al. 1989 MNRAS 240, 349 as regards the role of star formation in the most luminous IRAS galaxies).
Thermal infrared. Two ranges have been studied - the 10μ window (offering excellent spatial resolution and modest sensitivity, probing high dust temperatures) and the far-infrared bands pioneered with IRAS (excellent sensitivity but poor resolution, over a much wider temperature range). Both are sensitive to a wider range of stellar masses than are H recombination lines. At 10μ, Cutri and McAlary (1985 ApJ 296, 90) found that galaxies from the Karachentsev (1972, 1988) catalog of paired galaxies have systematically higher luminosities (and detection probabilities) than isolated systems, which they interpreted as reflecting dust heated by increased numbers of young stars. Similarly, Lonsdale, Persson, and Matthews (1984 ApJ 287, 1009) found enhanced 10-20μ emission in galaxies selected for tidal distortions from the Arp Atlas.
There has been an enormous amount of work on the connection between IRAS emission and interactions (Soifer et al. 1984 ApJL 278, L71; Lonsdale, Persson, and Matthews 1984; Telesco, Wolstencroft, and Done 1988 ApJ 329, 174; Lawrence et al. 1989 MNRAS 240, 329), but the results for an infrared-selected sample can be somewhat misleading if taken out of context. The far-IR properties of optically-selected samples (Bushouse 1987, Kennicutt et al. 1987, Haynes and Herter 1988 AJ 96, 504; Sulentic 1989 AJ 98, 2066) show distributions much like those seen in Hα, with most systems modestly enhanced and a small percentage dramatically affected. It is this small tail of the SFR distribution that is strongly represented in FIR flux-limited samples, even though only a tiny fraction of all interacting systems are seen during such extreme bursts. These systems include many of the famous "superluminous" IRAS galaxies (Sanders et al. 1988). Above about 10¹¹ solar luminosities in the far-IR, the source population is dominated by mergers and multi-way interactions (as seen in this WFPC2 montage of powerful IRAS galaxies). Statistical treatment of the IRAS data is limited by the poor resolution (generally requiring that pair members be treated together). In some very distorted galaxies, interpretation of the far-IR emission can be complicated by the possibility of more effective conversion of visible-wavelength radiation from an old stellar population into thermal infrared emission, when the dust is no longer confined to a single plane (e.g. Thronson et al. 1990 ApJ 375, 456).
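For orientation, the snippet below evaluates the commonly used far-infrared flux estimate built from the IRAS 60 and 100 micron bands and converts it to a luminosity; the numerical constants follow the standard FIR definition as best I recall it, and the example fluxes and distance are invented, so treat the details as assumptions rather than a prescription.

```python
import numpy as np

L_SUN = 3.826e26   # W
MPC = 3.086e22     # m

def l_fir_solar(s60_jy, s100_jy, dist_mpc):
    """Far-IR luminosity (Lsun) from IRAS 60 and 100 micron fluxes (Jy) and distance (Mpc)."""
    fir = 1.26e-14 * (2.58 * s60_jy + s100_jy)   # W m^-2 (standard FIR estimate)
    return 4.0 * np.pi * (dist_mpc * MPC) ** 2 * fir / L_SUN

# A hypothetical merger at 80 Mpc with S60 = 12 Jy and S100 = 15 Jy lands right
# around the 10^11 Lsun regime where mergers dominate the source counts.
print(f"L_FIR = {l_fir_solar(12.0, 15.0, 80.0):.2e} Lsun")
```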
Radio continuum emission. Spirals with high-surface-brightness radio disks are actively star-forming, and a large fraction are in interacting systems (Condon et al. 1982 ApJ 252, 102). The spectral index and surface brightness of the emission indicate a nonthermal origin, perhaps in supernova-accelerated particles radiating in fields along spiral arms. In some cases, the radio structure shows direct links to star-forming regions, and in some nearby objects, individual sources identified as supernova remnants can be found (Kronberg, Biermann and Schwab 1985 ApJ 291, 693; Noreau and Kronberg 1987 AJ 93, 1045). At least at the highest values, the surface brightness at centimeter wavelengths appears to reflect the supernova rate, and thus SFR in the relevant mass range. Over the 6-20 cm range, both disks (Hummel 1981 A&A 96, 111) and nuclei (Hummel et al. 1987 A&A Suppl 70, 517) show statistical enhancements in interacting systems.
All of these SFR indicators tell similar stories: the majority of interacting spirals have increases in SFR of order 30%, detectable only statistically, while a few experience increases of an order of magnitude. Such a wide range of responses may indicate sensitivity to internal dynamics, or to details of spin and orbit directions for particular encounters. There has been no shortage of proposed mechanisms to produce these effects, largely discussed in the starburst section.
Active Galactic Nuclei: Interactions have frequently been implicated as somehow triggering the occurrence of various kinds of nuclear activity, but just how and how often remains surprisingly unclear - see the review by Heckman (1990, IAU Coll. 124, p. 359). Confirming such a process would be quite important; not only would we learn more about how to feed the monster, but we know that most galaxies already host an undernourished one. I will cite only some of the major results here.
Seyfert nuclei: Seyfert galaxies seem to have more companions than non-Seyferts of the same kind and luminosity (Dahari 1984 AJ 89, 966, MacKenty 1989 ApJ 343, 125), but the strength of this result depends critically on just how the control sample is selected (Fuentes-Williams and Stocke 1988 AJ 96, 1235). Samples of interacting galaxies are rich in Seyferts (Keel et al. 1985 AJ 90, 2208) but this survey plus those of Dahari 1985 (ApJ Suppl 57, 143) and Bushouse (1986 AJ 91, 255) also shows that very distorted galaxies almost never have Seyfert nuclei. The upshot is that perturbations make it easier to have Seyfert activity, but do not appear utterly essential (to first order, a bar acts like a small companion). When a Seyfert nucleus is present, it is usually in the brighter pair member, and preferentially in host galaxies with the most concentrated bulge masses as judged from rotation curves (Keel 1996 AJ 111, 696). This panel shows the kinds of environments found for Seyferts, including luminous companions and tidal tails.
Radio galaxies: Radio galaxies are more likely to have close companions than optically similar radio-quiet galaxies (Heckman et al. 1985 ApJ 288, 122); note that here too the details of sample selection are rather important in one's result (Dressel 1981 ApJ 245, 25; Stocke 1978 AJ 83, 348; Adams et al. 1980 AJ 85, 1010). The highest-luminosity nearby radio galaxies almost universally show evidence of strong interactions or mergers (Heckman et al. 1986 ApJ 311, 586). Detailed studies of a few such cases show evidence that the galaxy recently acquired substantial gas (van Breugel et al. 1984 ApJ 277, 82; Heckman et al. 1982 ApJ 262, 529). This was suspected in the 1950s from Centaurus A. The HST snapshot survey of 3CR radio galaxies has shown that a large fraction display some kind of morphological distortion (de Koff et al. 1996 ApJSuppl 107, 621).
QSOs: It was hard enough to tell that they were in galaxies, much less surrounded by others. Imaging from Mauna Kea has been extremely suggestive, with Hutchings and Campbell 1983 (Nature 303, 984) claiming that 30% of QSOs with z < 0.6 show evidence of interactions. Spectroscopy by Stockton (1979 IAU Symp. 92, 89) and Heckman et al. (1984 AJ 89, 958) confirms the association of these galaxies in redshift, and Stockton 1982 (ApJ 257, 33) has shown that many of the companions have their own low-luminosity active nuclei. Further individual systems have been studied by, for example, Shara et al 1985 (ApJ 246, 339; 4C 18.68), Yee and Green 1987 (AJ 94, 618; PG 1613+658), and Vader et al 1987 (AJ 94, 847; IRAS 00275-2359). Be careful in putting the statistical studies together; many of the "QSOs" in some studies are no more luminous than Markarian Seyferts, so that it is not clear which problem is being addressed. Calling the Sun a quasar doesn't answer the quasar-physics problem. Anyway, as was long expected, HST results have added considerably to our understanding. Most QSO host galaxies have compact companions within tens of kpc (Bahcall et al. 1995 ApJ 450, 486, Disney et al. 1995 Nature 376, 150), a result which was foreshadowed by Stockton's 1982 paper. This fraction is by now the most striking correlation of nuclear activity and galaxy interactions. Some of these companions are seen in this montage of HST images, from rather luminous normal companions to the very close, compact companions of PKS 1302-102 and PKS 2349-013.
Dynamical models show that galaxies are very sticky, and that a deeply penetrating encounter can dissipate so much orbital energy that a coalescence is inevitable. Simple estimates suggest that most bright galaxies must have undergone at least one merger of near-equals (see Toomre 1977, Yale conference p. 401); there have been many numerical studies of what happens here, so we have a good idea of what to look for in the real universe. In fact, there are several useful relics of mergers. Most commonly sought are tidal tails from a single main body; the dynamical evolution of the core proceeds so fast that tails from initial disks will still be visible for several billion years after the nuclei have merged. F. Schweizer 1982 (ApJ 252, 455) has shown that this is the case in NGC 7252, with a central body approaching a de Vaucouleurs light distribution while counter-rotating motions and tidal tails still exist farther out. Numerous such merger candidates have been identified from optical imaging; many are also strong IR sources. These have some of the most strongly enhanced far-IR levels observed, extending the notion of enhanced star formation to the most violent interactions possible; there has been some discussion of a role for violent collisions between clouds starting in different galaxies, or emission directly produced by rapid shocks in dense gas disks (Harwit et al. 1987 ApJ 315, 28).
Mergers lead to an attractive notion about the formation of elliptical galaxies - do they all come from mergers of disk systems? In this case, one has only a single galaxy-formation problem (for disks) rather than two. Strongly dissipative processes are required, to avoid violating Liouville's theorem (but note that the maximum phase-space density in a disk may not be at the nucleus). Both observations (such as Graham et al. 1984, Nature 310, 213) and modelling (Barnes and Hernquist 1991 ApJLett 370, L65) indicate that gas can collect very rapidly at the nucleus, and that the subsequent star formation can drive a powerful enough wind to sweep the galaxy free of gas. Voila! An elliptical! An attractive scheme, probably first clearly stated by Toomre in the 1977 Yale conference, but more evidence really should be collected. The current rate of mergers and luminosity function of ellipticals put interesting constraints on this (Keel & Wu 1995 AJ 110, 129), as do estimates of the history of the merger rate - at this point it appears that most ellipticals might be merger remnants, as long as rampant merging took place in the early Universe so that most ellipticals ceased star formation by z ~ 1. Current work includes tests for the number of pairs and mergers at high redshifts, which are connected since most merging occurs between bound pairs. The merger rate is often parametrized to vary as (1+z)^α, where α < 3 to avoid overproducing ellipticals today. In the local Universe, we find systems at all stages of merging - this image sequence shows a set of nearby mergers in the order in which they best match the sequence seen in numerical simulations. It is especially interesting that this same order makes sense for optical colors and far-infrared excess, both suggesting a transient burst of star formation associated with merging.
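A quick bit of arithmetic shows why the exponent matters so much; the values of α below are purely illustrative.

```python
# Relative merger/pair rate at z = 1 versus today for the (1+z)^alpha parametrization.
for alpha in (0, 1.5, 3):
    print(f"alpha = {alpha}: rate(z=1)/rate(z=0) = {(1 + 1) ** alpha:.1f}")
# alpha = 3 already implies an 8-fold enhancement at z = 1, which is roughly where
# overproduction of present-day ellipticals becomes a worry.
```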
There are some additional post-merger signatures. These include shells in ellipticals, polar rings, counter-rotating stellar or gaseous subsystems, and external H I in ellipticals. Polar rings appear around some S0 or very early Sa galaxies; the connection to mergers arises from the fact that polar orbits are less vulnerable to smearing by differential precession than more equatorial ones, and thus longer lasting. An atlas has been presented by Whitmore et al. 1990 (AJ 100, 1489). Detailed modelling has been done by Steiman-Cameron and Durisen 1982 (ApJLett 263, L51), Schweizer et al. 1983 (AJ 88, 909), and Whitmore et al. 1987 (ApJ 314, 439). The Hubble Heritage team produced a stunning image of NGC 4650A showing young star clusters and dust in the polar ring.
In contrast, counter-rotating subsystems attract attention because they should be very short-lived. Several cases have been reported in ellipticals (Franx and Illingworth 1988 ApJLett 327, L55; Bertola and Bettoni 1988 ApJ 329, 102), S0's (NGC 4550, Rubin et al. 1992 ApJL 394, L9), and even the disk of the spiral NGC 4826=M64 (Braun et al. 1992 Nature 360, 442; 1994 ApJ 420, 558). Such motions cannot be primordial, and are thus evidence of the recent addition of substantial amounts of stars or gas from outside. The H I detected in some ellipticals comes in patchy external rings, leading to the suspicion that it too was externally acquired (e.g. van Gorkom et al. 1986 AJ 91, 791).
All these processes should be more common in regions of higher galaxy density, like the whole universe long ago. What do we see at higher redshifts? This issue will be crucial in understanding galaxy evolution. Since mergers can trigger huge starbursts, and perhaps transform galaxy types, they may have been a controlling influence on galaxy evolution. In fact, galaxy formation is now thought to have been rather protracted, building up piece by piece from smaller constituents, so mergers might be our best present analogs to protogalaxies (see Djorgovski in Nearly Normal Galaxies, p. 290, for a statement of how rapidly the conventional wisdom can evolve). Most people who have inspected deep HST images have remarked on how many funny-looking galaxies they see, and quantitative studies back this up both as to number of statistically defensible pairs and number of "peculiar" systems, though one must be wary of passband and surface-brightness selection effects. Some of these oddities appear in this snippet from the Hubble Deep Field - North imagery.
The merger rate might appear to change strongly with time (in linear, and not comoving coordinates) simply as a collisional process, depending on the square of the galaxy density. There has been substantial analytic and numerical work on just when two galaxies will stick. Including gravitational focussing, Fang and Saslaw (1997 ApJ 476, 534) derive a merger cross-section:
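For orientation, the generic gravitational-focusing form of such a cross-section (which may differ in detail from the exact expression in Fang and Saslaw) is

\[
S \;\simeq\; \pi r_*^2 \left[\, 1 + \frac{2\,G\,(m_1 + m_2)}{r_*\,V_0^2} \,\right],
\]

with m_1 and m_2 the galaxy masses, so that slow, massive encounters are strongly focused while fast ones approach the geometric value \(\pi r_*^2\),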
where V0 is the initial (asymptotic) relative speed and r* is the disk radius (say of the larger one), including relevant halo material. Following Roos and Norman 1979 (A&A 76, 75), merging requires that v < 3.1 σ for head-on encounters (the numerical factor may change for other kinds). Fang and Saslaw generalize this to include slower encounters,
where fast impulse encounters have μ=0, ν=2, and η = π leading to small S, while slow merging-friendly encounters have μ=1, ν=1, and 1 < η < 10. In general, approach velocities less than the internal velocities spell doom, and most encounters which lead to merging will spiral together over 6-10 crossing times. However, these considerations are not the ones relevant to asking how merging affects the entire galaxy population; demographics (Chatterjee 1987 Ap&S 135, 131) show that most merging today must occur between members of pairs which are already bound, so that the history of which galaxies were formed in pairs (or small multiplets) dominates the whole question. Clusters, for example, are not fertile environments for merging per se, since most encounters will be strongly hyperbolic and transfer too little orbital energy to internal motions (though they can have interesting effects on the outer parts of galaxies, especially in the repeated applications forming harassment).
Family Relationships and Crime
‘‘The most important part of education,’’ said the Athenian in Plato’s Laws, ‘‘is right training in the nursery’’ (li. 643). Through acceptance of Freudian theory, this ancient belief gained new credibility during the first half of the twentieth century. According to Freudian theory, successful socialization begins with an early attachment to the mother, an attachment that must later be modified by a conscience, or ‘‘superego,’’ that develops through identification with a parent of the child’s own sex (Freud). In the case of a young boy, the theory continues, attachment to the mother leads to the boy’s jealousy of his father, but fear of his father’s anger and punishment forces the child to control his incestuous and antisocial desires. Because Freud argued that the development of conscience for males depends on attachment to the mother and identification with the father, psychoanalytic explanations of crime focused on paternal absence and maternal deprivation. These emphases continue to guide psychological theories and research despite the decline in popularity of Freudian theory.
Toward the mid-twentieth century, sociological theories became influential. First Charles Cooley and then George Herbert Mead proposed that people develop self-concepts that reflect how they believe they are perceived by ‘‘significant others’’ (Mead). These self-concepts motivate a person’s actions. The parents provide the first group of significant others from whom a child acquires a sense of identity. If parents are neglectful or abusive, the child develops self-concepts that tend to lead to associations with others who similarly denigrate the value of individuals. Edwin Sutherland suggested in the 1930s that both delinquent and nondelinquent behavior is learned from ‘‘differential associations’’ with others who have procriminal or anticriminal values. Children reared by families with ‘‘criminalistic’’ values would accept a criminal lifestyle as normal. Children neglected by their families would be more strongly influenced by nonfamilial associates, some of whom might be procriminal (Sutherland and Cressey).
The second half of the twentieth century witnessed development of explanations for crime that took into account both psychological and sociological processes. Most popular among them are the ‘‘control theories,’’ which assume that all people have urges to violate society’s conduct norms and that people who abide by the norms do so because of internal and external controls. These controls trace to the family through ‘‘bonding’’ (internal control) and discipline (external control).
Control theories rest on an assumption that deviance is natural and that only conformity must be learned. Social learning theories, on the other hand, assume that both prosocial and antisocial activities are learned. They claim that a desire for pleasure and for avoidance of pain motivates behavior, and hence they focus on rewards and punishments. Social learning theories employ the notion of vicarious conditioning to explain how people learn by watching and listening, and direct attention toward the influence of parents as models for behavior and as agents for discipline. Some theorists, however, question the assumption that self-interested pleasure and pain govern all voluntary choices.
Regardless of what theory is used to explain how behavior is learned, Western cultures place a heavy burden on families through assigning responsibility for child rearing to them. Families in such cultures must transmit values so as to lead children to accept rules that they are likely to perceive as arbitrary. It should be no surprise, therefore, to find that family life bears a strong relation to juvenile delinquency (Kazdin). Perhaps the most significant changes in thinking during the last quarter of the twentieth century have been methodological. Increasingly, social scientists have become aware of retrospective and expectational biases, biases that occur when people are asked to recall their experiences— particularly when they have theories about the way people react to events of certain types. These biases affect data collection and interpretation. To overcome these biases, newer studies have used longitudinal approaches, studying people through time. These longitudinal studies provide a basis for reassessing theories about family relations and crime.
Single-Parent Families and Crime
In contemporary Western societies, a nuclear family structure has been idealized. Conversely, deviations from this structure have been blamed for a variety of social problems, including crime. One of the signs of change, however, has been acknowledgment that not all single-parent families are ‘‘broken.’’ Another has been renewed examination of family dynamics in a context in which effects of having a single parent in the home can be considered apart from concomitant poverty, or effects of poor supervision and disruptive child rearing.
Classical theories endorsed the popular view that good child development requires the presence of two parents. This view seemed to have been corroborated by studies showing that the incidence of broken homes was higher among delinquents than among the nondelinquents with whom they were compared. In line with the Freudian tradition, many believed that paternal absence resulted in over-identification with the mother. According to this view, delinquency is one symptom of compensatory masculine ‘‘acting-out.’’ The theory purports also to explain why delinquency is prevalent among blacks and the poor, groups with high rates of single-parent families.
If delinquency were a response to excessive maternal identification, however, the presence of a stepfather should reduce the criminogenic effects of paternal loss. This does not occur. In fact, studies have consistently shown higher rates of delinquency for boys who had substitute fathers than those having no fathers in the home (Glueck and Glueck; Hirschi; McCord, McCord, and Thurber).
Despite the frequency with which both the popular press and participants in the legal system blame ‘‘broken’’ homes for failures to socialize children as willing participants in an ordered social system, their conclusion goes well beyond the facts. Research that takes into account the role of parental conflict, stress, or socioeconomic conditions in relation to single-parent families fails to show that single-parent families contribute disproportionately to crime.
Because poverty is related to both crime and single-parent families, studies that confound socioeconomic status and family structure have tended to nourish the belief that single-parent families account for crime (Crockett, Eggebeen, and Hawkins). Studies within a particular social class, however, show that neither British nor American children from single-parent homes are more likely to be delinquent than are their similarly situated classmates from two-parent families. Disruptive parenting practices and behavior account for most of the apparent effects of single-parent families on crime (Capaldi and Patterson; Gorman-Smith, Tolan, and Henry; McCord; DeKlyen, Speltz, and Greenberg).
Family conflict is particularly criminogenic (McCord; Rutter; West & Farrington), and the choice to divorce must typically be made by parents who do not get along. David Farrington found that marital disharmony of their parents, when boys were fourteen, predicted subsequent aggressive behavior among boys who had not been previously aggressive. Tracing the lives of a group of men forty years after they had participated in a youth study, Joan McCord contrasted effects of conflict between parents with effects of parental absence. Compared with boys raised in quarrelsome but intact homes, boys reared by affectionate mothers in broken homes were half as likely to be convicted of serious crimes. Criminality was no more common among those reared solely by affectionate mothers than among those reared by two parents in tranquil homes.
Michael Rutter was able to disentangle effects of parental absence and effects of parental discord in his study of children whose parents were patients in a London psychiatric clinic. Among those who had been separated from their parents, conduct disorders occurred only if the separations were the result of parental discord. Among those still living with both parents, disorders occurred when there was parental conflict. Furthermore the children’s behavior improved when they were placed in tranquil homes.
No one has taken the position that single-parent families are superior to good two-parent families. But good two-parent families are not the option against which an adequate comparison of single-parent families ought to be measured. For many children, the option to living in a single-parent household is living with an alcoholic or aggressive father or living in the midst of conflict. Recent research has resulted in a considerable amount of evidence to suggest that if the remaining parent provides strong and supportive guidance, offspring in single-parent homes are no more likely to become delinquents than if there are two good parents in the home (Matsueda and Heimer).
Parental Attachment and Crime
The nuclear family structure places a special burden on parents. Because they are seen to be the primary socializing agents, parents are expected to provide warmth and protection as well as guidance. Conversely, absence of affection and inadequate discipline have been seen as sources of crime.
Psychoanalytic perspectives encouraged the use of case materials to develop facts for a science of human behavior. The view that maternal deprivation has dire effects on personality gained support from case histories documenting maternal rejection in the backgrounds of aggressive youngsters and from studies of children reared in orphanages, many of whom became delinquents. Indeed, John Bowlby suggested that the discovery of a need for maternal affection during early childhood paralleled the discovery of ‘‘the role of vitamins in physical health’’ (p. 59).
Critics of the conclusions reached in these studies noted the selective nature of retrospective histories and pointed out that institutionalized children not only lack maternal affection but also have been deprived of normal social stimulation. They wondered, as well, whether a father’s affection was irrelevant. Around mid-century, several studies suggested that paternal affection had effects similar to those of maternal affection. For example, Travis Hirschi compared the impact of paternal affection with that of maternal affection in his study of California students. Hirschi’s analysis indicated that the two parents were equally important and, moreover, that attachment to one parent had as much beneficial influence on the child as attachment to both.
Most of the evidence on parental attitudes toward their children has depended on information from adolescents who have simultaneously reported their parents' behavior and their own delinquencies. Because these studies are based on data reporting delinquency and socialization variables at the same time and by the same source, they are unable to disentangle causes from effects.
Evidence from adolescents’ reports of interactions with their parents when they were fifteen and of their own delinquency when they were seventeen years old suggests that friendly interaction with parents may deter delinquency (Liska and Reed). Relying on adolescents to report about their parents’ child-rearing behavior assumes that the adolescents have correctly perceived, accurately recall, and honestly report the behavior of their parents. There are grounds for questioning these assumptions.
Experimental studies show that conscious attention is unnecessary for experiences to be influential, so salient features of their socialization may not have been noticed by the adolescents. Studies have also shown that reports of family interaction tend to reflect socially desirable perspectives. To the extent this bias afflicts adolescents’ reports, real differences in family upbringing tend to blur. When parents report on their own behavior, they are likely both to have a limited and biasing perspective and to misrepresent what they are willing to reveal.
A handful of studies have used measures of parent-child interaction not subject to the biases of recall and social approval. Robert Sampson and John Laub reanalyzed data from the files compiled by Sheldon and Eleanor Glueck. Using multiple sources for information about parent-child relations, they found that parental rejection was a strong predictor of criminality. After coding case records based on home observations for a period of approximately five years, Joan McCord retraced 235 members of the Cambridge-Somerville Youth Study. She found that those whose mothers were self-confident, provided leadership, were consistently nonpunitive, and were affectionate were unlikely to commit crimes. Thus, studies on emotional climate in the home present consistent results. Like parental conflict, negative parent-child relations enhance the probability of delinquency. Parental affection appears to reduce the probability of crime. Not surprisingly, parental affection and close family ties tend to be linked with other features of family interaction.
Variations in Discipline and Crime
Psychoanalytic theory postulates that development of the superego depends on the ‘‘introjection’’ of a punitive father. This perspective generated research on successive training for control of oral, anal, and sexual drives and on techniques for curbing dependency and aggression. Although resultant studies failed to produce a coherent picture showing which disciplinary techniques promoted a strong conscience and which decreased antisocial behavior, they focused attention on the relationship between discipline and deviance. Studies less closely tied to psychoanalytic theory have considered various types of punishment and used such concepts as firmness, fairness, and consistency in analyzing relationships between discipline and crime.
The Gluecks found that incarcerated delinquent boys rarely had ‘‘firm but kindly’’ discipline from either parent, yet a majority of the nondelinquents with whom they were compared experienced this type of discipline. Parents of delinquents were more likely to use physical punishment and less likely to supervise their sons. Hirschi characterized discipline by asking if the parents punished by slapping or hitting, by removing privileges, and by nagging or scolding.
He found that use of these types of discipline was related to delinquency, a conclusion which suggests that such punishments promoted the behaviors they were ‘‘designed to prevent’’ (p. 102). Several longitudinal studies investigating effects of punishment on aggressive behavior have shown that punishments are more likely to result in defiance than compliance. Power and Chapieski studied toddlers one month after they had started walking unassisted and again a month later. The sample, drawn from Lamaze classes, was middle class, with mothers at home. Among them, ‘‘Infants of physically punishing mothers showed the lowest levels of compliance and were most likely to manipulate breakable objects during the observations’’ (p. 273). Additionally, six months later, the same infants showed slower development as measured by the Bayley mental test scores.
Crockenberg and Litman studied two year olds in the laboratory, where they measured the infants’ obedience to requests and interviewed their mothers about discipline and family life. The same mother-child pairs were studied a month later in their homes during meal preparation and mealtime. After controlling other types of maternal behavior, the observers’ ratings indicated that negative control was related to defiance in both settings.
Similarly, spanking seems counterproductive for children preparing to enter school. Strassberg, Dodge, Pettit, and Bates recruited families in three cities as they registered the children for kindergarten. Parents present in the home reported their disciplinary practices over the prior year. The children were subsequently observed in their classrooms. Children spanked by their mothers or fathers displayed more angry, reactive aggression in the kindergarten classrooms than did those who did not receive physical punishments.
In 1997, McCord analyzed the effects of corporal punishment based on biweekly observation of 224 parents and their sons over an average period of five and one-half years. In addition to measuring the use of corporal punishment in the home, each parent was rated in terms of warmth expressed toward the child. At the time of these ratings, the sons were between the ages of ten and sixteen. Thirty years later, the criminal records of the subjects were traced. Regardless of whether or not a father was affectionate toward his son, his use of corporal punishment predicted an increased likelihood that the son would subsequently be convicted for a serious crime.
Regardless of whether or not a mother was affectionate toward her son, the mother’s use of corporal punishment predicted an increased likelihood that the son would subsequently be convicted for a serious violent crime.
Punishment is not necessary to rear an emotionally healthy, behaviorally adaptable, and socially responsible child. Nevertheless, most American adults experienced at least some punishment, typically physical punishment, when they were children. Most use some physical punishment in raising their children. Therefore it is clear that healthy development can occur when physical punishment has been used. Although, in the short run, punishments may stop unwanted behavior, they also increase the likelihood that children will learn to use force to get what they want. The use of punishments also endangers the parent-child relationship, a relationship that often provides a foundation for subsequent familial ties.
Punishment is only one of several aspects of effective parenting. Others include holding clear standards of conduct and rules of behavior and communicating these clearly to children. Communication is promoted through attending to what children are doing, monitoring behavior so that parental reactions to unwanted behavior are contingent on that behavior and so that misbehavior can be prevented.
General Socialization and Crime
In studying the impact of family on delinquency, long-term studies are particularly helpful, providing information for judging whether parental rejection and unfair discipline precede or follow antisocial behavior. For two decades, David Farrington and Donald West traced the development of 411 working-class London boys born between 1951 and 1953. When the boys were between eight and ten years old, their teachers identified some as particularly difficult and aggressive. Social workers visited the homes of the boys in 1961 and gathered information on the parents’ attitudes toward their sons, disciplinary techniques used, and compatibility between the parents. In 1974, as the boys reached maturity, each was classified as noncriminal (if there were no convictions) or, according to his criminal record, as a violent or a nonviolent criminal. Farrington and West found that the families most likely to produce criminals had been quarrelsome, provided little supervision, and included a parent with a criminal record.
Furthermore, boys whose parents had been harsh or cruel in 1961 were more likely than their classmates to acquire records for violent crimes. Parental cruelty was actually a more accurate selector of boys who would become violent criminals than was the child’s early aggressiveness.
Other longitudinal studies show antecedents to aggression and antisocial behavior similar to those found by Farrington and West. McCord found that maternal rejection and lack of self-confidence, paternal alcoholism and criminality, lack of supervision, parental conflict, and parental aggressiveness permitted predictions of adult criminality that were more accurate than those based on a person's own juvenile offense record. In studying Swedish schoolboys, Dan Olweus found that ratings of maternal rejection, parental punitiveness, and absence of parental control predicted aggressiveness. Descriptions of the family had been obtained from interviews with the parents when the boys were sixth-graders, and aggressiveness was evaluated by the boys' classmates three years later. In her Finnish longitudinal study, Lea Pulkkinen discovered that lack of interest in and control of the fourteen-year-old child's activities, use of physical punishments, and inconsistency of discipline tended to lead to criminality by the age of twenty.
All of these studies suggest that delinquents have parents who act unfairly or who are too willing to inflict pain, whereas the parents of nondelinquents provide consistent and compassionate attention. Community variations may account for the fact that some varieties of family life have different effects in terms of delinquency in different communities. In general, consistent friendly parental guidance seems to protect children from delinquency regardless of neighborhoods. But poor socialization practices seem to be more potent in disrupted neighborhoods.
In sum, family life influences delinquency through providing offspring with predispositions regarding how to cope with life outside the family. Children reared by affectionate, consistent parents are unlikely to commit serious crimes either as juveniles or as adults. On the other hand, children reared by parents who neglect or reject them are likely to be greatly influenced by their community environments. When communities offer opportunities and encouragement for criminal behavior, children reared by neglecting or rejecting parents are likely to become delinquents.
Siblings and Crime
Studies of family relationships and crime have commonly centered on parent-child influence. Generally, if included at all, siblings are mentioned only in passing. Daniel Glaser, Bernard Lander, and William Abbott, however, focused on siblings when asking why some people become drug addicts. Three pairs of sisters and thirty-four pairs of brothers living in a slum area of New York City responded to questions asked in interviews by a former addict and a former gang leader. One member of each pair had never used heroin, whereas the other had been an addict. Results of this study suggested that the typical addict was about two years younger than the nonaddicted sibling, spent less time at home, left school at a younger age, and began having relationships with persons of the opposite sex when younger. The interviews did not yield evidence of systematic differences between addicts and their siblings regarding parental affection or expectations for success. Like the Finnish adolescents studied by Pulkkinen, and the British delinquents in the Farrington and West sample, the addicts appear to have had peers for their reference groups. Unfortunately, relatively little is known about why some children adopt peers instead of family as reference groups.
Differences in sex, intelligence, and physique provide partial answers to why one child in a family develops problems and another does not. In addition, several studies show that even after controlling for family size (delinquents tend to come from larger families), middle children are more likely to be delinquents than are their oldest or youngest siblings. Rutter suggests that parental actions could be the determinant, with delinquent children tending to be those who were singled out for abuse by quarreling parents.
Farrington and West analyzed criminal records among the families of the 411 London boys they studied. Having a criminal brother, they discovered, was approximately as criminogenic as having a criminal father. Data from Minnesota confirm the apparent criminogenic impact of sibling criminality. In 1974, Merrill Roff traced criminal records of approximately thirteen hundred sets of siblings born between 1950 and 1953. Males whose siblings had juvenile court records were about one and a half times as likely to have court records themselves as were those whose siblings did not have such records. Furthermore, those whose brothers had been juvenile delinquents were about twice as likely to have adult criminal convictions as those who were the only juvenile delinquents in the family.
Marriage and Crime
Although crimes within the family typically go unrecorded, violence between husband and wife accounts for a significant proportion of recorded criminal assaults and homicides. Additionally, as has been noted above, criminal parents tend to rear delinquent children. Apart from these facts, relatively little is known about the relationship of crime to marriage.
Two links between crime and age of marriage have been forged in the literature. First, several studies suggest that delinquents marry at younger ages than do nondelinquents. Second, criminality tends to decline at about the time that marriage takes place. Perhaps because of the popular belief that marriage has a settling effect, researchers have sometimes concluded that marriage reduces crime. Yet at least three accounts of the relationship between marriage and crime can be given. Delinquents may marry when they are ready to settle down, delinquents who are less criminally inclined may be more likely to marry (with marriage marking no change in motivation), or marriage may produce change.
One of the few studies with information sufficient to test whether marriage has a palliative effect is by Farrington and West. They compared men who married between the ages of eighteen and twenty-one with unmarried men at age twenty-one. The two groups had similar histories to the age of eighteen. These comparisons failed to show that marriage reduces delinquency.
Family Intervention and Crime
Because studies of the causes of crime implicate parents, treatment strategies have been aimed at changing parental behavior. Alan Kazdin summarized research on parent management training by noting that it ‘‘has led to marked improvements in child behavior’’ (p. 1351). One long-term follow-up study of home visiting during the first pregnancies of women suggests that such visits produce reductions in juvenile crime (Olds et al.). Unmarried pregnant women were randomly assigned to have a visiting nurse or to be in a comparison group. Those whose mothers received the home visits had less than half as many arrests fifteen years later. Evidence is mounting that training in parental skills can be successful, although more work is necessary both to develop effective strategies across a variety of cultural environments and to assure that the most dysfunctional families receive the training.
Interpreting The Data
After World War II, scientists began to study socialization by producing in microcosm conditions that seemed important for understanding personality development. Early studies generally reflected the psychoanalytic perspective. Aggression was conceived as instinctual, and conscience was thought of as a ‘‘superego’’ that developed from identification with a parent. As Freudian influence declined, researchers began to consider alternative theories.
Laboratory experiments showing that observing aggression can produce aggressive behavior suggest why punitive parents may tend to have aggressive offspring. Imitation of aggression in the laboratory increases when aggression is described as justified. Parents who justify their use of pain as punishment may foster the idea that inflicting pain is appropriate in other contexts.
Much effort has been expended in investigating the role played by rewards and punishments in teaching children how to act. Although it has been demonstrated that prompt feedback increases conformity to norms, some studies also show the paradoxical effects of rewards and punishments. Rewards sometimes decrease performance, and punishments sometimes increase forbidden actions. These studies suggest that use of rewards and punishments can create ambiguous messages. Similar ambiguities may affect parent-child relationships. Lax discipline and the absence of supervision, as well as parental conflict, could increase delinquency because they impede communication of the parents’ socializing messages.
- BOWLBY, JOHN. Maternal Care and Mental Health. New York: Schocken Books, 1966.
- CAPALDI, D. M., and PATTERSON, G. R. ‘‘Relations of Parental Transition to Boys’ Adjustment Problems: I. A Linear Hypothesis. II. Mothers at Risk for Transitions and Unskilled Parenting.’’ Developmental Psychology 27 (1991): 489– 504.
- COOLEY, CHARLES. Human Nature and the Social Order (1920). Introduction by Philip Rieff. Foreword by George Herbert Mead. New York: Schocken Books, 1964.
- CROCKENBERG, S., and LITMAN, C. ‘‘Autonomy as Competence in 2-Year-Olds: Maternal Correlates of Child Defiance, Compliance, and Selfassertion.’’ Developmental Psychology 26 (1990): 961–971.
- CROCKETT, L. J.; EGGEBEEN, D. J.; and HAWKINS, A. J. ‘‘Father’s Presence and Young Children’s Behavioral and Cognitive Adjustment.’’ Journal of Family Issues 14, no. 3 (1993): 355–377.
- DEKLYEN, M.; SPELTZ, M. L.; and GREENBERG, M. T. ‘‘Fathering and Early Onset Conduct Problems: Positive and Negative Parenting, Father-Son Attachment, and the Marital Context.’’ Clinical Child and Family Psychology Review 1 (1998): 3–21.
- FARRINGTON, D. P. ‘‘The Family Backgrounds of Aggressive Youths.’’ In Aggression and Antisocial Behaviour in Childhood and Adolescence. Edited by L. A. Hersov and M. Berger. Oxford, U.K.: Pergamon, 1978.
- FARRINGTON, DAVID, and WEST, DONALD J. ‘‘The Cambridge Study in Delinquent Development (United Kingdom).’’ In Prospective Longitudinal Research in Europe: An Empirical Basis for Primary Prevention of Psychosocial Disorders. Edited by Sarnoff A. Mednick and Andre E. Baert. New York: Oxford University Press, 1981. Pages 137–145.
- FREUD, SIGMUND. New Introductory Lectures on Psycho-Analysis. New York: W. W. Norton, 1964.
- GLASER, DANIEL; LANDER, BERNARD; and ABBOTT, WILLIAM. ‘‘Opiate Addicted and Non-addicted Siblings in a Slum Area.’’ Social Problems 18 (1971): 510–521.
- GLUECK, SHELDON, and GLUECK, ELEANOR. Unraveling Juvenile Delinquency. New York: Commonwealth Fund, 1950.
- GORMAN-SMITH, D.; TOLAN, P. H.; and HENRY, D. ‘‘The Relation of Community and Family to Risk among Urban-Poor Adolescents.’’ In Where and When: Historical and Geographical Aspects of Psychopathology. Edited by P. Cohen, C. Slomkowski, and L. N. Robins. Mahwah, N.J.: Lawrence Erlbaum Associates, 1999.
- HIRSCHI, TRAVIS. Causes of Delinquency. Berkeley: University of California Press, 1969.
- KAZDIN, A. E. ‘‘Parent Management Training: Evidence, Outcomes, and Issues.’’ Journal of the American Academy of Child and Adolescent Psychiatry 36, no. 10 (1997): 1349–1356.
- LISKA, A. E., and REED, M. D. ‘‘Ties to Conventional Institutions and Delinquency: Estimating Reciprocal Effects.’’ American Sociological Review 50 (August 1985): 547–560.
- MATSUEDA, R. L., and HEIMER, K. ‘‘Race, Family Structure, and Delinquency: A Test of Differential Association and Social Control Theories.’’ American Sociological Review 52 (December 1987): 826–840.
- MCCORD, JOAN. ‘‘A Longitudinal View of the Relationship between Paternal Absence and Crime.’’ In Abnormal Offenders, Delinquency, and the Criminal Justice System. Edited by J. Gunn and D. P. Farrington. Chichester, U.K.: John Wiley, 1982. Pages 113–128.
- MCCORD, JOAN. ‘‘Long-term Perspectives on Parental Absence.’’ In Straight and Devious Pathways from Childhood to Adulthood. Edited by L. N. Robins and M. Rutter. Cambridge, U.K.: Cambridge University Press, 1990.
- MCCORD, JOAN. ‘‘Family Relationships, Juvenile Delinquency, and Adult Criminality.’’ Criminology 29, no. 3 (1991): 397–417.
- MCCORD, JOAN. ‘‘Family as Crucible for Violence: Comment on Gorman-Smith et al.’’ Journal of Family Psychology 10, no. 2 (1996): 147—152.
- MCCORD, J.; MCCORD, W.; and THURBER, E. ‘‘Some Effects of Paternal Absence on Male Children.’’ Journal of Abnormal and Social Psychology 64, no. 5 (1962): 361–369.
- MEAD, GEORGE HERBERT. Mind, Self and Society from the Standpoint of a Social Behaviorist. Edited with an introduction by Charles W. Morris. Chicago: University of Chicago Press, 1962.
- OLDS, D. L.; HENDERSON, C. R.; COLE, R.; ECKENRODE, J.; KITZMAN, H.; LUCKEY, D.; PETTITT, L.; SIDORA, K.; MORRIS, P.; and POWERS, J. ‘‘Long-term Effects of Nurse Home Visitation on Children’s Criminal and Antisocial Behavior: 15-year Follow-up of a Randomized Controlled Trial.’’ Journal of the American Medical Association. 280 (1998): 1238–1244.
- OLWEUS, DAN. ‘‘Familial and Temperamental Determinants of Aggressive Behavior in Adolescent Boys: A Causal Analysis.’’ Developmental Psychology 16 (1980): 644–660.
- PLATO. ‘‘Laws.’’ The Dialogues of Plato, 2. Translated by B. Jowett. New York: Random House, 1937. Pages 407–703.
- POWER, T. G., and CHAPIESKI, M. L. ‘‘Childrearing and Impulse Control in Toddlers: A Naturalistic Investigation.’’ Developmental Psychology 22, no. 2 (1986): 271–275.
- PULKKINEN, LEA. ‘‘Search for Alternatives to Aggression in Finland.’’ Aggression in Global Perspective. Edited by A. P. Goldstein and M. Segall. Elmsford, N.Y.: Pergamon, 1983. Pages 104–144.
- ROFF, MERRILL. ‘‘Long-Term Follow-up of Juvenile and Adult Delinquency with Samples Differing in Some Important Respects: Crossvalidation within the Same Research Program.’’ The Origins and Course of Psychopathology. Edited by John S. Strauss, Haroutun M. Babigian, and Merrill Roff. New York: Plenum, 1977. Pages 323–344.
- RUTTER, MICHAEL. ‘‘Epidemiological Strategies and Psychiatric Concepts in Research on the Vulnerable Child.’’ The Child in His Family: Children at Psychiatric Risk. International Yearbook for Child Psychiatry and Allied Disciplines, 3. Edited by E. James Anthony and Cyrille Koupernik. New York: Wiley, 1974. Pages 167–179.
- SAMPSON, R. J., and LAUB, J. H. Crime in the Making: Pathways and Turning Points Through Life. Cambridge, Mass.: Harvard University Press, 1993.
- STRASSBERG, Z.; DODGE, K. A.; PETTIT, G.; and BATES, J. E. ‘‘Spanking in the Home and Children’s Subsequent Aggression Behavior toward Kindergarten Peers.’’ Development and Psychopathology 6, no. 3 (1994): 445–461.
- SUTHERLAND, EDWIN, and CRESSEY, DONALD R. Criminology (1924–). 10th ed. Philadelphia: Lippincott, 1978.
- WEST, DONALD, and FARRINGTON, DAVID P. The Delinquent Way of Life: Third Report of the Cambridge Study in Delinquent Development. London: Heinemann, 1977.
Rock Music as a Resource in Harmonic, Melodic and Metric Dictation
Students in many ear training classes are too often insulated from realistic and musical learning experiences. Most ear training involves the teaching of harmonic, melodic, and rhythmic perception by means of dictation. The music used in this dictation is usually played at a piano and divorced from context—a chord progression played as block chords in the middle register of the piano at a dynamic level of mezzo forte and in a uniform, homogeneous rhythm. In melodic dictation the student is expected to transcribe a melody which may again be similar rhythmically, dynamically, and in terms of register to the above-mentioned description of the harmonic dictation exercise. Often these melodies are composed by the instructor in an effort to test the student's ability to perceive specific intervals. The student after a period of time, for example, may be expected to hear, say, a minor seventh, a tritone, or a major sixth, frequently within a brief three- or four-phrase melody. Some have referred to this most unmusical endeavor as "hearing specific intervals within context." At best, however, this may be likened to a Reader's Digest-like realization of an entire piece; hit all of the high points, spare the fluff, and be done with it. If one wanted to test for many specific intervals one might as easily play an all-interval twelve-tone row, such as that found in Alban Berg's "Lyric Suite." Then, when the student could correctly transcribe this melody, the instructor would be assured that the student had a mastery of all of the intervals, and within context! (Of course this last point is slightly exaggerated.)
The student may benefit from occasional harmonic, melodic and rhythmic dictation drills at the piano in the same manner that a performer does from the practicing of scales, arpeggios, and etudes. And as the performer uses scales and exercises only as a means to an end, so too should the student concentrate on "real" music, that is, the use of recorded examples of diverse styles, periods, instrumentation, harmonies, melodies, and tempi, and use non-contextual musical drills only as a secondary means for acquiring the requisite transcription skills. The student who only listens to music played on the same instrument (the piano, computer, etc.), at the same tempo, register, and dynamic level, and out of context will not be equipped to transcribe live or recorded music or music of styles to which he/she is unaccustomed.
In order to develop the essential skills of music perception and transcription, the student must work extensively outside of the classroom. Ear training labs, usually featuring numerous taped harmonic, melodic, and rhythmic dictation exercises, may be helpful. Many ear training labs, in addition to tapes, now feature interactive computer assisted instruction programs which allow a student to receive individualized instruction and hear varied material at graded levels. Many of these programs allow the student to focus upon deficiencies, progress to an appropriate level when the preceding level has been mastered, and keep track of the amount of time spent on the computer. Presently, the disadvantages of the computer as an ear training accessory/pedagogue are centered around the available software and include limited tone color capacities, rhythmic rigidity, brevity of musical excerpts, and, as in the case of dictation from the classroom piano, the use of unrealistic and homogeneous musical material.
Regardless of the successes or shortcomings of the ear training lab, the student must practice these skills in situations outside of the music school. One of the best means of motivating students to listen analytically in an extra-academic setting is to require them to transcribe the music to which they are constantly exposed. A student, for example, hears Diet Coke, Burger King, Doublemint gum, U.S. Army, Miller, and Budweiser jingles each week, as well as television theme songs, music from M-TV and VH-1, radio stations, car horns and church bells. All of these pitch/music sources provide more numerous and diversified opportunities for transcription, perception, and analysis than those which the student faces in the music theory classroom or lab. The thought that a plethora of educational experiences constantly confronts the student (twenty-four potential hours of dictation practice each day) probably has not occurred to either the student or the professor. Many music students surprisingly do not make the connection between what takes place in the classroom or lab and the transcription of music from other sources. Students who have been taught using only the piano or an unimaginative computer have often responded with animosity or confusion when first asked to transcribe recorded music or music which has not been played at the piano in the sterile manner mentioned above, objecting that they could transcribe the music if it were played on the piano, played slower, played louder, etc. Students can and will overcome these problems if they are consistently challenged to transcribe and analyze varied types of music in context. Most students, for example, have heard the common chimes theme, but surprisingly few can correctly notate it.
Recordings of rock music, as well as country, jazz, and other popular styles, can be used effectively as a resource in the teaching of classical music theory and ear training. (For convenience, the term "rock music," will be used throughout to mean all types of non-Classical Western music.) Whether rock music is an "art" form deserving of observation and analysis seems to have been answered in the affirmative. Witness the number of music appreciation textbooks discussing the subject, or the significant and philosophically varied members of the Classical music world who have praised and continue to praise rock music as an important form—Leonard Bernstein, Philip Glass, and John Cage, to name a few. (John Rockwell, in The New York Times, January 19, 1986, claimed that the impact of rock videos upon the composition, staging, and content of Classical opera is profound and that the two will become more closely related in the future.) Most college students grew up listening to rock music. It is the music that they like best and hear most often. It is their "folk" music. Whether a student is exposed to Classical music later and begins to like it or dislike it, the student will probably continue to enjoy rock music. In addition, many students will work harder at acquiring the necessary skills for melodic, harmonic, and rhythmic dictation if some of the music that they are studying is drawn from this body of music. A carefully selected collection of rock music works in the Classical music ear training course will be very helpful for the student's acquisition of skills in music perception and analysis. Studying Classical theory and ear training can seem more germane when the musical experience can be applied outside of the classroom; for example, when a student discovers the parallels between harmonic progressions and melodic and rhythmic structure in Classical and rock music, and that analytical techniques and views from one domain can be used in the other to yield more information about both styles. There are countless examples of parallels between rock and Classical music in their technical content. Such parallels can be helpful in emphasizing a point. When parallels do not exist, the students also benefit in that they are better equipped to understand the technical differences between the two types of music.
The student who aspires to be a public school music teacher needs to be familiar with rock music. Students will expect him/her to be familiar with "their" music—to discuss it, teach it, and transcribe it when necessary. In fact, many young musicians have become interested in Classical music only after initially experiencing rock music, then studying theory and noticing the points of coincidence. Classical music offers a longer history of works and a relatively well developed corpus of theoretical ideas and analytical devices to entice the student. Adept theory teachers, at the college, high school, or elementary levels, can "convert" some students to Classical music if the teacher can speak the student's first language, that is, when he/she knows the literature and theoretical substance of rock music.
As mentioned above, one finds many parallels between rock and Classical music. The teaching of voice leading and the intelligent construction of harmonic progressions according to common practice principles is an important concern and one that is often elusive for the first- or second-year theory or ear-training student. For example, one would want the student to harmonize the first five bass notes of the major scale as I-V-I6-IV-V, I-vii6-I6-IV-V, or I-V-I6-ii6-V rather than I-ii-iii-IV-V or the non-diatonic succession of major chords, I-II-III-IV-V, as occurs in Jim Croce's "Bad, Bad Leroy Brown." The chorus of Stevie Wonder's "As" features the progression i-V-i6-IV; the chorus of Irene Cara's "Flashdance" features I-V-I6-IV-V11-V. (The V11-V succession in rock music is analogous to the cadential I6/4-V of Classical music.) Augmented sixth chords which resolve to V and I are found in several Beach Boys' songs. Secondary dominants are frequently found in the music of Prince, They Might Be Giants, Beatles, Elvis Costello, R.E.M., Neil Young, The Pixies, The Traveling Wilburys, Bob Dylan, and others. Chord successions such as I6-IV, V6-I, V-I6, and V/V-V6 are extremely common, as well.
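For instructors who want to turn progressions like these into playable dictation drills, a few lines of code can realize the Roman numerals automatically. The sketch below is only illustrative and assumes the freely available music21 toolkit (not something required by the argument here); the key, progression, and chord durations are arbitrary choices.

```python
# Illustrative sketch using the music21 toolkit (an assumed dependency):
# realize a Roman-numeral progression as block chords for a dictation drill.
from music21 import meter, roman, stream

figures = ["I", "V", "I6", "IV", "V"]   # one of the "good" harmonizations above
drill = stream.Stream()
drill.append(meter.TimeSignature("4/4"))
for fig in figures:
    rn = roman.RomanNumeral(fig, "C")   # spell the numeral as pitches in C major
    rn.quarterLength = 2.0              # two beats per chord
    drill.append(rn)

drill.show("text")    # list the realized chords
# drill.show("midi")  # or play them back for the class
```

Substituting ["I", "II", "III", "IV", "V"] reproduces the non-diatonic "Bad, Bad Leroy Brown" succession for comparison.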
A strong point of divergence between Classical and rock music, however, occurs in the area of modulation. Modulation in rock and popular music is seldom discussed in theory classes and even more rarely in ear training classes. Modulation to the key of the dominant (or the key of the relative Major when the original key is minor), the mainstream practice of Classical period music, is mostly nonexistent in rock music. The most common (and uninteresting) modulation in rock music is that from the tonic key to the key of the flatted supertonic, a well-worn cliche which usually occurs at the beginning of the last verse. There are many other less frequently used modulations which can provide good harmonic perception exercises. To list a few:
Beach Boys, "Don't Worry Baby"—modulations between each verse and chorus—verses in the key of I Major and choruses in the key of II Major:
verse: Key of C: I-IV-V-I-IV-V-ii-V-iii
V/ii (V/ii = D:V)
chorus: Key of D: I-ii-V-I-ii-V-IV11 (IV11 = C: V11)
Christopher Cross, "Think of Laura"—verses: I Major, choruses: VI Major:
verse: Key of C: I-V6-ii-vi-IV
I-V6-ii-vi-IV-V-V/ii (V/ii = A:I)
chorus: Key of A: I-vi-V/ii-ii-V--I-vi-V/ii-ii-V
Van Halen, "Jump"—verses and choruses: I Major, bridge: Major:
verse: Key of C: I bridge: Key of : vi-IV-V-I-vi-IV-V-I-vi-IV-V-I
Jefferson Starship, "No Way Out"—verses: i minor, chorus: Major:
verse: Key of d minor: i iv-V-i-VI-V-iv-VI-V
chorus: Key of Major: I-V-I6-IV
Talking Heads, "And She Was"—verses: I Major and Major, chorus: I Major, bridge: v minor:
verses: Key of E:
Key of F:
chorus: Key of E: I-IV--IV-I-IV--IV-I bridge: Key of b minor: i-VI-i-VI
Talking Heads, "Wild Wild Life"—verses:
verses: Key of F:
Key of :
I-IV-V-I-IV-V-V/ii (V/ii = : I)
chorus: Key of : I-IV-V-IV-I-I-IV-V-IV-I
I-IV-V-IV-I-IV-V-IV-V (V = F: I)
bridge: Key of : VII-I-IV-V-IV-VII-I-IV--I (I = F: )
Chicago, "You're the Inspiration"—intro: VI Major, verse: I Major, chorus: III Major:
intro: Key of A: I-IV-V-I-IV-V (V = C: III) verse: Key of C: I-I6-iii-vi-vi-IV
transition: V6-I-IV6--V6/vi-vi-V6/V-V-V6/vi-VI (VI = E: IV) V6
chorus: Key of E: I-I6-IV-V-I-I6-IV-V-♭III (♭III = C: V)
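The pivot notation used in these examples, e.g. "V/ii (V/ii = D: V)" in "Don't Worry Baby," can also be checked mechanically. The short sketch below again assumes the music21 toolkit and simply confirms that the two readings name the same sonority:

```python
# The chord labeled V/ii in the old key (C) is the same A-major triad
# as V in the new key (D), which is what makes the pivot work.
from music21 import roman

as_old_key = roman.RomanNumeral("V/ii", "C")
as_new_key = roman.RomanNumeral("V", "D")
print([p.name for p in as_old_key.pitches])   # ['A', 'C#', 'E']
print([p.name for p in as_new_key.pitches])   # ['A', 'C#', 'E']
```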
Traditional ear training courses seldom include "metric" dictation, yet in rock music, such a study is appropriate. Rock groups often explore meters other than 4/4. The following is a brief list of unusual meters:
Allman Brothers, "Whipping Post" (verse: 6/8; chorus: 11/8 = 6/8 + 5/8)
Band, "Just Another Whistle Stop" (verse: 4/4)
Beach Boys, "Hold On Dear Brother" (verse: 3/4; chorus: 5/4 = 3 + 2)
Beatles, "All You Need Is Love" (verse: 7 = 4+3; chorus: 26 = 4+4+4+4+4+4+2)
Beatles, "Good Morning Good Morning" (intro: 4+4+4+4; verse: 5+5+5+3+4; 5+4+3+3+4+4 ...)
Beatles, "She Said She Said" (verse: 10 measures of 4/4; chorus: 4+4+3+3+3; 3+3+3; 3+3+3)
Adrian Belew, "Laughing Man" (11)
Byrds, "Get to You" (verse: 5/8)
Cream, "Passing The Time" (intro: 5+5+5; 2+2+3; 2+2+3; 5+5+5; 2+2+3; 2+2+3; verse: 4+4+4+4; 4+3+4+3)
King Crimson, "Thela Hun Ginjeet" (7/16 vs. 4/4)
King Crimson, numerous compositions involving polymeter—"Three Of A Perfect Pair," "Frame By Frame," "Discipline," "Neal And Jack And Me," and others
John McLaughlin, "Dance of Maya" (20/8)
John McLaughlin, "Dawn" (20 = 6+6+6+2)
Police, "Murder By Numbers" (drums introduction: quarter-note triplets which sound like 3/4; the band enters in 4/4 while the quarter-note triplets continue in the drums)
Sting, "Straight to My Heart" (7)
Weather Report, "Hernandu" (verse: 17 = 6+6+5; solos: 11 = 6+5)
Yes, "Changes" (17 = 4+3+4+3+3)
Frank Zappa, "Pound For A Brown On The Bus" (14/16; 14 = 3+4+3+4)
Frank Zappa, "Little House I Used To Live In" (11)
Frank Zappa, "Marqueson's Chicken" (series of 7s, followed by series of 5s, then 6s)
Frank Zappa, "Tink Walks Amok" (11)
Frank Zappa, "Ya Hozna" (band: 3/4, drums: 5/8)
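Additive meters like those above can likewise be represented directly for metric-dictation exercises. The sketch below again assumes the music21 toolkit, with the "Whipping Post" chorus chosen arbitrarily as the example, and builds its 11/8 as 6/8 + 5/8:

```python
# Represent an additive meter (6/8 + 5/8 = 11/8) for a metric-dictation drill.
from music21 import meter

ts = meter.TimeSignature("6/8+5/8")   # composite signature, as in "Whipping Post"
print(ts.ratioString)                 # 6/8+5/8
print(ts.barDuration.quarterLength)   # 5.5 quarter notes per measure
```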
Rock music and recordings of non-Classical music should be studied in the teaching of ear training and theory. Rock music's parallels with Classical music reinforce those theoretical constructs underlying both styles—the differences between the two also lead to a better general understanding. The student who listens only to common practice period music in the classroom or at the computer, devoid of context and instrumentation, will not be prepared to understand musical styles either aurally or intellectually.
Dr. E. Michael Harrington is Music Business Program Faculty Chair at SAE Nashville. He has created and taught courses at the Berklee College of Music, Belmont University, UAB, and William Paterson University and frequently teaches training sessions at Harvard Law School. In 2014 & 2015, he was requested and gave testimony on copyright law revision to the U. S. Patent & Trademark Office, U. S. Copyright Office and the U. S. Dept. of Commerce. He is on the Advisory Board of the Future of Music Coalition, a member of Leadership Music and owner of E Michael Music.com.
He has worked as consultant and expert witness in hundreds of music copyright/IP matters involving parties including the Dixie Chicks, Adele, Vince Gill, Steven Spielberg, Steve Perry, Sting, Samsung, Tupac, Lady Gaga, Deadmau5, Busta Rhymes, HBO, U. S. Postal Service and others, delivered lectures at law schools, organizations and universities throughout the U. S. & Canada including Harvard, Yale, Boston College, Cardozo, GWU, Boston Bar, Texas Bar, Minnesota Bar, Future of Music Coalition, McGill, Carelton, Experience Music Project and others, and been interviewed by media including the New York Times, Wall Street Journal, Bloomberg Law, Bravo, CNN, Washington Post, ABC, NBC, CBS, Fox, the "Today Show," Time, Huffington Post, Fortune, Inc Magazine, Bio Channel, Ovation, NPR, the Associated Press, Salon, PC Magazine, Billboard, PBS, Canadian Broadcasting Corporation, Barely Legal Radio and others.
1. The muzzle grasp
When two dogs are hanging out and then one appears to bite the other’s muzzle, it can be a bit shocking to the owners, but it isn’t something to be worried about. This is just a social behavior dogs got from their wolf ancestors. The dogs are affirming their social relation: the dog being grasped is insecure and making sure it’s still the other dog’s pup.
Muzzle grasping is usually done between dogs who know each other well, but an insecure dog can also ask a human to do it. If your pup approaches you while puffing its nose, you can gently grasp its muzzle and reassure it that you'll take care of it. Your pup will be happy to know it.
2. Cat in the box
It’s no secret that cats love cardboard boxes. From small indoor cats to big tigers, they can’t get enough of the box. They like to sit in boxes even if they barely fit. It seems that, for domestic cats at least, boxes reduce their stress by providing a safe enclosed space.
Cats like a place where they can observe others, but not be seen. They like to hide from predators and prey, which boxes help with. Or at least cats feel like they do. They might also like boxes for their warmth or maybe they just want to explore the new thing in their environment. We still don’t really know.
3. The head tilt
You might have noticed that sometimes when you talk to your dog, she tilts her head to the side. It’s very cute, but why is she doing it? Well not much research has been done on this, so scientists don’t have a clear answer but they do have some plausible hypotheses.
There are two possible reasons for the head tilt. Dogs might be tilting their heads to get a better view of your facial expression, because it could be hard to see around a long muzzle. But they might be doing it instead to hear you better, possibly by adjusting their floppy ears.
4. Flehmen response
If you’ve ever seen a cat with a particularly funny look on their face, they may be doing something called “flehemen.” Look for an open mouth with drawn back lips; maybe the cat looks kind of disgusted or grimacing. Don’t take offense, because there’s a different reason for this expression.
The flehemen response is actually a kind of smelling method. Instead of sniffing with their nose, the cat is drawing air over an organ on the roof of their mouth. This organ is called the vomeronasal organ and it can smell different scents than the nose. Other animals, like tigers and horses, do this too.
While a person yawning is perceived as bored or tired, yawning actually has a different meaning to dogs. Your dog’s yawns aren’t necessarily a sign of tiredness, but are often a pacifying behavior. They’re doing it to show friendship and peace to you or another dog. And they probably interpret your yawn the same way.
While yawning may have served a more physiological purpose originally, it’s evolved to become a form of communication. As with many dog behaviors, it evolved as a part of their social pack behavior. Hierarchy and dominance is very important in a pack, so communication is important. If you yawn back, you’re affirming the peace.
6. Licking your face
Some find it endearing, others find it annoying, and many find it gross, but dogs keep licking people’s faces. Even some wolves do it to humans. Why are they doing this? Well, it’s to show they think of you as a friend and to suppress any aggressive or dominance behavior.
Licking your face is a dog’s gesture of peace, so the best response to it is to close your eyes, turn your head away, and yawn. To the dog, this means you accepted their friendship. On the bright side, the germs from their tongue are no worse than the germs you get from kissing other humans.
7. For the love of catnip
Cats are notorious for going crazy for catnip. They rub their bodies all over it and some even lick it up. This behavior is actually a response to a chemical compound, called nepetalactone, that the catnip is producing. For the plants, nepetalactone wards off insects, but for cats it does something quite different.
Scientists think nepetalactone is similar to cat pheromones, which are chemicals animals use to communicate. The nepetalactone molecule goes into a cat’s nose and then binds to the same receptor a pheromone would bind to, this effectively signals to the cat’s brain that there’s tons of pheromones around.
8. Catnip on the brain
Researchers investigated what catnip is doing to cat brains and found that catnip, through the molecule nepetalactone, is stimulating three areas of the cat brain. It affects the olfactory bulb, which processes smells, the amygdala, which is tied to emotions and decisions, and the hypothalamus, which is involved with a number of things, including sexual response.
The hypothalamus stimulation may be the reason for the rolling around, which is what female cats do when in heat. It may also be why kittens don’t normally react to catnip until they become sexually mature. But not all adult cats go wild around catnip, because the reaction is genetic. However, catnip craziness isn’t limited to pet cats, some big cats love catnip too.
9. Panting for a cause
You’re probably used to your pupper panting with his mouth open and tongue lolling out, but do you know why he’s doing that? Dogs pant to cool down because the evaporation of their saliva removes heat. Humans sweat to do this, but dogs only have sweat glands on their paws so they can’t get rid of a lot of heat that way.
When breathing hard and fast, dogs evaporate more water. Letting the tongue hang out also increases the amount of evaporating saliva and cools down the blood circulating in the tongue. All mammals pant under certain conditions, like when it’s really hot or from overexertion. Just make sure to give your dog some water when he pants, because he’s losing a lot of water from doing so.
10. Just shake it off
Have you ever been caught in an unwanted shower when your wet dog voraciously shakes itself? Well that behavior evolved for a very important reason: keeping the doggy toasty warm. Fur needs to be dry to keep an animal warm, because it can’t trap air when it’s wet.
Since an animal could get hypothermia when wet in cold weather, getting dry quick is crucial. Other mammal species, like mice and bears, shake themselves to get dry too. Animals have to shake hard enough to break the water’s surface tension, so smaller animals actually have to shake faster. Dogs get help from their loose skin flapping around, because it helps to throw off water.
11. Scratching everything
While cats are adorable, they can get annoying at times. One of the worst things cats do is scratch the furniture, carpet, and just about anything. Unfortunately, this is a natural behavior for them. Scratching is good for their claws, because it removes the dead outer layer of the nail.
Plus, scratching stuff serves as a way for cats to mark their territory. Cats are pretty territorial, and are used to living alone, so they don’t like other cats encroaching on their area. Cat paws have scent glands, so the smell and visible scratches serve to warn other cats, “This is my territory!”
12. Not the couch!
Not only is scratching good for their nails and for marking their territory, it also helps cats stretch their back and shoulder muscles. So maybe your cat is avoiding your small scratching post because it can’t get a good enough stretch on it. Don’t you just love a good stretch?
And if your cat didn’t have enough reasons to scratch your furniture, it seems that scratching is also a stress reliever and emotional release. Your cat may do it when she’s anxious or excited or frustrated. Whatever the reason your cat is scratching, to get her off the furniture and carpet, you should give her a tall scratching post.
13. Rolling in the grass
When dogs go to the beloved outside, they like to roll themselves all over the grass. Why do they insist on doing this? It might be another leftover behavior from their wolf ancestry. Wolves will roll around in an interesting odor to get it on their body and then bring the scent back to their pack.
The pack then sniffs the scent and may even follow it back to its origin. Maybe your dog is finding an interesting smell and bringing it to you, because you’re its family. Or maybe it’s rolling in the grass for an entirely different reason. There’s a few other possible explanations.
14. Stinky and itchy
While your dog may be rolling around to bring you a cool new scent, he might also be doing it to get rid of an unwanted smell. Perhaps you bathed him with something that smells too strong, and he just wants it off. Next time, bathe him with something that has no scent and he might be happier.
But if your dog is especially itchy, he may just be trying to get a good scratch in. If he’s seems especially itchy, he might have some kind of bugs bothering him and you should probably have him checked out. Since there’s a few possibilities for rolling in the grass, pay attention to your dog to figure it out.
15. Pupper’s appetite for grass
When they’re not rolling in it, they’re eating it. Why do dogs like to eat grass? Well if they’re not feeling well they might eat a bunch to make themselves vomit, but this doesn’t happen every time. So why else could they be eating it? It probably goes back to their pre-modern life.
Since dogs were scavengers and had to eat whatever they could find, they’re pretty much down to eat anything nowadays. And since anything includes grass, they eat grass. It might be providing a good source of fiber and minerals for your pup. Cats, however, eat grass for a different reason.
16. Kitty’s appetite for grass
When you let your cat outside, they take in some sun and then start munching on the grass. And then they probably throw up the grass they literally just ate. Cats can’t digest plants (although they might get a vitamin from it) so why did they bother eating it in the first place?
It’s likely that cats eat grass to throw up other indigestible material in their digestive tract: the bones, fur, and feathers of their prey. Your cat might not eat live animals now, but it’s pretty hard to get rid of a behavior that natural selection has been working on for millions of years.
17. The mysterious purr
Few things bring more joy to a cat person than their kitty purring; it’s soothing and comforting. But while cats often purr when cuddling, they also do it in dramatically different situations, like when they’re in pain or giving birth. Since it’s done in such different emotional circumstances, it isn’t considered communication.
In fact, scientists think it’s actually a form of self medication. Low frequencies have been shown to help build bone density, and cat purrs fall right in this range. So purrhaps, cats are purring to heal and maintain healthy bones. And it’s even possible that your bones could benefit from being next to their pawsitive vibrations.
18. The unwanted present
It’s not fun coming home to a wounded bird or rodent in your home, but your cat doesn’t understand your revulsion. She’s gone out and caught this animal just for you. When wild cat mothers raise their kittens, they start teaching them how to hunt by bringing back prey.
This present starts the kitten eating meat instead of milk, but also provides them with something to test their hunting skills on. So when your cat brings you an unwanted present, she’s probably trying to teach you how to hunt. But since male cats can also bring back these little gifts, it may be the cat’s instinct to get its prey to a safer place than where they caught it.
19. Following into the bathroom
Dogs just can’t seem to give you any privacy. They tend to follow their owners into the bathroom, which can be a little uncomfortable, to say the least. Sure, it’s nice to have a bathroom buddy when you’re out in the woods at night, but in your own home, it’s really unnecessary. Despite that, dogs have their pack mentality and they just don’t see privacy the way you do.
Dogs’ wolf ancestry is most likely to blame. Since you are part of your dog’s pack, he’s just showing his loyalty. Or perhaps he’s just very curious about what you do when you close the door. However, if he consistently follows you everywhere it may be because he’s insecure or thinks you need to be guarded all the time. These behaviors can become dangerous and consulting a veterinarian could be the best option.
20. A good ol’ tail wag
Dogs are well known for two things: being man’s best friend and wagging their tails. Generally, a dog’s tail wag is accepted to mean happiness and friendliness, but it may actually be more nuanced than that. For instance, if your dog is wagging its tail slowly, that means he’s feeling uncertain.
However, if he’s wagging his tail energetically it means something else entirely. Most likely it means the dog is happy and excited, but a couple of studies have shown that the side he wags on can mean different things. Who knew a dog’s tail wag could communicate so much?
21. Wagging right vs. left
Your dog may be telling you more than you think with how she wags her tail. If your pupper wags slightly more to her right side, it means she sees something she wants to approach. Most likely this is a human, like you. However, if she wags more to her left side, she might be looking at something she wants to avoid.
Your dog might want to avoid a more dominant dog. When scientists showed videos of dogs wagging their tails to other dogs, the watching dogs were anxious about left side wags but pretty calm about right side wags. But if your dog wags her tail right in the middle, who knows what it means.
22. It’s all in the tail
The further we get into the tail wagging business, the more complicated it gets. For instance, if the tail is wagging widely, it’s probably a more positive sign than a tail wagging in tiny movements. But of course, dogs do use their tails to communicate in other ways than just wagging.
If your dog has his tail lowered between his legs, he’s probably scared, anxious, or submissive. But if your dog has his tail held high, he might have seen something really interesting, or it might be a threatening and dominant signal. A middle tail height probably means your dog is relaxed and happy.
23. Kitty’s raised tail
While dog’s tails are often wagging, cat tails aren’t as energetically expressive. However, it can still tell you something. When a cat walks up to you with its tail held high, it’s greeting you. Give that kitty a little head rub and it may in turn rub up against you.
The rubbing or head butting is probably your cat’s way of marking you with its scent, although if it’s a strange cat that you’ve just met, she’s probably trying to get information about you. While this is behavior cats do to other cats, there’s one common behavior that cats only do around humans.
You learn it as a child, dog goes bark and cat goes meow, but they don’t exactly teach you what those meows mean. If you’ve ever owned a cat, you likely know that meows mean different things in different situations. Plus, it varies between individual cats. But you might not have known that cats only meow to humans.
While kittens meow to mom, when hungry, scared, or cold, adult cats don’t meow to each other. They communicate with each other in other ways, like hissing, growling, and scent marking. So when your kitty is meowing, they’re trying to tell you something, like feed me, pet me, or maybe just hi. Pay attention to your cat and you’ll probably learn to distinguish the different meows.
25. The exposed belly and attack
Cats are well known for exposing their belly and then attacking if you try and rub it. Of course, there’s the rare cat that actually lets you rub their belly, but most don’t like it. When cats do show you their tummy, they’re really showing that they trust you.
Then you break their trust when you reach for it. A cat’s belly is its most vulnerable body part because right under that fluff are their crucial organs. So instead of going for the belly rub, pet your cat’s head instead. Of course, a few cats actually like their belly rubbed, but you have to risk the scratches to figure out which ones.
26. Burying the toy hatchet
You might be getting annoyed at your pup for digging in your garden and burying things like toys and food, but your dog isn’t just going to lose an instinctual behavior so quickly. Out in the wild, food is a precious resource compared the plentiful bounty of the food bowl.
Dogs’ ancestors buried food so they could come back and eat it later. Maybe they found too much food for one meal, so they had to take a doggy bag back to the hole. Just think of your garden as your dog’s very own refrigerator and not, well, your garden.
27. Regurgitation near puppies
You’ve heard of mama birding, but what about mama dogging? Mama dogs sometimes puke up their food near their puppies. She isn’t sick, she’s actually feeding them. Of course these puppies could just wander over to their food bowl and eat there, but it’s a trait from a time when they couldn’t.
In the wild, wolf cubs can’t hunt for their own food, so the parents’ help is needed to feed them. It might be kinda gross to you, but mama dog is just trying to do her mama-birding best for her puppies. So, don’t get too mad at her when she vomits for her puppies.
28. Making biscuits
One strange thing cats do, as cat people lovingly call it, is make biscuits. They knead their little paws on blankets, furniture, and you. It’s pretty cute, if painful, but why do they do it? Most scientists think it’s a neotenic behavior, meaning it was a juvenile behavior that adults just kept doing.
Kittens knead their mothers’ bellies to make her make milk, and since some adult cats also suckle when they knead, this explanation makes sense. However, adult wild cats don’t knead, so why do domestic ones? While cats are fairly similar to their wild counterparts, domestication has changed them in a few ways.
29. Do they need to knead?
Kittens knead because it’s necessary for getting milk, but wild adult cats don’t. Yet domestic adult cats do, so what gives? It turns out that neotenic behaviors, like kneading, are mostly found in domestic animals and not in their wild cat relatives. So while cats aren’t super different from domestication, this seems to be one change.
The neotenic behavior is likely because humans artificially selected sociable and less aggressive cats, which are traits more similar to kittens than full grown wild cats. Generally, wild cats are loners, but house cats aren’t nearly as much. So kneading has become a way for adult cats to show they trust you and feel safe.
30. Circling before bed
As the day (or the article) winds down, it gets to be time for bed. Your dog wants to join you, but first she circles around a spot a few times before settling in. Why? Well, back when dogs lived in the wild they had to make the ground suitable for sleeping somehow.
Grass and dirt certainly aren’t as comfortable as a plush bed, but dogs made it work. If the dog was settling down in a grassy area, she probably needed to pat down the tall grasses, so circling was a good way to do that. Plus, the movement might have driven out insects and reptiles that could threaten her puppies.
31. Sleepy kitty
Have you ever thought, if only I was a cat and I could sleep all day? Well maybe all that fantasizing stopped you from wondering why they sleep so much in the first place. People think it all comes back to their past eight lives as hunters.
Hunting takes energy, so sleeping a lot conserves that needed energy. Also, cats’ prey often come out at dawn and dusk, so that’s when cats are the most active. For much of their 12 or so hours, cats are just dozing in a light sleep so they can quickly get up if needed, but they do take short deep sleeps, too.
32. Why do cats lick you?
It’s super cute when your cat licks you, as it feels like they really do care about you. But then, after a few licks, it feels like someone’s rubbing sandpaper on your skin. Their tongues are made for ripping meat off the bone, so why are they licking you? Well there’s the other thing their tongue is for: cleaning.
Cats are great because you don’t need to bathe them, and that’s all thanks to their barbed tongue. Sometimes, cats will groom other cats, usually members of their family. So your cat might be grooming you, seeing as you’re part of her family. But there are some other possible reasons your cat likes to lick you…
33. Rough sandpaper kisses
Cats might lick (or even bite) you for attention. They want something, maybe play or pets. But if it’s excessive licking, the cat might be stressed about something. However, sometimes your cat licks you just because you taste interesting. Maybe you spilled something on yourself or you’ve got water on you from the shower or your cat just likes the salt on your skin.
But there’s also the fact that your cat may just be licking you to show affection. Cats do this to each other, so they might be just showing you some love and wanting some love in return. Licking often means the cat is calm, but since there’s several possible explanations, pay attention to the context of your cat’s licking to figure out the real reason.
34. Rolling over while playing
While rolling over can often be submissive behavior to stop aggression, during play it means something entirely different. Researchers watched a lot of dogs play, analyzing their behavior when they rolled over, and found that the dogs weren’t being submissive at all.
The dogs rolling over are just playing, and want to keep playing. Often, they do it to avoid a play bite, but sometimes they do it to get into a better position to give a play bite. Either way, the dog rolling over is not saying, “you went too far and I want to stop.” In fact, some people think bigger dogs will roll over to give the smaller dog a fairer play fight.
35. Barking their heads off
Wolves don’t bark much at all, compared to the other sounds they make. But domesticated foxes bark, when wild ones don’t. So what’s the deal with barking and why do dogs do it? Because sometimes they bark so much you just want to go back to the shelter and trade your dog in for a new one.
Well, their barks have different meanings. When they bark at a stranger, it sounds different than when they’re just barking alone or when they’re playing. But it seems like it has something to do with their domestication and the fact that they were bred to be less aggressive. Unlike the cat’s meow, though, dogs do seem to communicate with each other by barking.
36. Your dog is just so excited to see you again!
There’s nothing better than coming home to your dog. He’s overwhelmingly excited and acting like it’s been days, even if it’s only been a few hours, since he last saw you. Why do they do this every time you come home? Scientists took a peek into dog brain scans to understand it better. The smell of familiar humans triggered their brain’s reward center like no other smell did.
Plus, scientists did an experiment and found that the reunion of owner and dog is quite similar to a reunion between a human mother and child after they’ve been apart for some time. Dogs are very social and don’t like to be left alone, so they get real excited when you finally come back.
37. The ease of litter box training
While dogs need tons of training to get them to even do their business outside, cats can easily be trained to use a litter box. In fact, whenever you get a cat or kitten, they’re usually already litter box trained. And it isn’t easy to train them to do anything else, so why is this training so easy?
Well, it turns out that cats usually hide their excrement to hide the smell from predators and other cats. Soft dirt, sand, or litter are just very easy materials to cover their fecal “treasures” with. However, sometimes dominant cats in a group won’t cover their feces, as a way to mark their territory.
38. Your kitty, the climber
Cats love to climb and be up as high as possible, and they don’t care if that means ruining your screen door in the process or getting fur all over the kitchen counters. They prefer to be able to see their whole territory from up high, but it’s also in their instincts to climb for avoiding predators.
Plus, not only does climbing give cats a great vantage point over their territory, it also increases the area of their territory. Just think of your cat getting up on the bookshelf and thinking to itself, “Everything the light touches is my kingdom.”
39. The butt sniff
One of the grosser dog behaviors, from our perspective, is when they smell each other’s butts. But view it from the dog’s perspective and it isn’t quite so nasty, since they’re basically just introducing themselves to each other. On either side of the dog’s butt are glands that secrete a variety of chemicals.
These glands tell the sniffer about the gender and reproductive status of the dog, plus things about its diet, health, and emotional state. Dogs can smell 10,000 to 100,000 times better than humans, so they communicate using these chemical signals (aka smells). Dogs actually have an organ in their nose exclusively for smelling chemical communication.
40. Chewing and destruction
Chewing is one of the most annoying things dogs do, but it can have a variety of reasons. For puppies, it can relieve any pain they have from their incoming adult teeth. For adults, it keeps their teeth clean and jaws strong. But if your pup only chews when you’re not home, she might be having separation anxiety.
If your dog likes to lick and chew fabrics, she might have been weaned from mom too early. But there’s also the chance that your dog is chewing things because she’s hungry and wants more food. Wild dogs love to chew on bones for fun, stimulation, and to relieve anxiety, so it’s important to provide your pet dog with things to chew on. | <urn:uuid:226dc98f-60ac-4708-8fae-e160a28ff7f8> | CC-MAIN-2022-33 | https://www.science101.com/why-cats-and-dogs-do-the-weird-things-that-they-do | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00496.warc.gz | en | 0.964695 | 6,183 | 3.25 | 3 |
Author: Howells, William Dean, 1837-1920
Stories and Readings Selected From The Works of William Dean Howells
STORIES AND READINGS SELECTED FROM THE WORKS OF
WILLIAM DEAN HOWELLS
AND ARRANGED FOR SUPPLEMENTARY
READING IN ELEMENTARY SCHOOLS BY
DIRECTOR OF ENGLISH IN THE
ETHICAL CULTURE SCHOOL, NEW YORK
HARPER & BROTHERS PUBLISHERS
NEW YORK AND LONDON
HARPER’S MODERN SERIES
OF SUPPLEMENTARY READERS FOR THE ELEMENTARY SCHOOLS
Each, Illustrated, 16mo, 50 Cents School.
Stories and Readings Selected from the Works of William Dean Howells, and Arranged by Percival Chubb, Director of English in the Ethical Culture School, New York.
“The literary culture which we are trying to give our boys and girls is not sufficiently contemporaneous, and it is not sufficiently national and American….
“Among the living writers there is no one whose work has a more distinctively American savor than that of William Dean Howells…. The juvenile books of Mr. Howells’ contain some of the very best pages ever written for the enjoyment of young people.”—Percival Chubb.
(Others in Preparation.)
HARPER & BROTHERS, PUBLISHERS, NEW YORK
Copyright, 1909, by Harper & Brothers.
All rights reserved.
Published September, 1909.
|I. Adventures in a Boy’s Town|
|HOW PONY BAKER CAME PRETTY NEAR RUNNING OFF WITH A CIRCUS||3|
|THE CIRCUS MAGICIAN||13|
|JIM LEONARD’S HAIR-BREADTH ESCAPE||23|
|II. Life in a Boy’s Town|
|MANNERS AND CUSTOMS||64|
|III. Games and Pastimes|
|A MEAN TRICK||93|
|THE BUTLER GUARDS||103|
|IV. Glimpses of the Larger World|
|THE TRAVELLING CIRCUS||151|
|THE THEATRE COMES TO TOWN||168|
|THE WORLD OPENED BY BOOKS||171|
|V. The Last of a Boy’s Town||183|
|HE BEGAN BEING COLD AND STIFF WITH HER THE VERY NEXT MORNING||5|
|THE FIRST LOCK||43|
|THE BUTLER GUARDS||105|
|ALL AT ONCE THERE THE INDIANS WERE||127|
There are two conspicuous faults in the literary culture which we are trying to give to our boys and girls in our elementary and secondary schools: it is not sufficiently contemporaneous, and it is not sufficiently national and American. Hence it lacks vitality and actuality. So little of it is carried over into life because so little of it is interpretative of the life that is. It is associated too exclusively in the child’s mind with things dead and gone—with the Puritan world of Miles Standish, the Revolutionary days of Paul Revere, the Dutch epoch of Rip Van Winkle; or with not even this comparatively recent national interest, it takes the child back to the strange folk of the days of King Arthur and King Robert of Sicily, of Ivanhoe and the Ancient Mariner. Thus when the child leaves school his literary studies do not connect helpfully with those forms of literature with which—if he reads at all—he is most likely to be concerned: the short story, the sketch, and the popular essay of the magazines and newspapers; the new novel, or the plays which he may see at the theatre. He has not been interested in the writers of his own time, and has never been put in the way of the best contemporary fiction. Hence the ineffectualness and wastefulness of much of our school work: it does not lead forward into the life of to-day, nor help the young to judge intelligently of the popular books which later on will compete for their favor.
To be sure, not a little of the material used in our elementary schools is drawn from Longfellow, Whittier, and Holmes, from Irving and Hawthorne; but because it is often studied in a so-called thorough and, therefore, very deadly way—slowly and laboriously for drill, rather than briskly for pleasure—there is comparatively little of it read, and almost no sense gained of its being part of a national literature. In the high school, owing to the unfortunate domination of the college entrance requirements, the situation is not much better. Our students leave with a scant and hurried glimpse—if any glimpse at all—of Emerson, Thoreau, and Whitman, or of Lowell, Lanier, and Poe; with no intimate view of Hawthorne, our great classic; none at all of Parkman and Fiske, our historians; or of writers like Howells, James, and Cable, or Wilkins, Jewett, and Deland, and a worthy company of story-tellers.
We may well be on our guard against a vaunting nationalism. It retards our culture. There should be no confusion of the second-rate values of most of our American products with the supreme values of the greatest British classics. We may work, of course, toward an ultimate appreciation of these greatest things. We fail, however, in securing such appreciation because we have failed to enlist those forms of interest which vitalize and stimulate literary studies—above all, the patriotic or national interest. Concord and Cambridge should be dearer, as they are nearer, to the young American than even Stratford and Abbotsford; Hawthorne should be as familiar as Goldsmith; and Emerson, as Addison or Burke. Ordinarily it is not so; and we suffer the consequences in the failure of our youth to grasp the spiritual ideals and the distinctively American democratic spirit which find expression in the greatest work of our literary masters, Emerson and Whitman, Lowell and Lanier. Our culture and our nationalism both suffer thereby. Our literature suffers also, because we have not an instructed and interested public to encourage excellence.
Among the living writers there is no one whose work has a more distinctively American savor than that of William Dean Howells; and it is to make his delightful writings more widely known and more easily accessible that this volume of selections from his books for the young has been prepared as a reading-book for the elementary school. These juvenile books of Mr. Howells contain some of the very best pages ever written for the enjoyment of young people. His two books for boys—A Boy’s Town and The Flight of Pony Baker—rank with such favorites as Tom Sawyer and The Story of a Bad Boy.
These should be introductory to the best of Mr. Howells’ novels and essays in the high school; for Mr. Howells, it need scarcely be said, is one of our few masters of style: his style is as individual and distinguished as it is felicitous and delicate. More important still, from the educational point of view, he is one of our most modern writers: the spiritual issues and social problems of our age, which our older high-school pupils are anxious to deal with, are alive in his books. Our young people should know his Rise of Silas Lapham and A Hazard of New Fortunes, as well as his social and literary criticism. As stimulating and alluring a volume of selections may be made for high-school students as this volume will be, we venture to predict, for the younger boys and girls of the elementary school.
In this little book of readings we have made, we believe, an entirely legitimate and desirable use of the books named above. A Boy’s Town is a series of detachable pictures and episodes into which the boy—or the healthy girl who loves boys’ books—may dip, as the selections here given will, we believe, tempt him to do. The same is true of The Flight of Pony Baker. The volume is for class-room enjoyment; for happy hours of profitable reading—profitable, because happy. Much of it should be read aloud rather than silently, and dramatic justice be done to the scenes and conversations which have dramatic quality.
ADVENTURES IN A BOY’S TOWN
HOW PONY BAKER CAME PRETTY NEAR RUNNING OFF WITH A CIRCUS
Just before the circus came, about the end of July, something happened that made Pony mean to run off more than anything that ever was. His father and mother were coming home from a walk, in the evening; it was so hot nobody could stay in the house, and just as they were coming to the front steps Pony stole up behind them and tossed a snowball which he had got out of the garden at his mother, just for fun. The flower struck her very softly on her hair, for she had no bonnet on, and she gave a jump and a hollo that made Pony laugh; and then she caught him by the arm and boxed his ears.
“Oh, my goodness! It was you, was it, you good-for-nothing boy? I thought it was a bat!” she said, and she broke out crying and ran into the house, and would not mind his father, who was calling after her, “Lucy, Lucy, my dear child!”
Pony was crying, too, for he did not intend to frighten his mother, and when she took his fun as if he had done something wicked he did not know what to think. He stole off to bed, and he lay there crying in the dark and expecting that she would come to him, as she always did, to have him say that he was sorry when he had been wicked, or to tell him that she was sorry when she thought she had not been quite fair with him. But she did not come, and after a good while his father came and said: “Are you awake, Pony? I am sorry your mother misunderstood your fun. But you mustn’t mind it, dear boy. She’s not well, and she’s very nervous.”
“I don’t care!” Pony sobbed out. “She won’t have a chance to touch me again!” For he had made up his mind to run off with the circus which was coming the next Tuesday.
He turned his face away, sobbing, and his father, after standing by his bed a moment, went away without saying anything but “Don’t forget your prayers, Pony. You’ll feel differently in the morning, I hope.”
Pony fell asleep thinking how he would come back to the Boy’s Town with the circus when he was grown up, and when he came out in the ring riding three horses bareback he would see his father and mother and sisters in one of the lower seats. They would not know him, but he would know them, and he would send for them to come to the dressing-room, and would be very good to them, all but his mother; he would be very cold and stiff with her, though he would know that she was prouder of him than all the rest put together, and she would go away almost crying.
He began being cold and stiff with her the very next morning, although she was better than ever to him, and gave him waffles for breakfast with unsalted butter, and tried to pet him up. That whole day she kept trying to do things for him, but he would scarcely speak to her; and at night she came to him and said, “What makes you act so strangely, Pony? Are you offended with your mother?”
“Yes, I am!” said Pony, haughtily, and he twitched away from where she was sitting on the side of his bed, leaning over him.
“On account of last night, Pony?” she asked, softly.
“I reckon you know well enough,” said Pony, and he tried to be disgusted with her for being such a hypocrite, but he had to set his teeth hard, hard, or he would have broken down crying.
“If it’s for that, you mustn’t, Pony dear. You don’t know how you frightened me. When your snowball hit me, I felt sure it was a bat, and I’m so afraid of bats, you know. I didn’t mean to hurt my poor boy’s feelings so, and you mustn’t mind it any more, Pony.”
She stooped down and kissed him on the forehead, but he did not move or say anything; only, after that he felt more forgiving toward his mother. He made up his mind to be good to her along with the rest when he came back with the circus. But still he meant to run off with the circus. He did not see how he could do anything else, for he had told all the boys that day that he was going to do it; and when they just laughed, and said, “Oh yes. Think you can fool your grandmother! It’ll be like running off with the Indians,” Pony wagged his head, and said they would see whether it would or not, and offered to bet them what they dared.
The morning of the circus day all the fellows went out to the corporation line to meet the circus procession. There were ladies and knights, the first thing, riding on spotted horses; and then a band-chariot, all made up of swans and dragons. There were about twenty baggage-wagons; but before you got to them there was the greatest thing of all. It was a chariot drawn by twelve Shetland ponies, and it was shaped like a big shell, and around in the bottom of the shell there were little circus actors, boys and girls, dressed in their circus clothes, and they all looked exactly like fairies. They scarce seemed to see the fellows, as they ran alongside of their chariot, but Hen Billard and Archy Hawkins, who were always cutting up, got close enough to throw some peanuts to the circus boys, and some of the little circus girls laughed, and the driver looked around and cracked his whip at the fellows, and they all had to get out of the way then.
Jim Leonard said that the circus boys and girls were all stolen, and nobody was allowed to come close to them for fear they would try to send word to their friends. Some of the fellows did not believe it, and wanted to know how he knew it; and he said he read it in a paper; after that nobody could deny it. But he said that if you went with the circus men of your own free will they would treat you first-rate; only they would give you burnt brandy to keep you little; nothing else but burnt brandy would do it, but that would do it, sure.
Pony was scared at first when he heard that most of the circus fellows were stolen, but he thought if he went of his own accord he would be all right. Still, he did not feel so much like running off with the circus as he did before the circus came. He asked Jim Leonard whether the circus men made all the children drink burnt brandy; and Archy Hawkins and Hen Billard heard him ask, and began to mock him. They took him up between them, one by his arms and the other by the legs, and ran along with him, and kept saying, “Does it want to be a great big circus actor? Then it shall, so it shall,” and, “We’ll tell the circus men to be very careful of you, Pony dear!” till Pony wriggled himself loose and began to stone them.
After that they had to let him alone, for when a fellow began to stone you in the Boy’s Town you had to let him alone, unless you were going to whip him, and the fellows only wanted to have a little fun with Pony. But what they did made him all the more resolved to run away with the circus, just to show them.
He helped to carry water for the circus men’s horses, along with the boys who earned their admission that way. He had no need to do it, because his father was going to take him in, anyway; but Jim Leonard said it was the only way to get acquainted with the circus men. Still, Pony was afraid to speak to them, and he would not have said a word to any of them if it had not been for one of them speaking to him first, when he saw him come lugging a great pail of water, and bending far over on the right to balance it.
“That’s right,” the circus man said to Pony. “If you ever fell into that bucket you’d drown, sure.”
He was a big fellow, with funny eyes, and he had a white bulldog at his heels; and all the fellows said he was the one who guarded the outside of the tent when the circus began, and kept the boys from hooking in under the curtain.
Even then Pony would not have had the courage to say anything, but Jim Leonard was just behind him with another bucket of water, and he spoke up for him. “He wants to go with the circus.”
They both set down their buckets, and Pony felt himself turning pale when the circus man came toward them. “Wants to go with the circus, heigh? Let’s have a look at you.” He took Pony by the shoulders and turned him slowly round, and looked at his nice clothes, and took him by the chin. “Orphan?” he asked.
Pony did not know what to say, but Jim Leonard nodded; perhaps he did not know what to say, either; but Pony felt as if they had both told a lie.
“Parents living?” The circus man looked at Pony, and Pony had to say that they were.
He gasped out, “Yes,” so that you could scarcely hear him, and the circus man said:
“Well, that’s right. When we take an orphan, we want to have his parents living, so that we can go and ask them what sort of a boy he is.”
He looked at Pony in such a friendly, smiling way that Pony took courage to ask him whether they would want him to drink burnt brandy.
“To keep me little.”
“Oh, I see.” The circus man took off his hat and rubbed his forehead with a silk handkerchief, which he threw into the top of his hat before he put it on again. “No, I don’t know as we will. We’re rather short of giants just now. How would you like to drink a glass of elephant milk every morning and grow into an eight-footer?”
Pony said he didn’t know whether he would like to be quite so big; and then the circus man said perhaps he would rather go for an India-rubber man; that was what they called the contortionists in those days.
“Let’s feel of you again.” The circus man to | <urn:uuid:f422af14-c007-4434-bfb8-3849832cd36e> | CC-MAIN-2022-33 | https://www.book8848.com/boy-life-stories-and-readings-selected-from-the-works-of-william-dean-howells.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573399.40/warc/CC-MAIN-20220818185216-20220818215216-00096.warc.gz | en | 0.982992 | 4,378 | 2.59375 | 3 |
iconrdb.exe 是所有數位貨幣的一個大問題. 比特幣, Monero, 和其他人可能遭受這個木馬, and users won’;不知道, 通常情況下.
duckgo.exe 不發揮積極作用, 在您的 PC 的工作. 將所有類型的惡意軟體分類為特定類別, 這一個轉到廣告軟體. 短, 這是一個工具, 導致損害您的 PC 上的進程. 一個有趣的事實, 您可以通過您的工作管理員找到它所在的位置, 但如果沒有特別針對此應用程式, 就不可能消除這種威脅。. 這是一個重要的步驟, 因為 duckgo.exe 的活動旨在擾亂你的系統.
In computer sciences, Trojan refers to the virus or malware that injects the computer from behind and hides itself by attaching to a file. The sole purpose of inducing Trojan into a system is to leak private information of the user to a third party through a backdoor. 特洛伊木馬程式用於指從幕後攻擊的敵人, 並造成可能的損害. 不像其他電腦病毒, 特洛伊木馬程式並不一定要將自己附加到檔中來傷害它, 它甚至不復制自己. 特洛伊的工作是完全不同于其他同類.
- 特洛伊木馬程式可用於刪除受害者電腦中的資料. Mostly a Trojan is designed to interfere with the privacy but some of them are used to get rid of information that might be of huge value to the user. Once the operation is performed, the user won’t ever get to know about the activity.
- It is oft times used to block the access of some information. The users can try to get back to their data but they won’t be able to as long as they don’t get rid of the virus from the root.
- Copying data is not exactly seen as a danger but with Trojan inflicted, data keeps replicating itself to the point that it eats up the entire disk space.
Malicious Uses of a Trojan Virus:
- Leaking information to the third party: Trojans are attached to different emails that a person receives. The attacker then aims to target a computer system and receives information through networking. The virus keeps running in the background while the computer user is unaware of it. Security loopholes or merely bugs created in the programs make it easy for the attacker to induce Trojan inside the system.
- For anonymizer proxy: some Trojans take advantage of the security flaw in the browser and use the host computer as a Trojan to sweep into the browser and effectively hide the internet usage. The internet service provider or even the government would not be able to spy on your internet activity. This can be used for ethical purposes as well but most of the time, the technology is being used to promote illegal activities without the government supervision.
- Covering the Tracks: as the name of the virus itself suggests, the malware can be used to carry out a certain activity then cover the tracks later on. The host computer would not be able to figure out the activities that have been done through the computer. The user of the computer would have no record of the preceding actions.
- Advertising services: while most of the cyber marketing company think of the software as an ethical one. It is still consider one of the biggest violation to user’s privacy. The Trojan starts keeping tracks of the user’s navigation on the internet and produces advertisement on the website that will surely grab the user’s attention. Trojans have a relationship with the worms as they spread through the internet.
This is why Trojans account for 83 percent of malwares in the world.
How to keep your computer protected from the Computerized Trojans?
Malware is a wide range of computer viruses that harm the computer in one way or the other, this is why it is imperative to safeguard your computer from all the possible malware threats. These viruses are made for different malicious intents. Unlike legitimate software, Trojans are inflicted into your computer without your consent.
A Trojan horse hides its identity by masking itself as an important utility to the computer. It pretends to be something useful. A number of software applications are made that report of a possible backdoor Trojan that is put into the computer. 這些軟體也讓你知道任何可疑的未報告的活動發生在系統內, 你沒有意識到. 此技術能夠檢測代碼中的任何安全錯誤.
當蠕蟲在電腦內部被引入時, 它會將自己附加到自動運行檔中, 使系統及其特定程式在未經使用者同意的情況下運行. 然後蠕蟲開始在整個網路中掃描. It is necessary to eliminate the Trojan in the initial stages before it spreads infinitely.
- The first thing that you need to do is disable the auto run in your computer so that the virus does not get executed on its own.
- Then search through all the roots of devices and drives attached including the internal and external ones. You will even need to look through all the flash drives.
- Once you know what auto run files are there in the computer copy them in the notepad. You will have to go through the entire programming to see if there are any lines starting from “label” or “shell execute”.
- Select that specific auto run file and delete it from the drive.
- Repeat the same steps for all the drives and get rid of any viruses that execute on auto run basis.
If you are sure that there is no virus left, then you can take a second check from the antivirus software. It is easy for antiviruses to run the check and go through the drives within a few seconds. 蠕蟲通常在您按一下不安全連結時通過 internet 傳播. 那麼, 如何確保您要查看的連結足夠安全?
- 連結縮短是一種方法, 愚弄使用者進入跟蹤. 連結縮短是一個市場的惡意軟體製造商使用隱藏的連結的真正命運, 所以如果你是非常擔心什麼網站, 你要結束了, 那麼它是安全的不點擊縮短的連結.
- If you receive an email that asks you confirm your identity and approve of the information provided then it is most likely to be a spam. These emails come in solstice forms making it look like they have been received from the emails. So if you receive an email from a bank or some other reliable source asking for some private information then it is only safe to get it rechecked from the bank itself.
- Hackers are smart enough to change the URL coding. They change the URL to tiny bits making it look like they are destined to some place safe. Hackers can mask any link through this strategy.
Now that you know how to determine a suspicious link, what can make you sure that the link is safe?
There are a number of tools available to check security of the link before actually clicking on it. For example Norton Safeweb, URLvoid and Scan URL. Always take advantage of the real time scanning options provided by the antimalware software systems. If you have installed an antimalware software, then you need to keep it updated as well. if antivirus software is not up to the mark then it won’t be able to provide defense against the newly made viruses including the new threats that keep on coming with them. Install a strong antimalware but do look for a second opinion by installing another one.
Your computer is always vulnerable to attacks. Install a network firewall system that makes your computer practically invisible to the hackers. Hackers use internet and port scanning tools to make their way into the computer. Even if you have masked your computer from the online threats, hackers might still find a way through any ports that you have left unattended and open.
Different types of computer security threats
There are many types of computer threat software that are harmful and harmless yet annoying. Some viruses are so designed that they can force you to buy a huge amount of money and empty your bank account.
- Scareware: scareware is a kind of Trojan that does not necessarily harm the computer but disguise themselves in such a way that it looks like a threat to the computer and forces the user to buy expensive malwares for absolutely nothing. These malwares that are purchased might be harmful themselves.
- Keylogger: Keylogger is basically a powerful sub-function of the Trojan. It detents all your keystrokes and deliver the information to the third party. It can even lead people to lose their important credentials and passwords to the third party they are not even aware of. This virus can be inflicted independently or might be mixed with a powerful Trojan.
- Mouse trapping: mouse trapping is a lot like keylogging. The software monitors the navigational movements you make with the mouse.
- 後門: backdoor is not really a malware but is a form of method where once the backdoor is installed it will start leaking your information without your knowledge. This is basically created in a code wherever a bug is detected.
- Exploit: exploit is the kind of malware that is specifically designed to attach only the vulnerable parts of the system. So if your internet security is weak, exploit is most likely to target that part. The way to avoid an exploit is to always patch your software. The Exploit provide an open welcome to the Trojans.
- Botnet: botnet is something which is installed by a bot master to take control of all the computer bots. It mostly infects the viruses already upon the disks or accelerate an already there Trojan infection.
- 網絡釣魚: phishing is the kind of software that is made to look like it works for the antimalware but in reality harms the computer in ways you would not expect. Phishing tricks you into buying the AV software that you don’t really need. This kind of service make tremendous amount of money from the users afraid of getting their computer infected.
- 瀏覽器劫持者: a browser hijacker uses the support of Trojan software to take control of the victim’s web browsing sessions. 這可能是非常危險的, 特別是當有人試圖通過互聯網進行交易. 訪問社交媒體網站會導致駭客的浪費範圍.
- Social engineering Attacks:
- Mobile Threats:
- Data Breaches:
隨著管理需求的增加和客戶的快速增長, 預計電腦就業率將會上升. AI 正被業務廣泛使用. 你使用機器人越多, 網路安全的風險就越高. The big risk here is that the demand for rapid transactions and payment systems is being made. Computers are mostly hired for fraud detection so that no fake transaction passes by.
Social engineering and phishing continue to be the most profitable kind of attacks and a rising sophisticated threat is expected to take place because of the profitable individuals working behind the gates. These attacks are engineered prior to sending out a Trojan.
Ever more online activity now takes place via a cell phone. Cell phone has more private data then a desktop computer. Using internet from cell phones make it a welcome threat for cyber criminals to make their way into your private life. They can monitor each and every activity of yours. From keystrokes to online activity, even the transactions you make via a cell phone. The most successfully made malwares are first tested on cell phones. It is so much easier to spread a worm through networking cell phones.
資料違反的常見概念是侵入電腦系統並查看您不應該. 資訊被竊取例如信用卡數位, PIN 或銀行帳號. 對於駭客來說, 它是一個額外的好處, 當特洛伊木馬程式可以查看該資訊在一個單一的攻擊. 一個強大的特洛伊木馬程式直接影響到電腦的易受攻擊的安全性, 這是當資料違反可以啟動.
隨著越來越多的金融行業依靠虛擬貨幣, these businesses are on a threat of getting hit by serious financial crisis. We already see hackers making headlines by targeting these currencies. There are some solutions capable of detecting a fraud coming near these currency. The companies can get back to the traditional methods of physical payments but even the transactions made are not safe from Trojans.
There is a network of cyber criminals working underground to provide services to different renowned organizations to achieve unethical means. These underground criminals attack from behind the curtains and get paid for it. The clients specify the kind of threat they want to inflict upon the victim and the cyber criminals start working to create the Trojan.
MyPrintScreen ads are the reason of a slew of MyPrintScreen adverts, 通常被命名為 Ads by MyPrintScreen, brought by MyPrintScreen 或 powered by MyPrintScreen. 許多互聯網使用者可能會發誓, 他們已經看到這樣的廣告往往. 有一個簡短的指導如何刪除這個惱人的問題, 從您的 PC.
的 search.yourtransitinfonow.com 是一個看起來像正常和合法的搜尋引擎的網頁. 如果這是您的電腦上, 那就意味著您的瀏覽器感染了劫機者. 劫機者本頁安裝以及任何人都可以從 Internet 上各種陰暗和可疑網站下載的免費軟體. 幸運的是你, search.yourtransitinfonow.com is not malicious by itself and it will not harm your computer directly. 然而, 它將改變一些您的瀏覽器設置並嘗試將您重定向到贊助商的網站,而不是搜尋結果將出現. 另外, 它可能促進為別的東西掩蓋的惡意軟體的惡意連結將您重定向. 這個瀏覽器劫持者存在從廣告賺取了利潤和搜索查詢收集資料的目的.
qc64.exe 礦工是一種惡意軟體. It aims to steal user’;s 金融. 它縈繞著像比特幣這樣的數位貨幣, Monero, DarkNetCoin, 和其他人. 你開始遭受這種威脅的情況下, 電腦感染. 因此, 特定的反惡意軟體公用事業將在要求. 讓你的錢安全, 您將不得不重新考慮您的線上行動和照顧安全級別. 達到最明顯的穿透方式. 讓我們做一個快速檢查您的電腦有一個適當的防病毒.
你厭倦了面對 Search.hmyquickconverter.com 病毒在您的主頁上的所有時間? 以及, 毫無疑問, 你的電腦是現在的麻煩, 你需要得到它的固定. 特別是你的瀏覽器需要一個體面的修復, since appearance of Search.hmyquickconverter.com on its startup and redirections through 鉻, 火狐, IE, Edge 對您的個人資料的安全性不太好.
My System Mechanic 聲稱是一個有益的免費應用, but as My System Mechanic is a real adware and a potentially unwanted program, 它從未是真實的它將説明您在您的網上活動. 什麼是廣告? 它屬於病毒的一部分, 它通常是惱火和危險的. 然而, 它能夠破壞你的整個電腦系統.
svchose.exe trojan haunts digital currencies almost discreetly. 比特幣, Monero, DarkNetCoin, and other crypto investments slip through the fingers of their owners when Trojan Horse comes into play. Nowadays this is one of the most popular tricks to steal your money details and then get access to other finances.
d2x49uy6lwy9ax.cloudfront.net pop-up windows are quite scary especially for those computer users who see them for the first time and do not realize that they’;再假. The intention of these pop-ups is to persuade you that your computer lacks something important –; such as certain necessary driver or software to remove viruses that can’;不會被您當前的防毒軟體刪除. 無論快顯視窗實際上告訴你, note that all such information is not true and is simply used by cyber frauds to persuade you to download and install certain software that your system doesn’;真的不需要.
main.exosrv.com 快顯視窗將反復和隨機重定向您的瀏覽器到其他網站的巨大變化. 其中一些可能是完全可以接受的, 而這些領域的很大一部分將是非常危險的. main.exosrv.com domain is therefore used as an intermediary between your browser and plenty of other third-party resources to which you may be forwarded. 然而, occurrence of these pop-ups and subsequent redirections doesn’;t 單獨發生, 沒有任何理由. 您的電腦很可能感染了廣告軟體或 PUP (可能有害的程序) 目前在永久性的基礎上噴出這樣的快顯視窗.
ssl.safepoollink.com 警報可以將瀏覽器重定向到具有廣告資源的網站. 說實話, 你幾乎找不到有用的資訊, 而惡意專案肯定會提出. 將導致電腦感染難以跟蹤和刪除. ssl.safepoollink.com domain constructs a link between your browser and shady sites. 注意看任何定向, 它們表明病毒侵入. 廣告軟體或 PUP (可能有害的程序) 已出現在您的系統中, 並開始生成廣告資訊. | <urn:uuid:af50a6bb-9ae2-43e1-8cf6-6b407460a2b1> | CC-MAIN-2017-51 | http://trojan-killer.net/zh-tw/news/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948530668.28/warc/CC-MAIN-20171213182224-20171213202224-00528.warc.gz | en | 0.833529 | 5,981 | 2.703125 | 3 |
The New Zealand Wars: A History of the Maori Campaigns and the Pioneering Period: Volume II: The Hauhau Wars, (1864–72)
Chapter 35: DEFEAT OF TE KOOTI
THE HARASSING AND indecisive character of the campaign against Te Kooti in the early part of 1870 was relieved by a truly brilliant action on the part of that most gallant young officer Lieutenant Gilbert Mair, a deed rewarded by a captaincy and the decoration of the New Zealand Cross. Mair's running engagement with the Hauhaus, fought in the neighbourhood of Rotorua on the 7th February, not only saved the Arawa people at Ohinemutu from massacre in the absence of most of their fighting-men, but deprived Te Kooti of some of his best warriors, and inflicted so severe a blow that he never again risked a battle in the open.
Te Kooti had formally kanga'd, or pronounced a curse upon, the Arawa for their unswerving adherence to the Government and their persistent pursuit of himself and his band. He had also announced that his atua would deliver them into his hand, and that he would “hew them in pieces.” It is true that his strong supporters, the Urewera, including the chiefs Te Pukenui, Te Whenuanui, Paerau, Te Ahikaiata, and others of Tuhoe, who accompanied him on this campaign, were related to the Rotorua tribes, but it is not to be supposed that any intercession on their part would have saved the Arawa from his wrath. Moreover, only a few months previously their own country had been invaded by an Arawa contingent with Colonel Whitmore's force, and their villages and cultivations had been devastated by men led by the very chiefs most nearly related to Tuhoe. Not a whare had been left standing or a potato-store unspoiled in the mountain settlements.
The rebel chieftain's movements after his attack on McDonnell's camp at Tapapa were intended to throw his pursuers off his trail and disguise his matured intention to attack Rotorua. His scheme was cleverly laid. He sent one of his chiefs with a small force off northward, and he himself made in that direction, inducing the belief, as he intended, and indeed announced, that he was making for the Ohinemuri district. After a skirmish in the forest behind Okauia he made a sudden deflection to his right, page 388 and caused it to be known that his objective was Tauranga. This threat induced Colonel James Fraser to leave Rotorua unguarded and make his ill-managed expedition in to Paengaroa. Fraser ordered Lieutenant Mair to go to McDonnell's assistance at Tapapa, neglecting Rotorua, which was Te Kooti's real objective. Mair, much against his own judgment, had to report to McDonnell at Tapapa, all the time knowing that the Arawa settlements were in grave danger, most of the fighting-men being away in one or other of the forces. Mair had two hundred and fifty men with him. He strongly urged the necessity for returning to Rotorua at once, and on the morning of the 6th McDonnell and Commissioner Brannigan consented to his departure; and he slipped away at once with a smaller number of men than he had taken there. The forest track between Tapapa and the lakes had not been trodden for many years, having been tauarai'd, or closed to war-parties, and the trail consequently was so overgrown and jungly that rapid marching was impossible. It was night before the Arawa column reached Te Ara-piripiri, near the edge of the great forest of the Mamaku plateau above the Rotorua lake-basin, and camped at the source of the Waiteti River. No fires were lit, lest the Hauhaus should discover them. The men's supper was a pannikin of water each, with a little sugar in it, and some biscuit.
At daylight next morning (7th February) the march was resumed, and Te Kooti's trail was picked up at the edge of the bush. It was only by accident that it was observed, for the cautious rebels had jumped across the track, one after the other, so as to leave no trace of their passage. Then it became clear that Te Kooti's movements in the forest country of the Hautere highland had all been designed with the object of drawing off the military forces from the Rotorua district, and Mair's men were frantically anxious to reach their homes and families in time to avert the impending ruthless blow. Just before the enemy's trail was discovered Mair had detailed his Arawa for duty in the following order: Fifty men of Ngati-Pikiao, under Ieni Tapihana (Hans Tapsell, son of the old Danish trader of that name at Maketu), to guard the Kaharoa and Taheke tracks; fifty men to guard the country north of Rotorua Lake from Puhirua to Waerenga and the Ohau channel; a small party to patrol the Roria Road through to Hamaria; and Ngati-Rangitihi and Tuhourangi Tribes to take post at Pari-karangi and patrol as far as the face of Horohoro Mountain. It was now a rush to get to Rotorua and forestall the desperate foe. Mair had just reached the level near the lake when a messenger came with news of the capture by the Arawa of the deserter Louis Baker (a French Canadian, lately a stoker in H.M.S. “Rosario”), who had been with Kereopa in 1865 and who afterwards joined page 389 Te Kooti. The rebel leader was on the range at Paparata above Rotorua and delayed descending to Ohinemutu until he had received a reply to a letter he sent to the Arawa chiefs by Baker (who had signed the Tuhoe chiefs' names) promising peace. Mair immediately made his dispositions to attack. He ordered the men up to Ohinemutu from Puhirua, and sent the Tuhourangi and Ngati-Rangitihi off at their utmost speed to Pari-karangi, to guard the track which came out there from the wooded ranges of Te Raho-o-te-Rangipiere and Paparata, in order to block Te Kooti's attempt to reach the Kaingaroa by that route.* Mair himself with all available men dashed off for Ohinemutu: he and his force were on the run the whole way.
Te Kooti meanwhile had emerged from the bush with his whole force, about two hundred armed men besides some women, and surprised a party of Ngati-Whakaue women and girls who were out gathering potatoes in a cultivation on the edge of the bush on the Tihi-o-Tonga slopes, south-west of Rotorua. Kiri-Matao (afterwards locally celebrated as “The Duchess”) and some other women were captured, but most of them made their escape, although fired upon.
* Captain Mair writes as follows (22nd February, 1923) in reference to the share of the Tuhourangi Tribe in the day's work:—
“The Tuhourangi made straight for Pari-karangi to protect their women and children there, and had they tried to cut off Te Kooti the moment they heard our firing at the Hemo Gorge he would have been badly mauled; but their principal man, Te Konui, persuaded them against getting athwart Te Kooti's track, lest he take the short, straight route through Te Wairoa and between Tarawera Lake and Rotomahana, and destroy the Wairoa Village en route. Had Te Kooti been so deflected, the Tuhourangi villages at Epeha, Te Wairoa, Moura, and Te Ariki could all have been destroyed, and he would have marched over Puke-kaikahu and Te Kai-whatiwhati, two famous battle-grounds where Tuhourangi and Ngati-Rangitihi were destroyed a hundred years ago. Maoris will never fight over a ground where they have once been defeated. I remember how hard it was to get some of them to come up to the scratch at the little Koutu fight (1867), on account of their sanguinary defeat by Te Waharoa on the 7th August, 1836. Neither of these defeats had ever been avenged.”
Photo about 1880]
Captain Gilbert Mair, N.Z.C.
The New Zealand Gazette (1st April, 1886) announcing the award of the New Zealand Cross to Captain Mair for his distinguished bravery in the fight of the 7th February, 1870, stated: “During this engagement which lasted many hours, Captain Mair, by personal example and devoted gallantry, inspired his men to come to hand-to-hand conflict with Te Kooti's rearguard, himself killing the notorious Peka McLean, and driving the rest before him in disorder.”
In decorating Captain Mair with the Cross at a Volunteer parade in Wellington (1887) Major-General Whitmore said: “New Zealand has the proud distinction, not enjoyed by any other of Her Majesty's colonies, of having this honourable order of valour to bestow on her citizens for brave deeds such as were performed by yourself. By Royal Warrant Her Majesty was graciously pleased to direct that this decoration should rank equal to her own Victoria Cross, and next in precedence. The particular action for which the New Zealand Cross has been awarded to you was the turning-point in the war, and but for your gallant conduct on that occasion Te Kooti and the rebels under his command would have long continued their career of bloodshed.”
The enemy column now hastily retreated southward over the hill on the west side of the Hemo Gorge, passing through a bush called Te Karaka, on the summit of the ridge which trends along to the Tihi-o-Tonga. They then crossed the Puarenga Stream and followed up the valley parallel with the Wai-korowhiti. From here they struck in to the south side of the Waitaruna Stream and traversed a long level wiwi-covered valley called Te Wai-a-Urewera, which leads down into the Tahuna-a-Tara River. Thence they retreated across the Kapenga Plain and over some rough ground to the base of Tumunui Mountain. All this way they were hotly pursued by the gallant little band of Arawa led by Mair, who sometimes found himself so far in advance that only two or three of his men could come to his support. The black-bearded chieftain galloped about the plain in advance, shouting to his followers and waving his revolver. He wore a grey shirt, riding trousers, and high boots, and a bandit-like hat. In high contrast were his soldiery—a half-naked body of savages, whose brown skins glistened in the warm sunshine as if they had been oiled. They had that day killed a number of pigs, and many of them had greased their bodies well with pork-fat in anticipation of a running fight through the clinging fern and manuka. The clothing worn was in most cases a shawl or piece of blanket or a flax mat round the waist. Each man wore cartridge-belts—some had three or four—buckled round him; some were armed with revolvers as well as breech-loading rifles, carbines, or single- and double-barrel shot-guns. The first Hauhaus killed in the pursuit were shot east of the Puarenga, just after passing the Hemo Gorge; some distance farther on one or two more were killed, and near Ngapuketurua (opposite page 392 Owhinau Hill) several were shot. At every knoll or ridge Peka Makarini and a detachment of the rearguard turned and made a stand, or laid an ambuscade, and once or twice they charged determinedly with clubbed rifles. It was only Mair's personal coolness and accurate shooting that saved his Arawa party, who were greatly outnumbered by the Hauhaus. At Ngapuketurua, six miles from Rotorua, the principal encounter took place. The spot on which this fight occurred is a long steep ridge or tableland rising directly above the Wai-taruna Stream; the present main road to Waiotapu and Taupo runs on the opposite (north) side of the small river. There is an old crossing, called Te Kauaka, over the Wai-taruna at this point. Mair was considerably in advance of his men here, and as he ran he was heavily fired on, under cover of the scrub and the uneven ground. He knelt down and fired ahead and right and left, and presently a few of his men came up and joined in the combat. It was here that Mair shattered Timoti te Kaka's jaw with an expanding bullet from his Westley-Richards carbine. Seven Hauhaus were shot dead. Some of those killed were tumbled down by their comrades into the crater-like depression in the north side of the ridge at Kauaka, a short distance from the stream; these saucer-shaped hollows are formed by springs, and the green growth masks a morass. In one swampy depression Lieutenant Mair, running on in chase of the Hauhau rearguard, suddenly noticed the corner of an embroidered Maori mat showing above the muddy ooze. He stopped and hauled on it, and in doing so dragged up a big Hauhau, still gasping for breath. 
He had fallen mortally wounded in the rushes a few moments previously, and his comrades, thinking him dead, had hastily trodden him down underneath the surface of the swamp, in order to conceal his body from the Arawa.
The scene of the Ngapuketurua or Te Kauaka encounter, where a track from the ridge to the creek descends a steep bare ridge between two of the hollows mentioned, can be seen from the main road, less than 200 yards away. A short distance eastward the ridge rises to a height of about 300 feet, crowned by an ancient trenched and walled pa: this is called Kuharua. Just below it on the north two small spurs slope down and converge, and enclose a kind of saucer with steep sides. Below, again, there is a narrow gorge called Whaowhaotaha, its sides covered thickly with tutu and fern; through this gorge runs a small tributary of the Wai-taruna. Near a waterfall here Mair and his men two days afterwards found Te Kaka nursing his shattered jaw. Above this spot the main road runs along the winding valley known to the old Maoris as Te Mania-ia-tote. On the left are the slopes of Owhinau plantation, golden with young larches. The upper page 393 part of the Wai-taruna Stream is here known as the Hine-uia. Round its head (at about eight miles from Rotorua) goes an overgrown old track striking southward; this was Mair's packhorse track in the early “seventies” to the redoubt at Te Niho-o-te-Kiore, on the Waikato River near Atiamuri.
After the repulse on the ridge above Te Kauaka—in this sharp affair Mair fired eleven shots—the Hauhaus turned to the right and made direct for the shelter of Tumunui Mountain, across the plain and valleys of Te Kapenga, passing about three miles on the south side of the Pakaraka native settlement. The pursuit continued relentlessly, Mair running ahead of his men and firing whenever a good chance offered. He had twenty-five or thirty men here, as opposed to at least double that number in Te Kooti's rearguard. The Tuhourangi and Ngati-Rangitihi men came up near Pakaraka, but instead of taking the enemy in front or flank they joined Mair's party in the rear. The Hauhaus travelled so fast that only the athletic Mair and a few of his strong runners could keep up with them, and by making a short stand at every suitable spot they were enabled to keep their women in the advance and lead off the wounded.
Photo by Mr. Munday, 1870]
Captain Mair and some of His Arawa Soldiers
The Hauhaus, after travelling hastily up the forested gully on the north of Tumunui, retreated direct for the Kaingaroa and the Urewera Country. Crossing the Waikorua Valley (Earthquake Flat) and passing the Pareheru bush, they took a trail on the north side of Maunga-kakaramea (the sharp-topped height called Rainbow Mountain), and camped for the night on the northern side of Lake Okaro. Mair, after a visit to his camp for food and ammunition, followed the Hauhaus up in the night, and at 2 o'clock in the morning he found their camp. He had only nine men with him. Creeping up as near as he could to the camp, he gave them a volley. The Hauhaus fled in confusion, leaving behind them some guns and many swags of clothing and food.
Mair had sixty rounds of ammunition in his pouches when the day's action began. When it ended he had only two cartridges left. His war-path uniform consisted of woollen shirt, blue tunic, knickerbockers, long stockings, and a short waist-shawl, Maori fashion. He had marvellous escapes from death in the close-range fighting, but his only wounds were lacerated legs from the hard run through the fern and manuka. For this day's good work he received his captaincy and (in 1886) the decoration of the New Zealand Cross for personal valour in the field.
About twenty Hauhaus were shot in the running fight. On the Arawa side Te Waaka was mortally wounded, and Tame Karanama, a young man of Tuhourangi, had his knee shattered by a ball. Three others were wounded.
* Tohe was one of Captain Mair's most active young soldiers in the running fight described in this chapter. He served for several years in the Arawa contingents operating against the Hauhaus in the Bay of Plenty, Taupo, and the Urewera Country.
The Arawa displayed great satisfaction at the death of Te Kooti's most notorious lieutenant. Two or three days after the fight they dragged Peka's body down at a horse's tail from Tumunui to the Kapenga and tied it upright to a tall cabbage-tree. There it remained all that summer, desiccated to a mummy by the dry, hot weather of the plains.
Two days after the fight Mair and his men discovered the wounded Hauhau chief Timoti te Kaka in the Whaowhao-taha gully, near the little waterfall on the stream which flows into the Waitaruna. Te Kaka was suffering agony from his shattered jaw; he had contrived to pound up some flax-root and make a dressing of it, which he had tied under his terrible wound. Mair gave the man in charge of one of his Arawa soldiers, and ordered him to take him to the camp at Kaiteriria; he then continued his search for dead and wounded. When the man returned to the camp he had no prisoner. He said that after going a little distance Te Kaka refused to walk any further and wanted his captor to carry him on his back. The dispute was ended by the Arawa shooting his prisoner dead. In punishment, Mair fined the man several months’ pay and dismissed him from the force. This Timoti te Kaka was one of the most ruthless and thoroughly barbarous of Te Kooti's desperadoes. His was a remarkable reversion to primal savagery under the influence of a fanatic impulse. He had been one of Mr. Volkner's deacons or Church teachers at Opotiki, and for some time strenuously opposed the onsweep of Pai-marire. But at last he became a convert to the gospel of fire and sword, and after sharing in the murder of his old pastor he plunged into the rebellion. He was one of Te Kooti's “butchers” told off to slaughter prisoners and mutilate them with swords and tomahawks.
Among the Hauhaus wounded at the Kauaka, opposite Owhinau, was Kewene, an old soldier-of-fortune of the Ngati-Porou Tribe, from Mataura, on the Coromandel Peninsula. He had been page 397 on the war-path ever since 1863, when he fought in the Waikato War; he was reputed to have led the attack on the Trust family at Mangemangeroa, near Howick, in that year. He also served in the defence of the Gate pa. Mair shot out one of his eyes.
Following up the enemy's trail on the 10th February Mair took a small party of men across country to the Okaro and Rerewhakaitu Lakes, and finding that the tracks of Te Kooti's force led in the direction of the Kaingaroa Plain and Motumako, near the Rangitaiki Valley, he returned to Kaiteriria. Te Kooti had gone through to Ahi-kereru and thence to Ruatahuna.
This decisive defeat of Te Kooti was most creditable to Mair (or “Tawa,” as he was universally known among the natives) and to the handful of men of the Arawa who supported him in the arduous and exhausting chase. The Arawa soldiers whom Mair reported as having behaved particularly well were: Kiharoa, Tohe te Matehaere, Te Raika Metai, Hie, Hori, Te Waka, Marino, Tari, Taekata, Te Waiehi, Hakana, and Tupara Tokoaitua. “I hope,” he wrote, “the Government will feel satisfied with the effort these men have made; and had they only been supported by the others, the enemy would have suffered more severely. With the small force under my command it was impossible to guard every point. The enemy mustered at least two hundred fighting-men, well trained and accustomed to fighting, while I was never able to get up to him with more than forty.”
An incident of the rebels' retreat to the Rangitaiki was a highly plucky exploit on the part of a man of the Ngati-Manawa Tribe named Tiwha te Rangi-kaheke, who with his wife, Hera Peka, was living at Motumako. When Te Kooti continued his retreat along the old war-trail past Lake Rerewhakaitu leading to the Rangitaiki River near what is now known as Galatea, he detailed a large party of his mounted men to visit Motumako—which is a settlement on the edge of the Kaingaroa Plain near a small bush three miles south-west of Galatea—in order to obtain pigs and potatoes, as the force was in great need of food. Tiwha was the only man in Motumako capable of bearing arms; there were a number of old women and young children in the village. Tiwha and his brave wife sallied out with their guns to meet the enemy, and by rapid firing and the use of shouted derisive epithets they gave the Hauhaus the desired impression that there was a strong force under cover on the low hills. These energetic and skilful tactics were successful. Te Kooti's men drew off, and the column moved down to the Rangitaiki and crossed the river at Te Taupaki ford. The column was heading for the Horomanga Gorge when the dauntless Tiwha boldly showed himself on the opposite (west) bank. Te Kooti ordered some of his men to recross the Rangitaiki and kill the Ngati-Manawa warrior, but page 398 Tiwha made such accurate shooting with his old “Brown Bess” that the Hauhaus would not face the crossing. Te Kooti ordered the retreat to be resumed, and marched off for the Urewera Mountains looming a few miles away, while Tiwha triumphantly made demonstration of his contempt for the enemy that could be routed so easily, and danced his war-dance on the bank before returning to the little settlement he had saved from destruction. This gallant Maori had been badly wounded in 1867 in the engagement at Te Koutu, Rotorua, between Gilbert Mair's Arawa and the war-party of Hauhaus from the Waikato.
The month of March, 1870, saw a new policy initiated in the field operations against Te Kooti and Kereopa and their followers in rebellion. The Taupo-Patetere campaign was the last in which the Armed Constabulary were engaged in the expeditions in chase of the Hauhaus. The Government decided that future work in the bush could best be carried on by bodies of Maori troops under a few European officers and their own chiefs, such as Ropata Wahawaha and Kepa te Rangihiwinui, and the duties of the Constabulary were confined to the garrisoning of the various redoubts in the disturbed territory and the guarding and maintenance of lines of communication.
After Te Kooti had been driven out of the Hautere forests and the Rotorua country the Government forces were moved to Matata with the intention of working against Te Kooti simultaneously with the advance of the Ngati-Porou, under Major Ropata and Captain Porter, from the Poverty Bay side. Operations from the Bay of Plenty side were to be conducted by way of Waimana or Ahi-kereru and Ruatahuna. The Wanganui natives, under Major Kepa, had moved to Ohiwa, and Colonel McDonnell went from there to Opotiki to interview the Defence Minister, Mr. (afterwards Sir Donald) McLean. Captain Preece was instructed to go to Tarawera and then on to Fort Galatea with a body of Arawa, and, as soon as a column arrived, to make a movement on the Urewera through Ahi-kereru. Soon Preece was ordered back to Tarawera, and then to Te Teko. It had then been decided by Mr. McLean to relieve Colonel McDonnell of his command. The field force of Armed Constabulary was sent to occupy a line of posts at Taupo and several points on the Bay of Plenty.
The country between Rotorua and Tumunui Mountain over which Captain Gilbert Mair fought his gallant running battle with Te Kooti's force in February, 1870, was traversed by Mair and myself on horseback on the 7th and 13th December, 1918. Much of it was very difficult to travel, page 399 for the reason that the plains and hills, clothed chiefly in short wiwi grass fifty years ago, were now densely overgrown with manuka and high fern, and the old tracks were in places impenetrable. The route of Mair's chase of the Hauhaus is parallel with the present main road from Rotorua Town to Waiotapu, and at one point, opposite Owhinau Hill, in the State forest reserve, it closely impinges on the road, from which it is separated only by the Wai-taruna Stream. As we rode along, picking our way through the scrub and crossing swampy gullies, Captain Mair pointed out the spots where he and his men from time to time dropped some of the Hauhau rearguard, where ambuscades were laid, where desperate rushes were made by Peka Makarini and his fellow-rebels, to give time for the main body to retreat, and where Timoti te Kaka and other desperadoes were shot. The final scene was near the foot of the Tumunui cliffs.
The Kapenga tableland over which we travelled along the old fighting-trail, a gully-seamed broken plateau, is covered with a thick growth of manuka and monoao shrubs, tutu, and fern, with many ti or cabbage trees and tall flax in the gullies and swamps. Another shrub growing in abundance is the handsome flowering-plant called by the Maoris hukihuki-raho, because of the obstruction it offers to travellers on foot. In olden days the Kapenga Plain was celebrated for its special quality of harakeke (flax), much used in making strong, tough ihupuni, or war-mats, which were worn as a kind of armour in hand-to-hand battles. At the time of the fight in 1870 its clothing of vegetation on the open parts was chiefly wiwi grass and fern.
(See sketch-map and Captain Mair's narrative in Appendices.)page 400
This view of Waikare-moana is a drawing by W. H. Burgoyne, in 1869, during the first military expedition to the lake. Lieut.-Colonel Herrick's camp at Onepoto is shown in the foreground. On the opposite side of the lake, at the entrance to the northern arm, are the Hauhau strongholds and villages Matuahu, Whakaari, and Tikitiki.
(See map at end of book.) | <urn:uuid:b6134b8c-4b29-4ae4-a8a1-6112b52ecf8e> | CC-MAIN-2022-33 | https://nzetc.victoria.ac.nz/tm/scholarly/tei-Cow02NewZ-c35.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00497.warc.gz | en | 0.98205 | 6,400 | 2.765625 | 3 |
Picking a favorite chapter of your book is a lot like choosing a favorite child. Yet when I was writing the chapter on frames in the Depth and Complexity book, I must admit the chapter on frames was one I truly enjoyed writing.
I love Depth and Complexity frames for the strength of critical thinking, for the way that they help teachers differentiate instruction, and for their incredible flexibility.
One of my favorite uses for depth and frames is to help students truly gain an ownership and understanding of the vocabulary word.
Let’s look at how you do this step-by-step.
? How to Teach Vocabulary with Depth and Complexity Frames
Using Depth and Complexity frames to teach vocabulary is straightforward, but there are a lot of choices to make.
That flexibility is terrific because it means they can work for everyone.
Even if you’ve never used them before, you’ll be fine!
? The Basics of Depth and Complexity Frames
Frames are a graphic organizer specifically used with the Depth and Complexity framework.
A Depth and Complexity frame looks like a picture frame.
Just like a picture frame is designed to highlight a picture (as well as protect it), a Depth and Complexity frame’s purpose is to draw attention to an idea and keep it there.
In the middle of the frame, it is common to place some representation of the topic of the frame. That may be a picture, a graph, or a body of text.
Each of the sections surrounding the center will have one or more elements of Depth and Complexity associated with it, as well as a task statement/question that tells students what to do in that section.
While there is a set of common icons used to represent the elements of Depth and Complexity, my co-author Ian Byrd and I use emoji (the great idea of his wife, Mary) for ease of use. You can use whatever you prefer.
? Decide on the word(s) to use in the frame
Let’s look at how to choose the word or words that make good Depth and Complexity frames.
Frame activities include Tier 2 and Tier 3 vocabulary words.
You can read more about that here, but essentially, it’s best for general vocabulary words and academic vocabulary words.
This is this is one of the activities that works equally well for both types of vocabulary, which is another reason why it’s such a great activity.
When you’re choosing a word, you not only have to decide if it’s a word that students need to know, but also if it’s a word that is important enough to justify doing an entire activity over it.
Some words don’t merit that. Some words are best practiced with quick games that have a number of words being reviewed or perhaps just a brief discussion.
When you’re looking at a framing activity, you’re looking at an entire lesson. It’s probably 30 minutes of instructional time, so you want to make sure that you’re not using the word that just doesn’t rise to that level of devotion.
I’m using the word “word” in this article, but that can be a multi-word term, also. The “word” can be “order of operations.”
? Suggestions for choosing words for frames
The words that make great candidates for Depth and Complexity frames are words that:
- students frequently confuse
- are important to an upcoming unit
- you know students will enjoy learning about
- lend themselves to a graphic representation or have a possible visual connection
- enrich their writing
- are interesting in their construction, history, connotation, or application
? One word or many words?
Remember when I said this activity was flexible? One of the most important ways this is true is that you have many options when it comes to word choice.
? Whole Class
In this option, all students in the class can make a frame of the same word.
This works great for words every student needs practice on or words that frequently misunderstood in a way that impacts instruction.
? Small Groups
You may divide students into small groups and give each group a different word to frame.
When you have multiple students working on the same frame, you can do two different things:
- Have them use a different color pen (or font color) so you can see who wrote what.
- If using a printed frame, you can have the students cut the frame apart, each complete a section, and then paste the frame back together on a piece of paper.
You may choose a set of words to use and have different students in the class working on different words.
This doesn’t mean you have to choose 27 different words.
You may have five or six words that are being used, with multiple students completing frames on the same word independently.
If you have students working with more than one word, it allows you to deepen the experience.
When the frames are completed, you can have students do either an in-person or virtual gallery walk.
Students can look at each other’s words and learn more in hopes that then they will learn about a word that they want to translate into their own lexicon.
? Choose the type of frame you’re going to use
Once you have chosen the word or words you’re going to use, you’ll need to choose the type of frame your students will work with.
? Printable, digital, or student-created
You can use a printable paper version of a frame, you can use a digital frame, or you can have students create their own frame.
If you make your own, you may want to follow the layout I followed. Notice that I used different colors to correspond to where students type. That makes it easier for students to match the task statement to the space they’re supposed to write.
This is a sample of what it would look like when the teacher had filled in the questions:
If you have students create their own frames, they can sketch on paper or on a device if their devices have that capability.
In this example, you can see a sketched frame about states of matter sketched out with only two sections. While four is typical, you can make as many (or few) sections as you want.
They can also construct a frame out of shapes in PowerPoint or Google Slides™.
This last option may add considerable time to the task, so you have to decide if it’s worth it to you.
? Frame format
Once you’ve chosen printable, digital, or student-created, decide what format of frame that you’d like to use.
Even though most people are used to seeing only a couple of different frames, there are actually a number of different frames from which to choose.
In this version, I put lines in the sections for students to write on.
This version is vertically oriented with two sections.
The possibilities are nearly endless.
You will want to choose the one that you feel like is best for both the word and the students.
If you have different students working different words, you may have different frame formats in the same activity. That’s perfectly fine.
You may find that certain words lend themselves to different formats of the frame.
For example, some words lend themselves to a graphic representation. If so, you will want to choose a frame that has a large central space (like the first example in this article).
If you’re looking at a word that you only want students to have three sections to complete as opposed to the traditional four, then you want to choose a frame that is adaptable to that.
Some of it is just aesthetics and novelty.
I like to change out the way that students look at frames just so that they don’t feel same-y.
- Sometimes I have them vertically oriented.
- Sometimes I have them horizontally oriented.
- Sometimes I put the prompts inside the sections, and sometimes I don’t.
- Sometimes I leave the sections blank, and sometimes I have the lines in the sections.
Play around with formats you like. You will find that a few types work well for you, and you will return to those again and again.
Now you have your words and your frame format. Next up: the Depth and Complexity elements themselves.
? Which elements of Depth and Complexity to use
Just like the frames themselves, you may have students all working with same elements of Depth and Complexity, you may have them working with different elements.
Altering the elements is a simple way to differentiate and accommodate different learners.
Even if I have students working on the same word as the vocabulary target word, I can have them using different elements of the Depth and Complexity framework in each section of the frame.
I might have high-ability learners working with the Complexity elements (Change over Time, Across Disciplines, and Multiple Perspectives).
In order to accommodate a struggling learner, I may have the same element of Depth and Complexity in every section of the frame.
For example, I could have them looking at the Details of the word in four different ways.
There’s no right or wrong. There’s only what works best for you based on what you need your students to understand about the word or term.
There is no hierarchy within the elements themselves.
Even though some people see Language of the Discipline or Details as being less rigorous or requiring less critical thinking, than, say, Ethics or Trends, that isn’t necessarily true.
While the elements are divided into eight Depth elements and three Complexity elements, in practice, you can simplify or complexify any of them.
The power of the elements comes from the combination of the thinking skill connected to the element and the question that the teacher is asking.
That leads us to the next choice in setting up your frame: writing your questions.
? Writing questions for your Depth and Complexity frames
While we often focus on the frame format and the elements we choose, the most important thing that you will do when working with frames is crafting your questions.
Good questions are the key to getting your students to think critically.
This is where you truly accommodate or differentiate for different learners.
You may wonder why I put it as the last step, if it’s so important.
It’s because you can’t write the questions until you’ve chosen what words or terms you’re using, what format of frame students will be working with, and what elements of Depth and Complexity are going to be explored.
All of these must be decided before you craft your questions.
To develop strong questions, we want to make sure that we are combining the elements of Depth and Complexity with a high level of thinking.
This assumes that the student has had sufficient exposure to the word to know how to approach it deeply.
If students are constructing a frame for an unfamiliar word, then you will need to invest some of the frame space on fundamentals.
It’s also possible for a quality Depth and Complexity frame to stay at the knowing and understanding levels of Bloom’s.
I strongly advise against putting the question inside the frame itself.
That takes space away from the student response. Instead, put the question around the margin of the page, a screen display, or a separate page.
? Sample Questions
These sample questions are for a broad range of grade levels, so some may be far too simple or complex for your students.
The questions you ask must be driven by the outcome you desire.
If you want students to learn the basics of the word (spelling, grammar, etc.), your questions will be different than if you want them to integrate a vocabulary word into their writing or understand a concept in your discipline that they will be learning.
I’m sharing them to give you an idea of what kinds of questions you can use with a frame.
While these are just a few examples of the kinds of questions you can use in a Depth and Complexity vocabulary frame, I hope you find them helpful.
- What is the most important sound or letter in this word?
- What other words can you think of that have the same number of syllables as this word?
- What other words can you think of that have the same vowels as this word?
- In what important ways is this word the same as/different from ____ word?
- How many letters does this word have?
- Which letter in this word makes your mouth open widest?
- What tells you this word is connected to ______?
- Make a synonym/antonym for this word using only one word (no phrases!)
- Explain the feeling evoked by this word.
- Explain why this word is worth knowing.
- What family of words does this word best fit in? (e.g., “words about love” or “words bout shapes” or “words about character flaws” or “words about the Earth”)
Language of the Discipline:
- Change this word into an adjective/adverb/noun.
- Look up the etymology of the word and explain it in your own words.
- What is the most common form of this word (part of speech)?
- How essential is this word to understanding the concept of ______?
- What is the most important pronunciation rule this word is following?
- What phonics rule(s) is this word breaking?
- Can you think of an instance when you’ve heard or read this word used incorrectly?
- Explain the rules involved in using this word. “You should use this word when…” or “You should use this word as ______.”
- Who shouldn’t use this word and why?
- What letter could you take out that would still allow the word to be pronounced the same?
- Does this word/phrase follow a form of meter? If so, describe it.
- Write another word or phrase that has the same pattern of consonants and vowels (or stressed/unstressed syllables).
- How would you categorize this word if you were putting it in a group of words about _____? (For example, if you’re looking at science vocabulary, you might say “a group of words about life forms” or “a dictionary of cycles in science.”)
- Why do we use this word instead of __________? (For example, if the target word is “egregious,” words that could go in the blank would be “bad” or “terrible.”)
- Would you rather use this word or ___________?
- Why do we like long words like this?
- How would you change this word if you had to spell it differently from the way it’s currently spelled?
- Why is this a word that should be whispered/shouted?
- Is this word more or less common than it used to be?
- Why do people use this word more than ________?
- In [insert other language], this word is spelled [insert word]. What similarities and differences do you see between the words?
- What are the disadvantages of using this word to represent ________?
- What is the greatest advantage of using this word to represent ______?
- How do the advantages of using this word outweigh the disadvantages? (or vice versa)
- Why does your teacher want you to own this word? How does he/she think it will benefit you?
Change over Time:
- How has the connotation of this word shifted from its original etymological meaning?
- How would you have described this concept five years ago before you knew this word?
- How likely are you to use this word [x] number of years from now?
- Describe a time in your past when this word would have been useful to know.
- What did you used to call this concept before you learned this word for it?
- Convince a ___-grader that this is a word worth knowing.
- Explain this word as if you were talking to your _____ teacher (insert other content area here).
- If not this class, what other class are you likely to use this word in?
- We’re going to be studying ______ later on in the year. Explain why we’d be likely/unlikely to use this word when studying that.
- Who would most like this word, a scientist or a baker? (substitute any professions)
- Who may find the use of this word objectionable?
- What character in [insert story] would be most likely to use this word?
- Is this word more likely to be used by [x] author or [x] author?
- How does a/the ______ feel about this word?
The possibilities are endless!
As you use frames to explore vocabulary, you’ll get better and better at creating questions that get kids to the level of mastery of the word they need to use it correctly and effectively.
? Non-question Task Statements
In addition to questions as shared above, you can also give students tasks to complete in the sections of the frame.
Here are just a few ideas:
- Write this word in a declarative sentence that has seven words.
- Write this word in an interrogative sentence with an appositive phrase.
- Write a sentence using both this word and an antonym of it.
- Defend the use of this word to someone who thinks it’s offensive/silly/useless/inferior to another word/too hard to spell.
? The Center of the Frame:
While it’s common to just put the word or term itself in the center of the frame, there are other options. Consider these ideas:
? Have the students:
- Sketch the concept represented by this word
- Write this word in one color and then write examples of it in a smaller size and different color
- Draw a person using the word
- Draw a non-example of this word and draw a red circle with a line through it over your drawing
- Write the word backwards
- Write the word in a different language
- Zentangle the word
Remember that example of a sketched frame above? It had a sketched center as well.
When students work in the center of the frame, there’s another opportunity for deeper thinking.
? The teacher can place:
A section of text using the word (if you do this, use questions that explore the word in this context)
- A graph
- A chart
- A map
- An illustration
Feel free to have a task statement or question related to the content in the center of the frame.
For example, if you have a graph of the slope of a line, you can ask a question about it or have a task related to it.
If you have a diagram of the water cycle, you can have them label it.
Here’s an example with text:
? Wrapping Up:
Depth and Complexity frames make a perfect vocabulary activity.
Even if your students are not particularly used to using the Depth and Complexity framework itself, frames can be a great way to get them interested in it.
Because they’re already working with the content piece that they know (the word itself), learning the different elements of Depth and Complexity can be very seamless and almost effortless.
If your students have not had very much exposure (or none!) to the Depth and Complexity framework, my suggestion would be to introduce them to a single element of Depth and Complexity and have all of the questions on the frame be related to that particular prompt.
If you use this Depth and Complexity frames to teach vocabulary (or anything else!), please tag me on Twitter (@gifted_guru) or email me at firstname.lastname@example.org.
I’d to see what you and your students come up with! ? | <urn:uuid:6411ef1c-803e-4f7e-916e-19ea9704b4bb> | CC-MAIN-2022-33 | https://vocabularyluau.com/teaching-vocabulary-with-depth-and-complexity-frames/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00497.warc.gz | en | 0.936344 | 4,291 | 3.734375 | 4 |
Water Conservation and Demand Management in Mining
The Water Conservation and Water Demand Management (WC/WDM) Targets for the Mining Sector were developed as a collaborative project between the Chamber of Mines and the Department of Water and Sanitation, with active participation from the mining sector. Most new Water Use Licenses prescribe the implementation of a WC/WDM strategy, and this is expected to become a requirement for all WULs issued in the future. The accompanying guideline and implementation documents define a set of Water Use Efficiency Indicators against which a mine's water conservation and demand management performance is measured. Some of the principles in these guidelines are drawn from international best practice, such as the Global Reporting Initiative and the Water Accounting Framework for the Minerals Industry developed by the Minerals Council of Australia. The diverse sources used to develop the guidelines make them relevant to all mining operations, and they should be studied by staff working in environmental and water management departments.
There has been a significant shift in environmental and sustainability awareness in the mining industry, which makes water conservation and demand management in the mine's own interest, over and above the legislative requirements. Through these guidelines and benchmarks, each mining operation can be evaluated on its water use, creating an industry standard based on actual operational results. The guideline prescribes a series of steps for developing a WC/WDM plan, and this article provides an overview to help operational staff and management understand the implications.
It is important to note that the WC/WDM guidelines are written from the regulator's perspective, which treats water as a national resource to be used sustainably and responsibly. The mining industry is seen as a large consumer of water that can negatively impact the surrounding water resource through its discharges. The current guidelines focus on volumetric consumption and impacts on water resources; however, a mine is also required to act responsibly regarding discharge water quality and the quality limits set in its Water Use License. Given the high level of detail at which the water balance is developed, it is recommended that potential water quality improvement interventions also be identified and their feasibility determined.
1 Developing a dynamic water balance
All mines are required to develop accurate, computerized water balances as set out in Best Practice Guideline G2, or as further specified where a Water Use License imposes additional requirements. The purpose of a computerized model is to determine the impact of potential WC/WDM interventions and of environmental conditions such as drought or storm events. Regulation 6 prescribes the minimum requirements for the water balance, which read as follows:
Minerals and Petroleum Resources Development Act shall compile an activity-based dynamic water balance for climatic variations, including all inflows and outflows from the activity, reflecting all surface and groundwater interconnections with the water resource.
A person referenced in regulation 6(1) or a holder of a mining right or production right shall ensure that the water balance:
incorporates accurate values based upon measured volumes for the water abstracted, discharged, beneficiation process water intake, outflow to and return water from waste management facilities, and water abstracted from mine workings;
incorporates accurate values determined from suitable measurement or modelling of rainfall, runoff, seepage, and evaporation;
It is required that the water balance is submitted to the Department of Water and Sanitation as part of the Integrated Water and Waste Management Plan, together with the monitoring data, unless stipulated otherwise in the water use license. The water balance should be kept current by ensuring that all measured and modelled data are reflected in the model, with changes incorporated at a frequency of at least monthly. All measuring devices used to develop the water balance shall be easily accessible, properly maintained, calibrated and in good working condition. The water balance should be in electronic format and capable of simulating different operational conditions.
There are two primary reasons for the prescription of a computerized water balance:
The water balance should be a management tool capable of accepting new data inputs and process changes based on the operational requirements of the mine, thereby meeting the dynamic criterion.
It should also be a functional simulation tool that can simulate various intervention projects to evaluate the potential impacts on the mine water balance and, ultimately, the Water Use Efficiency Indicators.
During the programming of the water balance, it is crucial to consider the various classifications of water sources and sinks as defined by the guidelines, since these classifications are used in the calculation of the Water Use Efficiency Indicators. The classifications are described in the table below; all values relate to a specified period, such as per day or per annum:
Table 1: Water Source and Sink Classifications
In the model, each water stream should be tagged with a value identifying its classification so that the WUE indicators can be calculated automatically. The guideline also specifies that the WUE indicators are calculated for the overall mine operation as well as separately for the following categories: mining operations, beneficiation plants, and residue disposal sites. Keeping this in mind while programming the dynamic model allows the streams to be grouped by category, which simplifies the calculations.
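As an illustration only, the minimal Python sketch below shows one way such tagging could be implemented: every stream carries a classification and an operational category, and totals can be aggregated automatically for any combination. The classification and category names, the helper function, and the stream volumes are all assumptions for the example and do not reproduce the Table 1 definitions.

```python
from dataclasses import dataclass
from enum import Enum

class Classification(Enum):
    NEW_WATER = "new water intake"               # e.g. potable or raw water supply
    GROUNDWATER_INGRESS = "groundwater ingress"  # fissure water entering the workings
    RECYCLED = "recycled / reused water"
    DISCHARGE = "discharge to environment"
    THIRD_PARTY = "transfer to third-party user"
    LOSS = "loss (evaporation / seepage)"

class Category(Enum):
    MINING = "mining operations"
    BENEFICIATION = "beneficiation plant"
    RESIDUE = "residue disposal site"

@dataclass
class Stream:
    name: str
    classification: Classification
    category: Category
    volume_m3: float  # volume for the reporting period (e.g. per month)

def total(streams, classification=None, category=None):
    """Sum stream volumes, optionally filtered by classification and/or category."""
    return sum(
        s.volume_m3
        for s in streams
        if (classification is None or s.classification == classification)
        and (category is None or s.category == category)
    )

# Hypothetical monthly stream register (all values invented)
streams = [
    Stream("Potable supply to plant", Classification.NEW_WATER, Category.BENEFICIATION, 12_000),
    Stream("Fissure water pumped", Classification.GROUNDWATER_INGRESS, Category.MINING, 45_000),
    Stream("Return water dam to plant", Classification.RECYCLED, Category.BENEFICIATION, 30_000),
    Stream("TSF seepage loss", Classification.LOSS, Category.RESIDUE, 5_000),
    Stream("Treated discharge to river", Classification.DISCHARGE, Category.MINING, 20_000),
]

print("New water, whole mine:", total(streams, Classification.NEW_WATER))
print("Recycled water, beneficiation:", total(streams, Classification.RECYCLED, Category.BENEFICIATION))
```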
All modern Water Use Licenses require a mine to measure and monitor the listed water uses under Section 21 of the National Water Act, which cover abstraction of water for use; making water safe, treating it and discharging it; disposing of waste containing water; and reusing treated effluent. Given the scarcity and reduced availability of water, mines have realized that it is essential to monitor these flow volumes over and above the requirements of the Water Use License. The volumes that are typically most difficult to measure, or for which the cost of monitoring equipment is hardest to motivate operationally, are the discharge streams. The importance of recycling and reuse is emphasized by the WUE indicators addressing the percentage of wastewater generated that is not reused and the percentage of water recycled. If these indicators are calculated accurately, opportunities can be identified immediately to reduce the intake of new water, which typically comes at a high cost; this is one of the reasons the indicators were developed. With this motivation clearly stated, operational staff should be able to motivate for the capital required to install the necessary measuring and monitoring equipment.
Another way to verify the accuracy of flow measurement is calibration at the required intervals, or spot measurements with mobile monitoring equipment. Calibration is also a requirement of the Water Use License but is typically treated as a grudge purchase when budgets are allocated. Considering the high cost of potable water and the negative impacts of excessive discharges, it is essential to motivate clearly why calibrations are required and must be done at the recommended intervals.
Fissure water ingress is typically a difficult flow rate to measure because of the many sources found in underground travelling ways and mining stopes. Some mines estimate these flow rates during shutdown periods when all pumping is stopped and dam levels are monitored to measure the time required to fill known dam capacities. This gives a reliable estimate, since groundwater ingress is not expected to change significantly over short periods; it is nonetheless recommended that this measurement be repeated at least once a year. The risk remains of mining into new aquifers or groundwater channels during mine development, which would cause a significant change in the overall water balance. If a new water source is encountered, an attempt should be made to collect the water in a channel and estimate the flow until accurate measurement can be done. If the mine already has sufficient groundwater to supply its activities, it is recommended that the ingress be sealed through chemical grouting to prevent the water from entering the mine water reticulation system. Excess water ingress increases the overall water input, adds treatment and pumping costs, and negatively impacts the WUE indicators.
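A minimal sketch of the shutdown dam-fill estimate described above, with all figures invented for illustration: if pumping is stopped and a dam of known plan area rises by a measured amount over a measured time, the average ingress rate follows directly (assuming a roughly constant plan area over the level rise).

```python
def ingress_rate_m3_per_day(level_rise_m: float, dam_area_m2: float, hours: float) -> float:
    """Estimate average groundwater ingress from a dam level rise during a pumping shutdown."""
    volume_gained_m3 = level_rise_m * dam_area_m2  # assumes a constant plan area over the rise
    return volume_gained_m3 / (hours / 24.0)

# Hypothetical test: a 1 500 m2 underground dam rises 0.8 m over a 6-hour shutdown
print(round(ingress_rate_m3_per_day(0.8, 1_500, 6.0)), "m3/day")  # -> 4800 m3/day
```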
From a WC/WDM perspective, all discharges or losses to the environment through seepage or evaporation negatively impact the WUE indicators. The model allows only one positive form of discharge: transfer to a third-party user with a defined beneficial use for the water and a memorandum of agreement proving the validity of the off-take. Potential third-party users include industrial users with a process water requirement, while good quality groundwater could be used for agricultural purposes or livestock watering if no harmful elements are present. Another possibility is treating impacted water to potable standards and supplying it to third-party users at a cost that covers the treatment rates; this requires approval under the Water Act as well as permission from the local water services provider. Considering the current national water shortage, it is in the best interest of all affected parties to increase the availability of potable water sources, which is why such projects should be investigated and developed further.
1.2 Calculating Water Use Efficiency Indicators
The figure below indicates an example of a classification of the water streams for the different operational categories.
Figure 1: Water Use Efficiency Indicators Calculation Diagrams adapted from Benchmarks for Water Conservation and Water Demand Management (WC/WDM) in the mining sector
The WUE indicators are sensitive to the Life of Mine plan, which projects the mining program into the future, including the planned tons to be mined and any expansions to the mine footprint. The environmental team responsible for developing the WC/WDM strategy must have an excellent understanding of the proposed Life of Mine in order to construct an effective and, importantly, realistic strategy.
Whether a mine is in the development, operational, or closure phase determines the extent of the possible interventions and their effect on the projected WUE indicators. Because consumptive water use is expressed per ton of Run of Mine (ROM) ore, a mine approaching closure will find it increasingly difficult to maintain its WUE indicators over the strategy period, since production decreases as the reserves are depleted while water use does not fall proportionally. This places further emphasis on the role of water use in the final closure strategy and on the requirement to measure and manage water use throughout the mining process. For new, developing mines the planned production tons are expected to ramp up quickly, but the associated impacts on water sources, such as a potential increase in underground ingress water, can be overlooked. This has implications for the design capacity of the pumping and distribution networks and correlates directly with the operational cost of continuously dewatering the mine. It is therefore essential that all geohydrological studies and their findings are reviewed in detail and are still up to date when the proposed strategy is developed.
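The effect is easy to show with a short, purely hypothetical calculation: holding the annual consumptive volume constant while ROM tonnage ramps down towards closure drives the per-ton indicator up.

```python
consumptive_use_m3 = 1_200_000  # hypothetical annual consumptive water use

# Hypothetical ramp-down of production towards closure
for rom_tonnes in (4_000_000, 2_000_000, 800_000):
    indicator = consumptive_use_m3 / rom_tonnes  # m3 per ton ROM
    print(f"{rom_tonnes:>9,} t ROM -> {indicator:.2f} m3/t")
```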
For mines that are already operational, there may be opportunities to optimize water use by adding flow measurement points in the reticulation system.
The first set of indicators is purely volumetric, covering the total water use of the mine, the consumptive water use, and the volume of wastewater lost from the mine operations. Detailed descriptions and calculations of these variables are given in the table below:
Table 2: Volumetric Indicators
These indicators are based on volumetric loading only and do not consider the efficiency of the mine's water use through recycling or reuse of water streams. It is important to note that a mine's total water use can be heavily influenced by its location: a mine situated in a compartment with large amounts of groundwater typically cannot prevent that water from entering the mining area. The consumptive-use indicator, in turn, illustrates the importance of social and labour plans between mines and their surrounding communities and is seen as a driver for interaction and collaboration with the local community.
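Since the Table 2 definitions are not reproduced above, the sketch below uses simplified working definitions that are assumptions for illustration only: total water use as all water taken in, consumptive use as intake minus water returned or transferred, and wastewater lost as discharges plus losses. All volumes are invented.

```python
# Hypothetical annual volumes in m3 (all values invented)
new_water           = 150_000   # potable and raw water taken in
groundwater_ingress = 540_000   # fissure water pumped from the workings
discharge           = 240_000   # water released to the environment
third_party         = 60_000    # treated water supplied under agreement
losses              = 90_000    # evaporation and seepage
rom_tonnes          = 2_000_000

total_water_use = new_water + groundwater_ingress           # simplified working definition
consumptive_use = total_water_use - discharge - third_party
wastewater_lost = discharge + losses

print(f"Total water use: {total_water_use:,} m3")
print(f"Consumptive use: {consumptive_use:,} m3 ({consumptive_use / rom_tonnes:.2f} m3/t ROM)")
print(f"Wastewater lost: {wastewater_lost:,} m3")
```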
Table 3: Water Use Efficiency Indicators
Once these indicators have been calculated using the dynamic water balance, each mine is required to evaluate its scores against the latest national benchmarks, the most recent of which is the Benchmarks for Water Conservation and Water Demand Management (WC/WDM) in the Mining Sector (June 2016) report. It is important to note that these benchmarks were developed from a detailed analysis of 39 mining operations by consultants and the regulator, based on the information available at the time. The benchmarks will be updated as required by the regulator once there have been sufficient submissions of WC/WDM strategies from the industry. Because of the abundance of coal, gold, and platinum mining operations, these types of mines have their own benchmark values, while all other mining operations are grouped under the classification 'Other'. It is foreseen that these will be expanded over time to identify more realistic benchmark targets specific to those industries.
All mines should strive to meet the benchmark values for the WUE indicators of the top three performing mining operations in that specific category. The WC/WDM strategy should be developed to improve the WUE indicators to meet and exceed the benchmark values.
1.3 External and Internal Variables affecting WC/WDM Targets
There are specific differentiating factors that affect how a mine can achieve the water conservation and demand management targets, and the guidelines identify these according to specific classes. Some of these factors are inherent to the mine resources and area, which cannot be changed, whereas others can actively be impacted through specific actions or management procedures. Understanding these differentiation factors and their definitions is essential in developing realistic WC/WDM targets and highlights why they are unique to each mining operation. The guidelines proposed the following classifications:
Table 4: Classification of variables affecting WC/WDM targets
1.4 Developing a Water Conservation and Demand Management Strategy
Once the baseline for the various WUE indicators has been developed, the mine or an appointed consultant must develop an implementation methodology. This methodology should provide technical guidance on specific intervention projects that will improve the WUE indicators and improve the mine’s overall standing. These interventions should be site-specific, based on the local requirements of the mine, availability of water resources, and the requirements for the mining operations. These interventions should aim to improve the WUE indicators in the shortest possible time. They should be incorporated into the mines Integrated Water and Waste Management Plan (IWWMP), which is also a requirement of the Water Use License. The basic principles that should be followed to identify potential interventions are shown below:
Options to reduce consumptive water use
Options to reuse or recycle water
Options to identify and implement alternative technologies that are less water-intensive
Options to treat poor quality water to meet end-use standards and reduce water purchased or sourced from external parties
As these interventions are developed and evaluated for feasibility, the dynamic water balance can be used to evaluate the efficiency of improving the WUE indicators. It is recommended that the interventions be compiled as a table allowing for a rating regarding implementation cost, effect on WUE indicator, implementation time, and risk of failure or non-action. Part of the technical requirements of this strategy is to develop a high-level capital, and operational cost estimate for the proposed interventions as this directly influences the feasibility of the project. It is essential to consider the projected life of mine for the mine in question as return on investment is critical when new projects are being evaluated. Where further specialist studies or detailed designs are required, it should be included as a first phase of the project for implementation as soon as possible, and as part of the annual updates to the WC/WDM strategy these interventions should be further developed and expanded as the results of the specialist studies become available.
Once the most effective and feasible interventions have been identified and agreed upon by the mine operations, they are combined into a five year WC/WDM strategy with implementation dates that the mine has committed to achieving. This strategy should demonstrate that the mine and the consultants have considered all the viable water conservation and demand management interventions possible. Based on their specific mining conditions and environment, the optimal Water Use Efficiency targets.
The entire WC/WDM strategy will have to be submitted via the Standardized Water Accounting Framework (SWAF), an online database currently under development by the Department of Water and Sanitation as of the end of 2020. This online database will serve as an automatic tracking system for the progress with the submitted plan and the implementation of the various proposed interventions. It is also envisaged that the online system will update the WUE indicators for all mines which have submitted their WC/WDM strategies and data for the national benchmarks to be updated at a specified time. All mines will be required to update the WC/WDM strategy every five years to ensure it stays relevant and considers operational changes to calculate the WUE indicators accurately. It is crucial to study the current benchmarks and implementation documents for Water Conservation and Water Demand Management. The requirements for submissions of WC/WDM strategies on the SWAF online system will be stringent once implemented.
1.5 Potential Water Conservation and Water Demand Management Measures
The total mining footprint is inherently determined by the location of the ore reserves that are identified through exploration projects. The surrounding geology and aquifers determine the abundance or scarcity of water, and the mine is forced to develop the mining infrastructure around this.
1.5.1 Underground and Surface Mining Management Measures
To improve WUE indicators, the priority is to avoid ingress of excessive groundwater and management of water across the shaft and mining areas. This can be achieved through the following interventions:
Adjust underground mining and development plans to avoid water-bearing strata or aquifers, and where it cannot be avoided, developed engineered solutions such as bulkheads to prevent contamination of the ingress water and separate distribution system to ensure water remains pure until it can be discharged.
Improve / Optimize underground ventilation systems to reduce the requirement for underground cooling, a large consumer and source of contamination for process water due to the cyclic increase in salt load.
Backfill and seal off old underground mine workings that have been mined out to reduce potential ingress points. Preventing contact with fresh groundwater containing dissolved oxygen will also reduce the kinetics for the dissolution of sulfidic compounds that can generate acid and cause additional pollution.
Install online flow meters throughout the distribution network for both process and potable water to monitor leaks and wastage. Online metering and database monitoring allow for identifying trends or sudden spikes, which would typically indicate a leak or change in operation that should be addressed. This also allows for an operational usage audit of various sections of the mining footprint to identify poor practices and set specific targets to improve consumptive water use.
A quality audit of the water uses can determine whether the water source for specific activities is fit for use. There are usually opportunities to reuse service water or reduce additional water intake if specific quality requirements are met. This might be achieved by installing a basic filtration system to allow for process water to be reused for gland service instead of additional potable water being purchased.
Potable water purchased from municipalities or local water boards is one of the more expensive operational costs. It increases the overall water use of the mine, which detrimentally impacts the WUE indicators. Available water sources should be analyzed regarding the quality and potential for treatment to potable standards. This will improve the water reuse indicator and reduce overall mine water use by reducing the water pumped to the mine.
Proper stormwater management and infrastructure ensure that the maximum percentage of rainwater is contained and available for effective reuse. If the trenches and drain systems are insufficient, this could result in contamination of the water source and potential impacts on the surrounding environment. Ensuring there is sufficient storage capacity in the stormwater and pollution control dams allow for the maximum buffer capacity for the reuse of rainwater, especially in areas constrained with groundwater.
An evaluation of the open water storage dams and reservoirs on the mine can identify the extent of water losses due to evaporation and whether there is potential for interventions to reduce the surface area available for evaporation.
1.5.2 Minerals Processing Management Measures
Depending on the type of mineral being mined and processed, the various metallurgical processes can vary greatly. However, there is always a portion of the operations dependent on water availability to ensure it can work effectively. Through implementing specific control measures or best practices can result in improved water use efficiencies, such as the examples below:
Where possible, the dry conveyance of ore is recommended to replace the use of slurry pipelines. This requires significantly more capital and maintenance and is limited to the maximum distances feasible. However, it dramatically reduces water use requirements and the potential impacts in spillage.
Ore and waste rock stockpiles provide a large mass and surface area for contamination when rainwater is not effectively diverted. Stormwater management principles should be followed strictly to prevent contaminated runoff from entering the surrounding environment and minimizing seepage from these facilities. It is also essential to manage the levels and capacity of all storage dams or reservoirs to ensure there is sufficient capacity in the case of high rainfall events to prevent spillages. In some cases, the build-up of silt in these dams reduces the overall capacity and no longer meets the design specifications to ensure GN 1147 compliance is met regarding 1 in 50-year storm events.
Where leaching processes are used, as in gold processing, the Specific Gravity (SG) of the underground ore slurry feeding the leaching tanks are controlled through the primary thickeners. There are typically online monitoring and control measures capable of optimizing this SG. However, any additional water users such as for hose down, flushing, or make-up are collected and pumped into the final residue tanks in all the subsequent process steps. This results in significant dilution of the solid content of the final residue, which results in significant losses of process water to the tailings storage facility, where the bulk of the water is lost due to evaporation or seepage. Increasing the average residue slurry density from 1.3 to 1.35 will reduce the volume of water pumped to the tailings storage facility by 15.4%, which is an immediate reduction in water lost due to evaporation and seepage. This would require additional water recovery or management processes to be installed through monitoring or mechanical equipment installed in the residue management area of the plant.
1.5.3 Residue Disposal Management Measures
Residue disposal facilities or Tailings Storage Facilities (TSF) is the final step in the ore beneficiation process once the valuable minerals have been extracted from the ore. The responsible and sustainable disposal of the waste products will minimize the potential impact on the receiving environment and mitigate impacts on the water resources.
Historic waste rock dumps contain inherent leaching or stormwater runoff contamination risks. The reclamation or rehabilitation of these footprints can reduce the potential impacts and reduce the dust suppression requirements.
Guidelines recommend waste disposal into old opencast pit workings to reduce the surface footprint of waste after taking due consideration and having evaluated the potential impacts on water quality. This will effectively reduce the available contamination area for stormwater and assist with the free drainage of pits once adequately rehabilitated.
Seepage from Tailings Storage Facilities is usually managed by installing liners or drainage systems. However, there is always a risk of increased seepage due to external factors or imperfections in the designs. Seepage from tailings facilities poses a quality contamination risk to the surrounding environment and the loss of return water from the dams. Hydrogeological studies can identify the quantitative losses due to seepage. Based on the findings, it may be viable to install curtain drains or boreholes to collect the seepage water for reuse and pollution prevention.
New requirements under the National Environmental Management Act – GNR 1147 require that concurrent rehabilitation be implemented at all mining operations and included in the mining operational plan. By actively implementing rehabilitation methods such as sloping and vegetation cover, the water losses related to the facility can be reduced by decreasing the dust suppression requirements and reduction of seepage losses. | <urn:uuid:fae62295-7cae-441a-9fd2-9f85f64e06c1> | CC-MAIN-2022-33 | https://epcmholdings.com/water-conservation-and-demand-management-in-mining/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00296.warc.gz | en | 0.933661 | 4,822 | 2.84375 | 3 |
Adramatic series of school shootings between 1995 and 1999 startled the nation. Deadly violence within schools struck fear in the public and particularly school-age youth across the nation. Beginning in 1989, there had been an increase in school violence, ranging from verbal harassment, threats of harm, and violent crime.
Overall national violent crime rates dropped after 1993 and continued at lower levels into the twenty-first century. Similarly, following a period of increased violence by juveniles (youth less than eighteen years of age) between 1989 and 1993, youth violence had begun to level off or decline as well. Crimes reported by schools dropped 10 percent between 1995 and 1999. The decrease in youth violence, however, was less than the overall trend.
Public concern about school violence rose significantly as school shootings dominated the media's attention from 1997 to 1999. This was despite the fact that these high-profile crimes occurred during a period in which violent deaths related to schools and school activities had decreased by 40 percent.
Suddenly in the late 1990s, some middle- and upper-class white youths were lashing out with planned acts of cold-blooded violence against their schoolmates and teachers. School violence was no longer considered an inner-city problem. In reaction to the rise in the number of multiple homicides, governments and school districts adopted new measures to identify and respond to possible problems before they erupted.
Though difficult, teachers and administrators turned to students for help; asking them to report threatening comments or dangerous activities. They also sought to reduce negative behavior such as bullying, which was generally ignored in the past or noticed but dismissed as typical adolescent behavior. As the United States entered the twenty-first century, the public considered schools to be dangerous places. Statistics actually indicated the opposite and showed that schools were the safest public places in the nation.
The history of school discipline
School discipline problems have substantially changed through time. Disciplinary action usually concerned talking without permission, being disruptive in class, running in the hallways, or smoking behind the gymnasium. By the 1970s dress codes became a key discipline issue; in the 1980s it was fighting among students. By the end of the 1980s and into the 1990s, gang activity entered schools. Along with it came the problems of weapons, substance abuse, and violent assaults against other students and school staff. Some students even carried firearms for protection.
Crime victim surveys in the early 1990s showed significant rates of robbery or theft and assaults in schools. Victims tended to be in inner-city schools, male, and of a racial minority. While theft was the most common crime in schools in general, assault was the most frequent violent crime. Multiple homicides in schools during this period were uncommon, though there were two in 1992. This yearly figure would more than double by the late 1990s when 3 percent of teachers became victims of violent school crime.
Until the late 1990s school violence was largely a problem of inner-city schools where there were high poverty and crime rates, drug trafficking and prostitution, and poorly funded school districts. The growth of gang activity in schools after 1989 only reinforced these perceptions. The gang presence more than doubled in just four years by 1993.
In a dramatic shift, the highly publicized school shootings beginning in 1995 took the issue of school violence to the suburbs and rural communities of predominately white America. Accordingly the focus on causes of school violence expanded to include such issues as student peer pressure, or how some students were ignored and became outcasts. These behaviors appeared to trigger violent retaliation.
In the early twenty-first century, the top school violence concern among students, parents, and school officials was shootings, though theft and other crimes were the most common. Before 1995 school shootings were infrequent and usually did not lead to multiple deaths. One early school shooting occurred in San Diego, California, in January 1979. Seventeen-year-old Brenda Spencer used a rifle she had just received for Christmas to shoot at an elementary school across the street from her house.
During a six-hour standoff, Brenda killed two men trying to protect the schoolchildren and wounded eight children and a police officer. Spencer showed no emotion when finally captured. In March 1987 in Missouri twelve-year-old Nathan Ferris, an honor student, grew tired of being teased. He took a gun to school and when teased shot and killed the student and then himself.
The rash of school shootings of the 1990s began in Giles County, Tennessee, on November 15, 1995. Seventeen-year-old Jamie Rouse, dressed in black, took a firearm to school and shot two teachers in the head, killing one, and killed another student while attempting to shoot the school's football coach. Rouse had told several of his classmates beforehand about his intentions, but none reported the conversations to authorities.
Less than two months later on February 2, 1996, in Moses Lake, Washington, fourteen-year-old Barry Loukaitis walked into a mathematics class wearing a long western coat. Under the coat he concealed two pistols, a high-powered rifle, and ammunition. Loukaitis killed two classmates and the teacher while wounding another student. He took the rest of the class hostage. Another teacher rushed Loukaitis, ending the standoff. Loukaitis, like Rouse, had shared thoughts of going on a shooting spree with another student. The same day Loukaitis attacked his fellow schoolmates, a sixteen-year-old in Atlanta, Georgia, shot and killed a teacher.
The April 1999 Columbine shooting spree and other occurrences of school violence triggered greater efforts to curb bullying in schools. Bullying, which includes a range of behavior including teasing and threats, exclusion from social activities, and more physical intimidation, has been widespread in American schools. It was often considered a normal part of growing up. When bullying repeatedly surfaced as a cause of deadly school violence through the 1990s, parents and schools took a renewed interest in the consequences of bullying and how to restrict it.
Studies in the 1990s showed that bullying was far from harmless and actually posed serious lasting effects. Victims of bullying suffered significant negative social and emotional development. In the short term victims suffered from low self-esteem, poor grades, few friends, and had school attendance problems. Such emotional problems as depression and anxiety could also develop and last a lifetime. In addition, those doing the bullying often progressed to more serious aggressive behavior when not confronted about their actions.
Schools responded with aggressive antibullying programs and instituted stricter rules and discipline. Discipline was enforced through monitoring student behavior in all parts of the school grounds by school staff. Some new school programs taught anger control, ways for a victim to cope with bullying, and overall greater appreciation of student diversity in a school. Police also became more interested in threatening behavior at schools. Many school districts that adopted these measures reported significant declines in aggressive behavior. Web sites about bullying and its effects were also created to help students as well as provide support to school staffs.
Shootings become more frequent
On February 19, 1997, a year after the Washington and Georgia shootings, sixteen-year-old Evan Ramsey in Bethel, Alaska, who was tired of being teased, took a shotgun to school. He killed a student and the principal and wounded two other students. He had previously told two fourteen-year-olds about his plan for retribution against those bullying him and the authority figures who had not protected him.
Later in 1997, on October 1, sixteen-year-old Luke Woodham in Pearl, Mississippi, stabbed his mother to death then went to school carrying a rifle and a pistol. There he killed a girlfriend who had just broken up with him, a second girl, and wounded seven others. When returning to his car for more ammunition he was charged and captured by the assistant principal. Again, other students knew of his plan but told no one.
Two months later on December 1, fourteen-year-old Michael Carneal of Paducah, Kentucky, carried a gun to school and fired on a small prayer group killing three girls and wounding five others. Carneal was intrigued with Satan worshiping and frequently dressed in black. Other students had heard him talk about wanting to shoot up the school. Police found a pistol, two rifles, two shotguns, and seven hundred rounds of ammunition.
The spring of 1998
The spring of 1998 turned out to be a very bloody time in U.S. school history. On March 24, eleven-year-old Andrew Golden and thirteen-year-old Mitchell Johnson set off a fire alarm at Westside Middle School in Jonesboro, Arkansas. They then fired on school staff and students as they evacuated the building. The two boys killed one adult and four children and wounded ten others on the school's playground.
One month later on April 24, fourteen-year-old Andrew Wurst carried a handgun to an eighth grade school graduation dance in Edinboro, Pennsylvania. He killed a teacher and wounded three others before being captured while fleeing.
On May 21, barely a month later, fifteen-year-old Kipland Kinkel opened fire on a crowded school cafeteria in the morning before classes began at Thurston High School in Springfield, Oregon. He killed two and wounded seven others. When police went to his home, they found his murdered parents, whom he had killed the previous day. The house was booby-trapped with several bombs including one placed under his mother's body.
The day he murdered his parents Kinkel had been expelled for bringing a firearm to school, but he had been released by police to his father's custody. Kinkel was small in stature and had dyslexia (a learning disability). He felt inferior to his academic parents and athletic older sister. Kinkel was routinely teased at school and felt detached from his schoolmates.
Columbine and beyond
On April 20, 1999, two students of Columbine High School in Littleton, Colorado, entered the school and killed thirteen, including a teacher, while wounding twenty-six. Seventeen-year-old Eric Harris and eighteen-year-old Dylan Klebold had planned their shooting rampage long in advance. The sixteen-minute shooting spree ended with the two shooters committing suicide. This was the bloodiest episode in school violence in U.S. history.
Harris and Klebold had an illegally modified semiautomatic handgun, two sawed-off shotguns, and ninety-seven explosive devices. The two had also planted bombs around the school, which police recovered without exploding. The two had even planned on escaping and hijacking an airplane and crashing it into New York City.
The two had also been members of a club called the "Trenchcoat Mafia." Its members wore long, heavy black trench coats. Two other persons were convicted and sent to prison for illegally supplying the modified handgun to Harris and Klebold. The shooting later inspired a controversial documentary in 2001 titled Bowling at Columbine. Written and directed by Michael Moore, the film explored the culture of violence, especially firearms, in the United States.
The Columbine tragedy triggered other school violence. The number of school bomb threats by students increased for a brief time, more youth began wearing long black trench coats, and Internet sites popped up praising the shooters at Columbine. School closures increased in response to threats through the brief remainder of the school year.
The Columbine shootings, in addition to previous events of school violence, finally led students to begin reporting potentially threatening situations. No longer were threats of violence by fellow students ignored or not taken seriously. On May 13, 1999, only a few weeks after the Columbine shootings, students at a middle school reported that four classmates were planning a massacre at their school and trying to recruit others to help. The four were arrested and tried as adults, charged with conspiracy to commit murder.
The violence, however, was not over yet. On May 20, Anthony Solomon, a sophomore at Heritage High School in Conyers, Georgia, opened fire on the last day of school, wounding six.
After a short lull in violence, school violence struck again. On February 5, 2001, three students who admired the Columbine shooters planned an attack on their school in Hoyt, Kansas. Others discovered the plans and turned them in. Police discovered bomb-making materials, a modified assault rifle, and a black trench coat. Police charged the students with conspiracy to commit aggravated arson.
One of the worst incidents to occur after Columbine came at Santana High School in Santee, California, on March 5, 2001. Tired of being teased for his short height, fifteen-year-old Charles Williams entered a crowded boys' bathroom in school and opened fire, killing two students and wounding thirteen. In addition to the handgun he took to school, police found seven rifles at his home.
Causes of school violence
The causes of school violence are complex and varied. Forensic psychologists who study criminal behavior believe school killers are very different from other violent youth, such as gang members or drug dealers. For whatever reason, they feel powerless and begin obsessing over killing or injuring others. They may make direct threats concerning those they feel are taunting or intimidating them. They often express these thoughts and plans to fellow students. In general, other students tend to ignore the comments or simply look the other way.
The decision to kill for these youth is not a sudden occurrence, but coldly planned. Use of guns gives them the power they felt deprived of, and makes those offending them powerless. In addition, the shooters become famous with their faces splashed across televisions screens nationwide. The violent outbreak turns the tables and gives them both the power and attention they seek. This type of offender is almost always male; females approach retribution in less direct ways, such as hiring classmates or others to kill those they wish to strike out against.
Each case may represent a unique combination of factors. Some are physical, some behavioral, and others are learned. Physical factors can include birth complications. For example, being deprived of oxygen during the birth process can lead to brain dysfunction and learning disabilities. Violent behavior has been linked to certain forms of these abnormalities. Similarly, head injuries have been shown to increase the potential for violent behavior in certain individuals.
Behavioral problems can be linked to a difficult personality, which leads to problems of interacting with others, impulsiveness, and being unable to conform. These children may not blend into school activities and become ignored and rebellious. Some become depressed and take medication that can produce serious behavioral side effects. Broken family relationships can also be a major factor. Harshly treated children are more likely to behave violently later in life.
Being bullied or teased by others can often lead a troubled youth to violent revenge or retribution. This factor showed up repeatedly in the school shootings of the 1990s and beyond. It received the most attention from school administrators and others in the early twenty-first century.
Learning violent behavior can come from a dysfunctional or abnormal home life, perhaps involving domestic abuse or parents who do not respond well to authority figures such as the police. From this type of home environment, youth learn to react to authority such as teachers or school officials with aggression. Some believe learned violent behavior also comes from repeated exposure to violence in the media such as music lyrics, Hollywood movies, television programs, video games, and 24-hour news stations broadcasting violent or graphic scenes. Studies showed that youth exposed to an overwhelming amount of such material became more aggressive and no longer upset by violence and its consequences. These kids, it is believed, have trouble distinguishing between reality and fantasy.
Schools themselves have changed a great deal since the 1950s, and by the later twentieth century they brought a wide range of students together from often markedly different social environments. Differences appear in attitudes and behavior that can lead to social cliques or racial tensions. A major change was the emergence of gangs, which doubled between 1989 and 1993. Gang activity within schools included recruiting new members, which often led to school violence as part of initiation. In addition, illegal activities in the vicinity of the school increased, such as selling drugs and firearms.
Yet another major factor in the rise of deadly school violence was the easy availability of firearms and other weapons. Estimates in the 1990s on the number of weapons brought to school on a daily basis were staggering. The number of guns brought into schools on any given day ranged up to over 250,000 and the number of knives more than double that figure.
Effects of school violence
The effects of school violence have been extensive. Many students stayed at home out of fear in the late 1990s than ever before. Schools were no longer viewed as safe havens for the nation's children. The increased presence of police, metal detectors, and intervention programs have become daily reminders of school violence.
The thousands of students directly exposed to school violence, both the highly publicized multiple homicides and the less publicized episodes of threats and standoffs that did not lead to actual injury or death, can suffer from posttraumatic stress disorder. This condition can cause depression, anger, and anxiety. Overall, the ability for youth to learn and schools to effectively teach are greatly affected by school violence.
To prevent school violence, schools have looked at not just focusing on at-risk youth, but have also attempted to change the social climate and culture of their facilities. Teachers and administrators joined with parents, students, police, and the local community to help maintain a safe atmosphere in their schools. Safety programs were remodeled to be much more responsive not just to outbreaks of violence, but also to the signs of potential problems. Early warning signs were identified, including extreme or uncontrolled anger, knowledge of a student's illegal possession or access to firearms, students suffering the effects of extreme poverty, those targeted or making racist remarks, students with a low interest in school, and violence at home.
Classes were also added to educated students on human diversity and socializing. Thirteen state legislatures passed laws to aid in reducing school violence. Mental health requirements were added to some school curricula and in others funding was provided to increase the capabilities of existing mental health services.
More physical measures were adopted as well. Some schools installed metal detectors and security cameras. Police
became a common sight on school property. Many schools adopted a "zero tolerance" policy for weapons, issuing automatic suspensions and even expulsions to students who brought weapons to school.
Alternate education programs have also been established for students who are unruly or incapable of blending in to local public schools. Programs have been added to help at-risk students, which usually involve mentoring and how to peacefully resolve disputes or problems. In general, schools have made a concentrated effort to reach every student, help with their social skills, and set expectations of academic performance.
For More Information
Bonilla, Denise M., ed. School Violence. New York: H. W. Wilson, 2000.
Coloroso, Barbara. The Bully, the Bullied, and the Bystander: From Pre-School to High School, How Parents and Teachers Can Help Break the Cycle of Violence. New York: HarperResource, 2003.
Flannery, Daniel, and C. Ronald Huff, eds. Youth Violence: Prevention, Intervention, and Social Policy. Washington, DC: American Psychiatric Press, 1999.
Garbarino, James. Lost Boys: Why Our Sons Turn Violent and How We Can Save Them. New York: Free Press, 1999.
Heide, Kathleen M. Young Killers: The Challenge of Juvenile Homicide. New York: Sage, 1998.
Kelleher, Michael D. When Good Kids Kill. Westport, CT: Praeger, 1998.
Smith, Helen. The Scarred Heart: Understanding and Identifying Kids Who Kill. Knoxville, TN: Callisto, 2000.
Bullying.org.http://www.bullying.org (accessed on August 20, 2004).
The National Campaign to Prevent School Violence.http://www.ribbonofpromise.org (accessed August 20, 2004).
North Carolina Department of Juvenile Justice and Delinquency Prevention, Center for the Prevention of School Violence.http://www.ncdjjdp.org/cpsv/ (accessed on August 20, 2004).
"School Violence." Constitutional Rights Foundation.http://www.crf-usa.org/violence/intro.html (accessed on August 20, 2004). | <urn:uuid:d7523934-5d40-455c-bd34-2ff302456c67> | CC-MAIN-2022-33 | https://www.encyclopedia.com/law/encyclopedias-almanacs-transcripts-and-maps/school-violence | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00497.warc.gz | en | 0.978636 | 4,183 | 3.359375 | 3 |
|Died||March 19, 1984 (aged 56)|
Eileen Adele Hale
Garry Winogrand (14 January 1928 – 19 March 1984) was an American street photographer, known for his portrayal of U.S. life and its social issues, in the mid-20th century. Photography curator, historian, and critic John Szarkowski called Winogrand the central photographer of his generation.
He received three Guggenheim Fellowships to work on personal projects, a fellowship from the National Endowment for the Arts, and published four books during his lifetime. He was one of three photographers featured in the influential New Documents exhibition at Museum of Modern Art in New York in 1967 and had solo exhibitions there in 1969, 1977, and 1988. He supported himself by working as a freelance photojournalist and advertising photographer in the 1950s and 1960s, and taught photography in the 1970s. His photographs featured in photography magazines including Popular Photography, Eros, Contemporary Photographer, and Photography Annual.
Critic Sean O'Hagan wrote in 2014 that in "the 1960s and 70s, he defined street photography as an attitude as well as a style – and it has laboured in his shadow ever since, so definitive are his photographs of New York"; and in 2010 that though he photographed elsewhere, "Winogrand was essentially a New York photographer: frenetic, in-your-face, arty despite himself." Phil Coomes, writing for BBC News in 2013, said "For those of us interested in street photography there are a few names that stand out and one of those is Garry Winogrand, whose pictures of New York in the 1960s are a photographic lesson in every frame."
In his lifetime Winogrand published four monographs: The Animals (1969), Women are Beautiful (1975), Public Relations (1977) and Stock Photographs: The Fort Worth Fat Stock Show and Rodeo (1980). At the time of his death his late work remained undeveloped, with about 2,500 rolls of undeveloped film, 6,500 rolls of developed but not proofed exposures, and about 3,000 rolls only realized as far as contact sheets being made.
Early life and education
Winogrand's parents, Abraham and Bertha, emigrated to the US from Budapest and Warsaw. Garry grew up with his sister Stella in a predominantly Jewish working-class area of the Bronx, New York, where his father was a leather worker in the garment industry, and his mother made neckties for piecemeal work.
Winogrand graduated from high school in 1946 and entered the US Army Air Force. He returned to New York in 1947 and studied painting at City College of New York and painting and photography at Columbia University, also in New York, in 1948. He also attended a photojournalism class taught by Alexey Brodovitch at The New School for Social Research in New York in 1951.
Winogrand worked as a freelance photojournalist and advertising photographer in the 1950s and 1960s. Between 1952 and 1954 he freelanced with the PIX Publishing agency in Manhattan on an introduction from Ed Feingersh, and from 1954 at Brackman Associates.
Winogrand's beach scene of a man playfully lifting a woman above the waves appeared in the 1955 The Family of Man exhibition at the Museum of Modern Art (MoMA) in New York which then toured the world to be seen by 9 million visitors. His first solo show was held at Image Gallery in New York in 1959. His first notable exhibition was in Five Unrelated Photographers in 1963, also at MoMA in New York, along with Minor White, George Krause, Jerome Liebling, and Ken Heyman.
In 1966 he exhibited at the George Eastman House in Rochester, New York with Friedlander, Duane Michals, Bruce Davidson, and Danny Lyon in an exhibition entitled Toward a Social Landscape, curated by Nathan Lyons. In 1967 his work was included in the "influential" New Documents show at MoMA in New York with Diane Arbus and Lee Friedlander, curated by John Szarkowski.
His photographs of the Bronx Zoo and the Coney Island Aquarium made up his first book The Animals (1969), which observes the connections between humans and animals. He took many of these photos when, as a divorced father, accompanying his young children to the zoo for amusement.
He was awarded his second Guggenheim Fellowship in 1969 to continue exploring "the effect of the media on events", through the then novel phenomenon of events created specifically for the mass media. Between 1969 and 1976 he photographed at public events, producing 6,500 prints for Papageorge to select for his solo exhibition at MoMA, and book, Public Relations (1977).
In 1975, Windogrand's high-flying reputation took a self-inflicted hit. At the height of the feminist revolution, he produced Women Are Beautiful, a much-panned photo book that explored his fascination with the female form. "Most of Winogrand’s photos are taken of women in either vulgar or at least, questionable positions and seem to be taken unknown to them," says one critic. "This candid approach adds an element of disconnect between the viewer and the viewed, which creates awkwardness in the images themselves."
He supported himself in the 1970s by teaching, first in New York. He moved to Chicago in 1971 and taught photography at the Institute of Design, Illinois Institute of Technology between 1971 and 1972. He moved to Texas in 1973 and taught in the Photography Program in the College of Fine Arts at the University of Texas at Austin between 1973 and 1978. He moved to Los Angeles in 1978.
Szarkowski, the Director of Photography at New York's MoMA, became an editor and reviewer of Winogrand's work.
"Being married to Garry was like being married to a lens," Lubeau once told photography curator Trudy Wilner Stack. Indeed, "colleagues, students and friends describe an almost obsessive picture-taking machine."
Death and legacy
Winogrand was diagnosed with gallbladder cancer on 1 February 1984 and went immediately to the Gerson Clinic in Tijuana, Mexico, to seek an alternative cure ($6,000 per week in 2016). He died on 19 March, at age 56. He was interred at Mount Moriah Cemetery in Fairview, New Jersey.
At the time of his death his late work remained largely unprocessed, with about 2,500 rolls of undeveloped film, 6,500 rolls of developed but not proofed exposures, and about 3,000 rolls only realized as far as contact sheets being made. In total he left nearly 300,000 unedited images.
The Garry Winogrand Archive at the Center for Creative Photography (CCP) comprises over 20,000 fine and work prints, 20,000 contact sheets, 100,000 negatives and 30,500 35 mm colour slides as well as a small number of Polaroid prints and several amateur and independent motion picture films. Some of his undeveloped work was exhibited posthumously, and published by MoMA in the overview of his work Winogrand, Figments from the Real World (2003).
Yet more from his largely unexamined archive of early and late work, plus well known photographs, were included in a retrospective touring exhibition beginning in 2013 and in the accompanying book Garry Winogrand (2013). Photographer Leo Rubinfien who curated the 2013 retrospective at the San Francisco Museum of Modern Art felt that the purpose of his show was to find out, "...was Szarkowski right about the late work?” Szarkowski felt that Winogrand's best work was finished by the early 1970s. Rubinfien thought, after producing the show and in a shift from his previous estimation of 1966 to 1970, that Winogrand was at his best from 1960 to 1964.
All of Winogrand's wives and children attended a retrospective exhibit at the San Francisco Art Museum after his death. On display was a 1969 letter from Judith Teller, Winogrand's second wife:
But my analyst bill is not even relevant at this point. What is extremely relevant is the money you owe the government in back taxes. Your inability to pay the rent on time. Your constantly running out of money. Your credit rating. And most of all, your flippant, irresponsible, nonsensical attitude toward all these very real problems. (‘I’ll wait till the government catches up with me. Why should I pay them any money now?’) You seem incapable of exercising your mind in any cogent way.
Szarkowski called Winogrand the central photographer of his generation. Frank Van Riper of the Washington Post described him as "one of the greatest documentary photographers of his era" but added that he was "a bluntspoken, sweet-natured native New Yorker, who had the voice of a Bronx cabbie and the intensity of a pig hunting truffles." Critic Sean O'Hagan wrote in The Guardian in 2014 that in "the 1960s and 70s, he defined street photography as an attitude as well as a style – and it has laboured in his shadow ever since, so definitive are his photographs of New York"; and in 2010 in The Observer that though he photographed elsewhere, "Winogrand was essentially a New York photographer: frenetic, in-your-face, arty despite himself." Phil Coomes, writing for BBC News in 2013, said "For those of us interested in street photography there are a few names that stand out and one of those is Garry Winogrand, whose pictures of New York in the 1960s are a photographic lesson in every frame."
- 1969: The Animals, Museum of Modern Art, New York.
- 1972: Light Gallery, New York.
- 1975: Women are Beautiful, Light Gallery, New York.
- 1977: Light Gallery, New York.
- 1977: The Cronin Gallery, Houston.
- 1977: Public Relations, Museum of Modern Art, New York.
- 1979: The Rodeo, Allan Frumkin Gallery, Chicago.
- 1979: Greece, Light Gallery, New York.
- 1980: University of Colorado Boulder.
- 1980: Garry Winogrand: Retrospective, Fraenkel Gallery, San Francisco.
- 1980: Galerie de Photographie, Bibliothèque nationale de France, Paris.
- 1981: The Burton Gallery of Photographic Art, Toronto.
- 1981: Light Gallery, New York.
- 1983: Big Shots, Photographs of Celebrities, 1960–80, Fraenkel Gallery, San Francisco.
- 1984: Garry Winogrand: A Celebration, Light Gallery, New York.
- 1984: Women are Beautiful, Zabriskie Gallery, New York.
- 1984: Recent Works, Houston Center for Photography, Texas.
- 1985: Williams College Museum of Art, Williamstown, Massachusetts.
- 1986: Twenty Seven Little Known Photographs by Garry Winogrand, Fraenkel Gallery, San Francisco.
- 1988: Garry Winogrand, Museum of Modern Art. Retrospective.
- 2001: Winogrand's Street Theater, Rencontres d'Arles festival, Arles, France.
- 2013/2014: Garry Winogrand, San Francisco Museum of Modern Art, San Francisco, March–June 2013 and toured; National Gallery of Art, Washington, D.C., March–June 2014; Metropolitan Museum of Art, New York, June–September 2014; Galerie nationale du Jeu de Paume, Paris, October 2014 – February 2015.
- 2019: Garry Winogrand: Color, Brooklyn Museum, Brooklyn, NY, May–December 2019.
- 1955: The Family of Man, The Museum of Modern Art, New York.
- 1957: Seventy Photographers Look at New York, The Museum of Modern Art, New York.
- 1963: Photography '63, George Eastman House, Rochester, New York.
- 1964: The Photographer's Eye, Museum of Modern Art, New York. Curated by John Szarkowski.
- 1966: Toward a Social Landscape, George Eastman House, Rochester, NY. Photographs by Winogrand, Bruce Davidson, Lee Friedlander, Danny Lyon, and Duane Michals. Curated by Nathan Lyons.
- 1967: New Documents, Museum of Modern Art, New York with Diane Arbus and Lee Friedlander, curated by John Szarkowski.
- 1969: New Photography USA, Traveling exhibition prepared for the International Program of Museum of Modern Art, New York.
- 1970: The Descriptive Tradition: Seven Photographers, Boston University, Massachusetts.
- 1971: Seen in Passing, Latent Image Gallery, Houston.
- 1975: 14 American Photographers, Baltimore Museum of Art, Maryland.
- 1976: The Great American Rodeo, Fort Worth Art Museum, Texas.
- 1978: Mirrors and Windows: American Photography since 1960, Museum of Modern Art, New York.
- 1981: Garry Winogrand, Larry Clark and Arthur Tress, G. Ray Hawkins Gallery, Los Angeles.
- 1981: Bruce Davidson and Garry Winogrand, Moderna Museet / Fotografiska, Stockholm, Sweden.
- 1981: Central Park Photographs: Lee Friedlander, Tod Papageorge and Garry Winogrand, The Dairy in Central Park, New York, 1980.
- 1983: Masters of the Street: Henri Cartier-Bresson, Josef Koudelka, Robert Frank and Garry Winogrand, University Gallery, University of Massachusetts Amherst.
Winogrand's work is held in the following public collections:
- Art Institute of Chicago, Chicago, IL
- George Eastman Museum, Rochester, NY
- Museum of Modern Art, New York
- Whitney Museum of American Art, New York
- 1964, 1969, 1979: Guggenheim Fellowship from the John Simon Guggenheim Memorial Foundation
- 1975: Fellowship from the National Endowment for the Arts
Publications by Winogrand
- The Animals. New York, NY: Museum of Modern Art, 1969. ISBN 9780870706332.
- Women are Beautiful. New York, NY: Light Gallery; New York, NY: Farrar, Straus and Giroux, 1975. ISBN 9780374513016.
- Public Relations. New York, NY: Museum of Modern Art, 1977. ISBN 9780870706325.
- Stock Photographs: The Fort Worth Fat Stock Show and Rodeo. Minnetonka, MN: Olympic Marketing Corp, 1980. ISBN 9780292724334.
- Figments from the Real World. New York, NY: Museum of Modern Art, 1988. ISBN 9780870706400. A retrospective, published to accompany an exhibition at the Museum of Modern Art and which travelled. Reproduces work from each of Winogrand's previous books, along with unpublished work, plus 25 images chosen from the work Winogrand left unedited at the time of his death.
- The Man in the Crowd: The Uneasy Streets of Garry Winogrand. San Francisco, CA: Fraenkel Gallery, 1998. ISBN 9781881337058. With an introduction by Fran Lebowitz and an essay by Ben Lifson. More than half of the images are previously unpublished.
- El Juego de la Fotografía = The Game of Photography. Madrid: TF, 2001. ISBN 9788495183668. Text in English and Spanish. A retrospective. "Published to accompany an exhibition at Sala del Canal de Isabel II, Madrid, Nov.-Dec. 2001 and at three other institutions through June of 2002."
- Winogrand 1964: Photographs from the Garry Winograd Archive, Center for Creative Photography, the University of Arizona. Santa Fe, NM: Arena, 2002. Edited by Trudy Wilner Stack. ISBN 9781892041623.
- Arrivals & Departures: The Airport Pictures of Garry Winogrand. Edited by Alex Harris and Lee Friedlander and with texts by Alex Harris ('The Trip of our Lives') and Lee Friedlander ('The Hair of the Dog').
- Garry Winogrand.
- San Francisco, CA: San Francisco Museum of Modern Art; New Haven, CT: Yale University Press, 2013. ISBN 978-0-300-19177-6. Edited by Leo Rubinfien. Introduction by Rubinfien, Erin O'Toole and Sarah Greenough, and essays by Rubinfien ('Garry Winogrand's Republic'), Greenough ('The Mystery of the Visible: Garry Winogrand and Postwar American Photography'), Tod Papageorge ('In the City'), Sandra S. Phillips ('Considering Winogrand Now') and O'Toole ('How much Freedom can you Stand? Garry Winogrand and the Problem of Posthumous Editing').
- Paris: Jeu De Paume; Paris: Flammarion, 2014. ISBN 9782081342910. French-language version.
- Madrid: Fundación Mapfre, 2015. ISBN 978-8498445046. Spanish-language version.
Publications paired with others
- Winogrand / Lindbergh: Women. Cologne: Walther Konig, 2017. ISBN 978-3960980261. Photographs from Women Are Beautiful (1975) by Winogrand and On Street by Peter Lindbergh, plus other color photographs by Winogrand. With a short essay by Joel Meyerowitz on Winogrand, and by Ralph Goetz on Lindbergh. Published on the occasion of the exhibition Peter Lindbergh / Garry Winogrand: Women on Street at Kulturzentrum NRW-Forum, Düsseldorf, 2017. Text in English and German.
Contributions to publications
- Looking at Photographs: 100 Pictures from the Collection of The Museum of Modern Art. New York: Museum of Modern Art, 1973. ISBN 978-0-87070-515-1. By John Szarkowski.
- Grundberg, Andy (21 March 1984). "Garry Winogrand, Innovator in Photography". The New York Times. Retrieved 31 January 2015.
- "The Animals" (PDF). Museum of Modern Art. Retrieved 31 January 2015.
- Woodward, Richard (13 May 2013). "Garry Winogrand and the Art of the Opening". The Paris Review. Retrieved 31 January 2015.
- "Garry Winogrand". John Simon Guggenheim Memorial Foundation. Retrieved 26 December 2014.
- O'Hagan, Sean (15 October 2014). "Garry Winogrand: the restless genius who gave street photography attitude". The Guardian. Retrieved 17 January 2015.
- O'Hagan, Sean (18 April 2010). "Why street photography is facing a moment of truth". The Observer. Retrieved 15 February 2015.
- Coomes, Phil (11 March 2013). "The photographic legacy of Garry Winogrand". BBC News. Retrieved 17 January 2015.
- Andy Greaves. "Andy Greaves Photography Blog – Gary Winogrand". Archived from the original on 2012-04-26. Retrieved 2011-11-29.
- "Michael Hoppen Gallery – Garry Winogrand". Archived from the original on 2011-11-29. Retrieved 2011-11-28.
- Steichen, Edward; Sandburg, Carl; Norman, Dorothy; Lionni, Leo; Mason, Jerry; Stoller, Ezra; Museum of Modern Art (New York) (1955). The family of man: The photographic exhibition. Published for the Museum of Modern Art by Simon and Schuster in collaboration with the Maco Magazine Corporation.
- Hurm, Gerd, 1958-, (editor.); Reitz, Anke, (editor.); Zamir, Shamoon, (editor.) (2018), The family of man revisited : photography in a global age, London I.B.Tauris, ISBN 978-1-78672-297-3
|author1=has generic name (help)CS1 maint: multiple names: authors list (link)
- Sandeen, Eric J (1995), Picturing an exhibition : the family of man and 1950s America (1st ed.), University of New Mexico Press, ISBN 978-0-8263-1558-8
- "Five Unrelated Photographers". The Museum of Modern Art. Retrieved 2019-04-22.
- Peres, Michael (2014). The Concise Focal Encyclopedia of Photography: From the First Photo on Paper to the Digital Revolution. CRC Press. p. 116. ISBN 9781136101823.
- Grimes, William (1 September 2016). "Nathan Lyons, Influential Photographer and Advocate of the Art, Dies at 86". The New York Times. ISSN 0362-4331. Retrieved 2020-09-08.
- "Garry Winogrand – Bio". Archived from the original on 2011-11-04. Retrieved 2011-11-29.
- "Museum of Contemporary Photography". www.mocp.org. Retrieved 2019-04-21.
- Winogrand, Garry (1977). Public Relations. New York, NY: Museum of Modern Art. ISBN 0-292-72433-0.
- "Winogrand's Women Are Beautiful". www.worcesterart.org. Retrieved 2019-04-21.
- O.C. Garza. "Class Time with Garry Winogrand" (PDF). Retrieved 2011-11-28.
- "Garry Winogrand". Artnet. Retrieved 2011-11-29.
- "American Suburb X – introduction to Garry Winogrand for 'Streetwise – A Look at Garry Winogrand' article". Retrieved 2011-11-29.
- Winogrand, Garry (1980). Stock Photographs: The Fort Worth Fat Stock Show and Rodeo. Minnetonka, MN: Olympic Marketing Corp. ISBN 0-292-72433-0.
- Van Riper, Frank. "Camera Works: Photo Essay". www.washingtonpost.com. Retrieved 22 April 2019.
- "Judy Teller, Wife of Garry Winogrand, New York City". portlandartmuseum.us. Retrieved 2019-04-21.
- "Garry Winogrand: All Things are Photographable". American Masters. 13 March 2019. Retrieved 2019-04-21.
- Winogrand, Garry; John Szarkowski (2003). figments from the real world. Museum of Modern Art, New York. ISBN 0-87070-635-7. Archived from the original on 2012-04-26.
Winogrand and Judy Teller were separated in 1969, and their marriage was annulled the next year. Late in 1969 he had met Eileen Adele Hale; they married in 1972
- "The Gerson Clinic in Mexico". Gerson Institute. Retrieved January 17, 2019.
- Jerry Saltz (August 10, 2014), New York Magazine.
- "Garry Winogrand (1928-1984) - Find A Grave..." www.findagrave.com. Retrieved 2021-07-18.
- Ruoff, J. K. (1991). Home Movies of the Avant-Garde: Jonas Mekas and the New York Art World. Cinema Journal, 6–28.
- Michael David Murphy. "Winogrand Archives". Retrieved 2011-11-29.
- Loos, Ted (May 2, 2013). "Revisiting Some Well-Eyed Streets". The New York Times. Retrieved August 23, 2018.
- Woodward, Richard B. (13 May 2013). "Garry Winogrand and the Art of the Opening". Retrieved 2019-04-22.
- Coomes, Phil (11 March 2013). "The photographic legacy of Garry Winogrand". BBC News. Retrieved 17 January 2015.
- "Retrospective". Fraenkel Gallery. Retrieved 2020-09-09.
- "Celebrities 1960 – 1980". Fraenkel Gallery. Retrieved 2020-09-09.
- Grundberg, Andy (23 December 1984). "Photography View; Life Seized on the Fly". The New York Times. ISSN 0362-4331. Retrieved 2020-09-09.
- "Twenty Seven Little Known Photographs by Garry Winogrand". Fraenkel Gallery. Retrieved 2020-09-09.
- "Major Garry Winogrand Retrospective Opens at the Museum of Modern Art" (PDF). Museum of Modern Art. Retrieved 31 January 2015.
- "Garry Winogrand ", San Francisco Museum of Modern Art. Accessed 7 November 2014.
- "Garry Winogrand", National Gallery of Art. Accessed 7 November 2014.
- "Garry Winogrand", Metropolitan Museum of Art. Accessed 7 November 2014.
- "Garry Winogrand", Galerie nationale du Jeu de Paume. Accessed 7 November 2014.
- "Brooklyn Museum: Garry Winogrand: Color". www.brooklynmuseum.org. Retrieved 2019-10-10.
- Cotter, Holland (3 July 2014). "No Moral, No Uplift, Just a Restless 'Click': 'Garry Winogrand,' a Retrospective at the Metropolitan Museum". The New York Times. Retrieved 28 December 2014.
- "No. 20" (PDF). Museum of Modern Art. Retrieved 31 January 2015.
- Gefter, Philip (9 July 2007). "John Szarkowski, Eminent Curator of Photography, Dies at 81". The New York Times. ISSN 0362-4331. Retrieved 2020-09-08.
- "Was John Szarkowski the most influential person in 20th-century photography?". The Guardian. 20 July 2010. Retrieved 2020-09-08.
- Kramer, Hilton (23 July 1978). "Cover: Photographs by Helen Levitt and Marl: Cohen / Picture credits, Page". The New York Times. ISSN 0362-4331. Retrieved 2020-09-08 – via NYTimes.com.
- "Garry Winogrand," Art Institute of Chicago, https://www.artic.edu/collection?artist_ids=Garry+Winogrand
- "America Seen". George Eastman Museum. Archived from the original on 9 December 2000. Retrieved 1 September 2016.
- "Garry Winogrand (American, 1928–1984)". Museum of Modern Art. Retrieved 8 February 2015.
- "Houston, Texas, 1977 from Women are Better than Men". Whitney Museum of American Art. Retrieved 25 June 2015.
- "Figments from the Real World". Retrieved 2020-07-06.
- The Street Philosophy of Garry Winogrand. By Geoff Dyer. Austin: University of Texas Press, 2018. ISBN 978-1477310335.
- 'Garry Winogrand at Rice University' – Winogrand talking to students (1 hr 46 m video)
- 'Garry Winogrand with Bill Moyers, 1982' – video and transcript of Winogrand describing his practice
- 'An Interview with Garry Winogrand' – transcript of a video interview 'Visions and Images: American Photographers on Photography, Interviews with photographers by Barbara Diamonstein, 1981–1982'
- 'Coffee and Workprints: My Street Photography Workshop With Garry Winogrand' – Mason Resnick describes attending one of Winogrand's photography workshops
- Photographs of Winogrand's Leica M4 at CameraQuest | <urn:uuid:e5597337-1fe4-4031-9ffa-0533788e0d54> | CC-MAIN-2022-33 | https://en.wikipedia.org/wiki/Garry_Winogrand | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00497.warc.gz | en | 0.85426 | 6,158 | 2.578125 | 3 |
LLC vs Inc: Which Is Better?
All products and services featured are independently selected by WikiJob. When you register or purchase through links on this page, we may earn a commission.
Whether you choose to set up your business as a limited liability company (LLC) or corporation (Inc), formal registration can be an onerous task.
Before deciding on your business setup, it is important to consider which type of business entity will best suit the needs of your organization.
Your chosen business structure will determine the legal formalities you must comply with.
A business cannot be both an LLC and a corporation. However, it is possible to change the designation of your business by completing the appropriate government paperwork.
You might choose to do this if your business is undergoing growth or significant change.
In this article, you can find out about the key differences between LLCs and corporations. Armed with this knowledge, you can make an informed decision about which business setup option will work best for your company.
An LLC is a privately held company, owned by its members. It combines the features of a partnership firm and corporation, including flow-through income taxation and limited liability.
Compared to a corporation, running a business as an LLC tends to offer more flexibility.
LLCs are set up under the jurisdiction of state law. The process for setting up an LLC slightly differs depending on which state you are filing the application in.
You will usually be expected to file articles of organization with the Secretary of State, although some states will allow you to complete the registration documents online.
In some states, you will be expected to file a public notice; this is usually published in local state newspapers.
Working under the jurisdiction of state law can be complex, particularly for LLCs running operations in more than one state.
Differences in rules and regulations between states can generate extra paperwork and lead to inconsistent decisions or processes across the different sites.
An LLC can have an unlimited number of members or owners.
The members of an LLC do not own shares. Instead, they own a percentage of the company, which is often referred to as ‘membership interest’.
Trading is private, which means that shares of an LLC cannot be sold to the public. As such, an LLC cannot raise funds from the market through shares.
Transferring membership within an LLC can be more complex than transferring shares within a corporation.
It is important for LLCs to set up an operating agreement to outline:
- The roles, rights and responsibilities of each member
- How membership interest can be transferred between members. In some states, the LLC will be dissolved if an LLC member leaves and the operating agreement does not set out an acceptable process for the transfer of membership interest. It is important to establish a process to follow in the event of a member buyout or the death of a member.
- Arrangements for the allocation of profits and losses
While setting up an operating agreement is not compulsory, if an LLC does not have one, it will be governed according to the default rules set out within state statutes.
An LLC can either be managed by the members or a team of managers. Within a member-managed LLC, the owners are usually involved in the day-to-day running of the business.
In a manager-managed LLC, the members do not tend to have an active role in business operations.
LLCs are not considered to be separate vehicles for tax purposes by the Internal Revenue Service (IRS). This means that they offer better flexibility in terms of taxation.
Members can decide whether they want the business to be considered as a sole proprietor, partnership or corporation for tax purposes.
LLC profits and losses must be reported by the owners of the business via their annual tax return. The owners’ personal property is protected from business obligations and debts.
A single-member LLC will be taxed as a sole proprietorship, whilst a multi-member LLC will be taxed as a partnership.
This means that LLC members must report and pay tax on business income via their personal tax returns.
Unlike corporate shareholders, LLC members may also be required to pay self-employment taxes to cover Social Security and Medicare.
The legal requirements for an LLC are minimal. It is not necessary to hold meetings, keep minutes or log resolutions put in place by the company.
Annual General Meetings (AGMs) are optional and an annual report is not required by law.
‘Inc.’ is an abbreviation of the word ‘incorporated’. It is used as a suffix, following the name of a business corporation. When compared with an LLC, a corporation is less flexible in some areas.
Many different organizations choose to run as a corporation, including for-profit, not-for-profit, public and private companies.
To form a corporation, you must submit a certificate or articles of incorporation. This will include information such as:
- Company objective
- Stock information
The corporation name is divided into three sections:
- Distinctive element
- Descriptive element
- Legal ending
The owners of a corporation are referred to as 'shareholders'.
Shares can be easily transferred between owners, so a corporation is a good option if you want to encourage external investment or sell public stocks.
The corporation refers to an artificial individual, or a separate legal entity that is independent of its members.
A corporation has its own rights and obligations, holds property in its own name and has limited liability.
When considering excess profits, corporations can offer better flexibility than LLCs. In an LLC, all income passes to its members, whereas an S Corp can pass income and losses on to its shareholders.
In the US, a corporation can be classified as either an S Corp or a C Corp for taxation purposes.
By default, all corporations are taxed as C Corps. This means that they must pay federal income tax on any corporate profits, after which the shareholders must also pay tax on dividends.
However, a corporation with 100 or fewer shareholders may be able to avoid double taxation by electing to be taxed as an S Corp.
An S Corp is not required to pay corporate income tax; however the company profits must pass through the shareholders’ tax returns.
Every shareholder will be required to pay tax on their share of the overall profits. In some cases, shareholders may be eligible to receive tax-free dividends.
Management of a corporation tends to be rigid, with standard operating procedures in place. A board of directors will be elected to set out policies and oversee business operations.
The day to day activities of the business are managed by officers.
Within a small corporation, one person might be responsible for several different roles, including shareholder, officer and director.
In a larger corporation, shareholders are unlikely to be involved in the day-to-day running of the business.
Corporations must set bylaws to outline the rights and responsibilities of all parties, from officers to directors.
Corporations are legally required to hold an AGM, giving adequate written notice of the meeting date to all shareholders. Corporations must also compile an annual report. In many states, filing this report will incur a fee.
The key differences between an LLC and a corporation relate to ownership, management, taxation, record-keeping and reporting.
Whether you choose to set up an LLC or a corporation, you will need to file official documentation with the state.
Both types of business entities will protect you from liability for business obligations.
Cost and ease of setup – Setting up an LLC tends to be more straightforward and cheaper when compared with incorporating.
Legal formalities – In general, a corporation is expected to comply with more regulations and requirements than an LLC. Corporations have to work in line with clear legal formalities and record-keeping requirements. For LLCs, legal formalities and record-keeping are less structured.
Tax – As an LLC, your business will benefit from pass-through taxation. This means that the LLC's income is taxable through its owners after it has been distributed. Corporations are double taxed, first at corporate level, then at individual level after the profit has been distributed to shareholders as dividends.
Corporate reporting – This is simpler for an LLC as no annual report is required; therefore, it is not necessary to create a balance sheet.
Company dissolution — If you want to dissolve your LLC company for any reason, there are fewer obstacles to overcome if you want to make changes to the business model.
Small businesses – If you have a small business or startup, an LLC may be the best option. This is especially true if you expect small profits or incur a loss. If the business does incur a loss, these can be recorded on the owner’s personal tax return to reduce their tax burden whilst they invest in the business.
Medium businesses – Medium-sized businesses are best suited to running as an S Corp, whereas larger businesses are best suited to running as a C Corp.
Large businesses – If you have a larger business and you want to benefit from movable shares, a corporation offers better prospects for profitability and growth. If you plan to grow your small business or you want to make shares available to the public, a corporation might also be the better choice.
Investment in the business – If you are looking for investment into your business, running it as a C Corp is often the best choice.
Number of members – There is no maximum number of members allowed in an LLC. However, an S Corp can only have up to 100 members.
Both LLC and Inc. are classifications of companies that are filed with the state and both separate owners from the business in terms of liability.
In an LLC, owners are ‘members’ and they have a designated percentage of the business. They pay tax through personal tax accounts rather than through corporate taxation, and are adaptable and flexible in terms of management, with less formal requirements for record-keeping.
A Corporation has owners which are shareholders. A C-Corp must pay corporate tax on profits, as well as the shareholders paying tax on dividends, whereas an S-Corp doesn’t need to pay corporate income tax. Corporations have to have a board of directors to make decisions and set policies, with bylaws and must provide annual reports as well as hold yearly shareholders' meetings.
An LLC is more flexible with less regulation and requirements and easier tax returns.
They allow owners to be more ‘hands-on’ with the business, operating as managers within the company and directing operations without the oversight and control of a board of directors.
The LLC is considered a more modern concept, which is perfect for a small business or an entrepreneur.
In most cases, forming an LLC is the best option for a small business, because it is inexpensive and easy to form.
LLCs are adaptable and flexible, which makes them easier to manage and maintain over a longer period – and they can become corporations later if that becomes more suitable for the business model.
When deciding whether an LLC is the right choice for your business, it is useful to look at the pros and cons.
An LLC is flexible, and it can be treated by the IRS as a sole proprietorship or a partnership, as well as an S or C Corporation. It generally costs less to file a business as an LLC with the state, and they can have an unlimited number of owners (members).
Although taxation is through the personal tax report of the members, and reporting is usually simpler than a corporation, members are still protected from liability in debts and legal issues associated with the business. LLC members can receive revenues that are bigger than their percentage of the business.
A member of an LLC cannot be paid a salary – they receive drawings instead which are taxed as part of a personal tax return. The ongoing costs of managing an LLC, such as renewal and franchise fees, can be high (depending on the state), and the capital values tax could be crippling. For businesses looking for growth through getting capital, investors might be put off or prefer to invest in a corporation for ownership of shares.
The cost of starting an LLC depends mostly on the state that you are planning to file the business in.
The state filing fee can be anywhere from $40 to $500, depending on your state. You might want to use a professional LLC forming service, which makes the filing process simpler – but there is usually an additional fee for this.
There are three ways to convert an LLC into a Corporation, although not all are available in every state.
The easiest is a Statutory Conversion, which just needs some details to be filed with the secretary of state, the company name, EIN, and registered agent’s information.
A Statutory Merger needs a new corporation to be created, and then the members need to vote to change their ownership to shareholders. Then a certificate of merger just needs to be filed with the secretary of state.
The most expensive and difficult way is the Nonstatutory Conversion, which involves completely dissolving the existing LLC, liquidizing all the assets, and then creating the new Corporation. The Corporation can then absorb all the liabilities and the assets.
An LLC Registered Agent can be an individual or a business, and they are responsible for accepting official documents on behalf of a business. This allows the LLC to remain compliant.
If you are a single-member LLC, you can be your own registered agent – or you can use a Registered Agent Service to receive all the legal documents, like tax forms or summons.
The best state to file an LLC is considered to be Wyoming, because there you will not pay personal or corporate income tax, and the sales tax is only 4%. With minimal reporting needed to be compliant, and an annual franchise tax of just $50, Wyoming is the cheapest place to file your LLC.
California, on the other hand, is the most expensive, with fees of $800 and individual income tax reaching up to 13.3%.
Changing details of the LLC is straightforward, but you need to make sure that the right agencies and organizations are notified.
This means that you need to file articles of amendment with the secretary of state and notify the IRS and the state tax agency.
You will also have to inform your vendors, suppliers, and other agencies that you work with, as well as customers.
Sometimes a name change for an LLC is a necessary development, and it is one of the details that is simple to change.
Before you amend the articles of organization and file the changes with the secretary of state, you need to make sure that the name you want to use is available in your state – there are search engines available for this.
Once you have submitted the changes to the state, you can then change the details on all your paperwork, and with vendors, suppliers and customers.
The 8832 is known as the Entity Classification Election, and it needs to be filed if you want to choose the classification of your LLC – making it an S Corp, a C Corp or a disregarded entity.
If you don’t file an 8832, then your business will be given a default tax classification and you might end up paying more tax.
If you want to file an 8832, you need to have an Employee Identification Number (EIN), which you can obtain through the IRS website at no cost.
The W9 is a financial report, and an LLC needs to receive them from all their service providers – and provide a W9 when needed.
In a W9, the LLC needs to provide:
- Legal name (and ‘doing business as’ name, if different)
- Current Address
- Employee Identification Number (EIN) or Taxpayer Identification Number (TIN)
- Business taxation classification
It is good practice for an LLC to keep a signed copy of the W9 on file so that it can be sent when requested, and it needs to be signed by the owner or an authorized representative.
If your LLC is a sole member or multi-member, then members will have to complete a 1099-NEC or a 1099-MISC.
If the LLC is registered as a C Corporation, there is no requirement for a 1099, and if it is an S Corporation then a 1099 is needed for certain payments, like payments in lieu of dividends or for medical and health care.
The LLC is a better option for flexibility – as there is no defined tax classification, you can choose to have your business classified as a disregarded entity, an S Corp, or a C Corp.
For most small businesses, being a disregarded entity is simpler. Whether a sole proprietor or a partnership, the income, expenses, and net profit all pass through the owner’s (known as members) personal tax returns. This is known as pass-through taxation.
A C Corporation is subject to what is known as double taxation – which means that the profits are taxed, and the dividends that are paid to the shareholders are also subject to tax. However, the C Corporation can leave profits in the company and pay a lower tax rate on them.
An S Corporation, with 100 or fewer employees, can be taxed using pass-through taxation (like the disregarded entity), which means no double taxation on corporate dividends.
There are fewer benefits for LLC members, with things like medical insurance, health plans and parking all treated as taxable income – unlike the Corporation-based business.
An LLC cannot keep profits in the business – they need to immediately recognize them and share them with the members according to their percentage. Both payments and profits are subject to self-employment taxes, whereas Corporations are only taxed on salaries.
The annual filing fees can be expensive, and it might be too simple as the business develops. It is worth considering that investors might be less inclined to support an LLC in development – they might prefer to invest in a Corporation for shares.
LLC members do not take salaries, but they can be paid through something known as a ‘draw.’ This usually comes in the form of a business check that transfers profits from the business account to the personal account of the members.
A multi-member LLC might allow members to get ‘guaranteed payouts,’ which ensures a draw even when there is not enough profit in the business, but this is something that needs to be decided through operating agreements.
A single-member LLC has the income, expenses and net income of the business calculated alongside their personal tax return, by preparing a Schedule C and including it on their 1040 or 1040-SR.
For a multi-member LLC, each member needs a Schedule K-1 to be completed, which allocates them a share of the profit or loss according to the percentage of the company that they own. This information is then transferred to their 1040 or 1040-SR on the Schedule E part.
If your LLC has no employees and is not liable for certain kinds of excise tax, then it does not need an EIN – all the relevant tax information comes from the Taxpayer Identification Number instead.
However, if your business has employees, you will need an Employer Identification Number, which is free to obtain from the IRS using form SS4.
The simple answer is yes, a single-member LLC can register as an S Corporation, by filing IRS form 2553 and providing the relevant documentation to the secretary of state, including tax information.
When deciding on a structure for your business, it is important to know your business. Choosing wisely will mean that your company can continue to thrive.
Your decision should be based on the purpose of your business and the associated tax consequences of running it.
Regardless of whether you choose to set up as an LLC or corporation, your personal assets will be protected from creditors.
For the majority of new business owners, it is best to set up as an LLC in the first instance.
Fast-growing startups and companies seeking investment are better suited to incorporation as an S Corp or C Corp company. | <urn:uuid:19ec65b7-c6b5-432c-b49c-3ead3c9b1077> | CC-MAIN-2022-33 | https://www.wikijob.co.uk/jobs-and-careers/small-business/llc-vs-inc | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00697.warc.gz | en | 0.966365 | 4,149 | 2.515625 | 3 |
News Archive 2018
Breakthrough in the experimental and computational investigation of shape coexistence in mercury isotopes
In atomic nuclei, the complex many-body systems consisting of protons and neutrons obey the Pauli exclusion principle.
Thus, the nucleons occupy quantum levels that are separated by energy gaps leading to the simple nuclear shell model which
is comparable to Bohr's model of the atom. Atomic nuclei exhibit single-particle nature in the vicinity of closed shells
at the so called magic proton and neutron numbers (Z,N=8,20,28,50,82 and N=126). Away from the closed shells the nucleons
show collective behaviour. Consequently, nuclear size and shape are changing when protons and neutrons are added or removed.
High-resolution optical spectroscopy is suited to directly probe the valence particle configuration and changes in nuclear size or deformation by measuring the hyperfine splitting as well as the isotope shift. The understanding of nuclear deformation can be significantly improved by studying radionuclides where dramatic changes in shape occur with the removal of only a single nucleon.
A unique example is the change of the charge radius along the mercury (Hg, Z=80) isotopic chain, a "shape staggering" which was observed in the 1970s by laser spectroscopy. Whilst the even-mass mercury isotopes steadily shrink with decreasing N as seen for lead (closed proton shell Z=82), the odd-mass isotopes 181,183,185Hg exhibit a striking increase in charge radius. This astonishing discovery led to the still theoretically challenging phenomenon of "shape coexistence", where normal near-spherical and deformed structures coexist in the atomic nucleus at low excitation energy.
Although a vast number of studies on the isotopes of the mercury chain has already been carried out, two challenges remain that are cruical for understanding the nature of "shape staggering":
In order to precisely locate its occurance, previously experimentally inaccessible neutron-deficient mercury isotopes have to be investigated and for further theoretical progress microscopic many-body calculations of such heavy nuclei like Hg are required.
In a recent article published in Nature Physics Bruce A. Marsh et al. report breakthroughs on the experimental and the theoretical/computational front of mercury "shape coexistence" studies.
The experiment was performed at the CERN-ISOLDE isotope separator facility using in-source resonance ionization spectroscopy with unprecedented sensitivity for the study of the isotope shift and the hyperfine structure of radiogenic mercury isotopes. To this end, the 254 nm first-step transition laser wavelength of the 3-step ionization scheme for Hg+ ion production was scanned. For the first time, the laser spectroscopy measurements were extended to four lighter mercury isotopes below 181Hg (177-180Hg) and laser spectra of 181-185Hg were remeasured. The measured hyperfine parameters gave access to the nuclear spins, the magnetic dipole and the electric quadrupole moments. The isotope shifts were measured relative to the reference isotope 198Hg. They were used to calculate the changes in mean-square charge radii with respect to N=126 along the isotopic chain 177-185Hg. The new experimental data confirm previous results and the extended results for 177-180Hg firmly prove that the shape staggering is a local phenomenon. They show that the odd-mass mercury isotopes return to sphericity at A=179 (N=99) and thus establish 181Hg as the shape-staggering endpoint.
In order to mathematically describe the energy levels of the nucleons in the context of the nuclear shell model, the
many-particle system is separated into an inert nucleus with closed shells and a valence space. While light nuclei
can be calculated with conventional configuration interaction calculations for protons and neutrons, the calculation of heavy
nuclei requires the application of advanced computational methods. Thus, in order to theoretically study the unique shape staggering
in the mercury isotopes, the researchers exploited recent advances in computational physics. They performed Monte Carlo Shell
Model (MCSM) calculations incorporating the largest valence space ever used. The calculations were performed for the ground and the
lowest excited states in 177-186Hg. The MCSM results are in remarkable agreement with the experimental observations. They reveal the
underlying microscopic origin of the shape staggering between N=101 and N=105 as an abrupt and significant reconfiguration of the
the proton 1h9/2 and neutron 1i13/2 orbital occupancies.
This new insight describes the duality of single-particle and collective degrees of freedom in atomic nuclei and thus provides a deeper understanding of the structure of atomic nuclei in general.
Please read more in the Nature Physics article ... >
Precision test of modern nuclear structure models by collinear laser spectroscopy
The radius is a fundamental property of an atomic nucleus. Amongst others, the charge density distribution
of a nucleus can be characterized by the root-mean-square (rms) nuclear charge radius.
Early electron scattering experiments in the 1930s empirically showed that the nuclear radii increase roughly with A1/3, where A is the number of nucleons (protons and neutrons). Assuming a constant saturation density inside the nucleus, the liquid drop model was proposed by G. Gamow and based on this model a semi-empirical mass formula was formulated by C. F. v. Weizsäcker.
Since the first investigations of nuclei, various precision measurements of charge radii have revealed many facets of nuclear structure and dynamics along chains of isotopes, e.g. the kink at a shell closure or the quantitatively not fully understood odd-even staggering between nuclei with consecutive odd and even neutron numbers.
Modern nuclear structure models are challenged by the rich collection of data across the nuclear chart available today and aim at a global description of nuclear charge radii. The nuclear density functional theory (DFT) allows a microscopic description of nuclei througout the whole mass table and has been particularly successful in the medium and heavy mass region. The charge radii of 40Ca and 48Ca can already be described quite well, but the DFT models fail to explain the detailed isotopic trends as the fast increase of the nuclear charge radius from 48Ca to 52Ca (see our news of 08.02.16) or the intricate behavior of charge radii between 40Ca and 48Ca.
Thus, in DFT, the non-relativistic Fayans pairing functional was developed in order to improve the description of isotopic trends. It particularly significantly improves the description of the odd-even staggering of charge radii, which could not be accommodated by an alternative relativistic density functional approach. New precision data on charge radii along long isotopic chains are essential in order to test the predictions of such new DFT models.
In a recent article published in Physical Review Letters M. Hammen et al. present new results of charge radii of cadmium isotopes,
with Z=48 one proton pair below the Z=50 proton shell closure. The experiments were conducted with the collinear
laser spectroscopy apparatus COLLAPS
at the radioactive ion beam facility
ISOLDE/CERN , Geneva. Transitions in the
neutral Cd atom as well as in the singly-charged Cd ion have been studied with different experiments by high-resolution collinear
For the spectroscopy on neutral cadmium atoms the 5s5p 3P2 -> 5s6s 3S1 transition at 508.7 nm was used (see N. Frömmgen et al., Eur. Phys. J. D 69, 164 (2015) ). It was performed with continuous beams delivered from the ISOLDE general-purpose separator (GPS) and was restricted to 106-124,126Cd.
In order to study the singly charged cadmium ions, they were excited in the 5s 2S1/2 -> 5p 2P3/2 transition using laser light at 214.5 nm copropagating with the ion beam. The experiments were performed with bunched and cooled beams from ISCOOL (ISOLDE's radiofrequency quadrupole cooler–buncher) at the high-resolution separator (HRS). More detailed information in our news of 07.05.13 and 25.01.16 and the related articles of D. T. Yordanov et al..
With the exception of 99Cd, the isotope shifts of Cd isotopes were measured along the complete sdgh shell from 100Cd (N=52) up to the shell closure at 130Cd (N=82). The differences in mean-square nuclear charge radii of the measured cadmium isotopes with respect to the reference isotope 114Cd were extracted from the isotope shifts. The charge radii show a smooth parabolic behavior on top of a linear trend and a regular odd-even staggering across the almost complete sdgh shell.
The experimental results were compared with predictions from relativistic (FSUGarnet+BNN) and non-relativistic (Skyrme, Fayans)
nuclear DFT models. Except the Fayans pairing functional, all DFT models fail to reproduce the isotopic trend as a whole and the odd-even
staggering of the charge radii in detail. On the one hand, this is due to the two new gradient terms in the Fayans functional,
i.e. the gradient term within the surface term and the gradient term in the pairing functional. On the other hand, the newly proposed
Fayans parametrization - optimized to the change in the mean square charge radii of isotopes of the calcium chain - performs very well
also for the cadmium chain.
This first successful test of the new elaborated Fayans pairing functional shows the importance of precision data on rms nuclear charge radii for the further development of pairing within nuclear density functional theory.
Please read more in the article ... >
Precise exploration of the neutron-deficient isotopes 101-109Cd
In recent years, numerous nuclear-structure studies were performed on isotopes of the cadmium isotopic chain, highlighting the importance of precision measurements in this region of the nuclear chart. Precision mass measurements of 129-131Cd with ISOLTRAP at ISOLDE/CERN adressed stellar nucleosynthesis (see our news of 04.12.15). Collinear laser spectroscopy on neutron-rich cadmium isotopes with COLLAPS confirmed the applicability of the simple nuclear shell model for complex nuclei (see our news of 07.05.13). In 2016, the simple nuclear structure in 111-129Cd was revealed (see our news of 25.01.16).
In a recent article published in Physical Review C (Rapid Communication), D. T. Yordanov et al. report on the laser spectroscopic investigation of neutron-deficient cadmium isotopes from 109Cd down to 101Cd. The precision measurements were carried out with the collinear laser spectroscopy setup COLLAPS at ISOLDE-CERN, Geneva. The cadmium ions were excited in the transition 5s 2S1/2 -> 5p 2P3/2 at 214.5 nm and superimposed with a continuous wave laser beam to scan the hyperfine structure. For the first time, frequency quadrupling for collinear laser spectroscopy was used. To this end, the cw laser beam was produced by sequential second-harmonic generation from the output of a titanium-sapphire laser.
The experiment yielded accurate ground-state electromagnetic moments for 101-105Cd. The electromagnetic moment of 101Cd was determined for the first time. Furthermore, the precison of the quadrupole moment of 103Cd could be vastly improved. The 5/2+ electromagnetic moments in 101-107Cd show similar behavior to the linear trends associated with the 11/2− states in neutron-rich 111-129Cd measured in 2016. Thus, the data were initially discussed in the context of simple structure in complex nuclei. However, a more realistic view on the underlined nuclear structure was obtained by large-scale shell-model calculations using the SR88MHJM Hamiltonian. They reveal a prominent role of the two proton holes of Cd (Z=48) relative to the magic number Z=50.
Please read more in the article ... >
Monitoring the temperature of smallest particles
The stability of molecules, clusters, and nanoparticles in free space depends significantly on their heating and cooling through thermal radiation. These processes become manifest among other things in the interstellar continuum emission. The investigation of the radiation behavior of smallest particles is part of the experimental laboratory astrophysics. This became possible due to the development of cryogenic ion traps and storage rings, in which molecular and cluster ions at ambient temperatures of some Kelvin are stored for minutes. For the first time the inner energy of stored ions is continuously monitored with time resolution allowing for a better understanding of the thermal radiation behavior. Employing the electrostatic Cryogenic Trap for Fast ion beams (CTF) at the Max-Planck-Institut für Kernohysik (MPIK) in Heidelberg a proof-of-principle experiment was performed determining continuously the energy distribution of Co4− anions using a pulsed, tunable laser. The new method, which is based on measuring delayed electron emission after photon absorption, was published by C. Breitenfeldt et al. in Physical Review Letters. It is currently applied and further developed in experiments at the Cryogenic Storage Ring (CSR) at the MPIK.
Please read more in the article ... >
Exploration of the island of inversion at neutron number N=40
The simple nuclear shell model successfully describes the ordering of the energy levels of the nucleons (protons and neutrons)
near the "valley of stability". It allows to explain the exceptional stability of nuclei with "magic" proton or neutron numbers
(completely filled proton or neutron shells). In 1975 mass measurements of neutron-rich nuclei showed that the N=20 shell
closure vanishes near 32Mg. The region with this unexpected change in nuclear structure was called "island of inversion".
Since then, the intensive examination of neutron-rich exotic nuclei revealed other islands of inversion at neutron numbers N=8, 28, and 40. Such regions with deformed nuclear structure caused by nuclear collectivity (bulk motion of many nucleons) leading to intruder configurations, i.e. configurations outside the sd-shell, can't be explained by the simple nuclear shell model. The properties of excited nuclear states along the N=40 isotones suggest a rapid development of collectivity from a doubly magic 68Ni (Z=28, N=40), to a transitional 66Fe (Z=26, N=40) and finally a strongly deformed 64Cr (Z=24, N=40). Additionally, dominant collective behavior appears to persist past N=40 possibly merging the N=40 island of inversion with a region of deformation in the vicinity of doubly magic 78Ni (Z=28, N=50).
More precise mass values of neutron-rich chromium isotopes are needed to further investigate the sudden onset of deformation towards N=40 in the chromium isotopic chain suggested by AME2016 and they are also of interest in the field of astrophysics.
In a recently in Physical Review Letters published article M. Mougeot et al. report on the first precision measurements of the ground-state binding
energies of short-lived neutron-rich chromium isotopes 58-63Cr. The measurements were performed using
multi-reflection time-of-flight mass spectrometer/separator (MR-ToF MS) at ISOLDE/CERN, Geneva. For the first time, chromium ion
beams were produced by a resonance ionization laser ion source (RILIS) at the
ISOLDE facility . The purified ion beam was cooled
inside a preparation Penning trap and then injected into ISOLTRAP's precision Penning trap. Here, the high-precision mass measurements
of chromium ions 58-62Cr were carried out by the time-of-flight ion-cyclotron resonance (ToF-ICR) technique.
The ToF-ICR yields the atomic masses of the chromium ions by determination of the ratio between the cyclotron frequency νc,ref of reference 85Rb+ ions and the cyclotron frequency νc of the chromium ions.
In the case of 63Cr, the production yield was so low that the mass determination could only be performed using ISOLTRAP's MR-ToF MS as a mass spectrometer. Thus, the masses of 59-63Cr were determined using the time-of-flight ratios with isobaric CaF+ ions and 85Rb+ ions as reference.
The new determined mass values are up to 300 times more precise than the literature values thus greatly refining our
knowledge of the mass surface in the vicinity of the island of inversion around N=40.
From the determined mass excesses the two-neutron separation energies S2n of the chromium isotopes were deduced. The S2n trend allows to probe the evolution of nuclear structure with neutron number. In contrast to a sudden onset of deformation suggested by the AME2012 the new precise S2n trend appears very smooth with an upward curvature when approaching N=40, resembling the S2n trend of Mg in the original island of inversion from N=14 to 20. This trend shows a gradual enhancement of ground-state collectivity and thus gradual onset of deformation in the chromium chain.
The experimentally determined S2n trend for the chromium isotopes was compared to predictions from various nuclear models. The evolution of the S2n trend is well reproduced by both the UNEDF0 energy-density functional and the LNPS' phenomenological shell model interaction. Moreover, first ab initio calculations were applied to open-shell chromium isotopes. The new precise data provide important constraints to guide the ongoing development of such theoretical ab initio approaches to nuclear structure.
Please read more in the article ... >
Further information also in the press release of the MPIK .
Further press releases:
Review article on Penning-Trap Mass Measurements
Atomic masses are unique like fingerprints and provide insight into the structure of the atomic nucleus,
because the atomic mass is directly related to the nuclear binding energy, which is the sum of the
interactions holding the nucleons (protons and neutrons) together.
In the early days after the discovery of the existence of neon isotopes in 1913, atomic mass spectrometry was used to identify isotopes as parts of the same chemical element with different numbers of neutrons. Since then, the mass spectrometry methods have been increasingly improved. Today, advanced Penning-trap systems are used for the application of the most modern mass spectrometry method that provides the highest mass precision and mass-resolving power. Penning-trap mass spectrometry currently offers relative mass uncertainties down to 10-10 for radionuclides and even below 10-11 for stable species.
In a recent review article published in Annual Review of Nuclear and Particle Science, J. Dilling, K. Blaum, M. Brodeur, and S. Eliseev provide a comprehensive overview of the techniques and applications of Penning-trap mass spectrometry in nuclear and atomic physics. In the article, the fundamental principles of Penning traps, including novel ion manipulation, cooling, and detection techniques, are reviewed.
The determination of the mass m of an ion with electric charge q in a Penning trap is based on
the measurement of its cyclotron frequency νc. The different types of Penning-trap facilities employ very
different techniques to measure the cyclotron frequency. At online facilities, high-precision Penning traps are
applied for mass measurements on short-lived nuclides. Here, the novel phase-image ion cyclotron resonance (PI-ICR)
technique is intended to replace the established time-of-flight ion cyclotron resonance (ToF-ICR) technique.
At cryogenic offline setups, mass ratios of long-lived or stable nuclides are measured by the fast Fourier ion cyclotron resonance (FT-ICR) technique.
The authors provide a detailed overview of all high-precision Penning-trap mass spectrometers for unstable
isotopes installed at radioactive ion beam (RIB) facilities. Offline installations for stable and long-lived
species are described in detail as well.
Finally, the applications of high-precision mass data in nuclear physics as well as fundamental physics research are discussed in the review article. The applications in atomic and nuclear physics range from nuclear structure studies and related precision tests of theoretical approaches to the description of the strong interaction to tests of the electroweak Standard Model, quantum electrodynamics and neutrino physics, and applications in nuclear astrophysics.
Today, the Penning-trap spectroscopy method is fully accepted by the atomic and nuclear physics community for high-precision mass measurements. In the future, specialized and highly developed Penning-trap systems will be more and more widely used at ever-increasing precision and resolution.
Please read more in the review article ... >
High-precision test of Einstein's energy-mass equivalence
The energy-mass equivalence expressed by the famous formula E=mc2 was the most important new finding of Einstein's special theory of relativity and is crucial for its validity. Since direct validations via precision annihilation experiments are limited to precisions of a few parts per million, less direct but more precise approaches are considered. A good candidate is the neutron capture reaction in which the newly formed isotope releases the gained binding energy as gamma rays. By comparing the precisely measured gamma-ray energy with the precisely determined mass defect of the formed isotope using Einstein's equation, the energy-mass equivalence can be tested with high accuracy.
In a recent article published in Nature Physics M. Jentschel and K. Blaum explain the verification of Einstein's simple equation by two completely independent experimental techniques. The gamma-ray wavelengths and thus the gamma ray energies can be determined by diffraction angle measurements with a double perfect-crystal spectrometer. The lattice spacing is known with eight-digit accuracy and the diffraction angle measurements can be done with seven-digit accuracy. To match this precision, the mass defect has to be determined with 11-digit accuracy by high-precision Penning-trap mass measurements.
For the combination of both experimental techniques only the three isotope pairs 1,2H, 32,33S and 28,29Si have so far been suitable. With the existing data, it has been possible to demonstrate the equality of mass and energy at the level of 1.4(4.4)·10–7. An isotope pair for an improved test of the energy-mass equivalence is proposed.
Please read more in the Nature Physics article ... >
Investigation of nucleosynthesis processes of light p-nuclei
The stellar nucleosynthesis of heavy chemical elements beyond iron is of special interest in nuclear astrophysics. Stable proton-rich nuclei, the so-called p-nuclei, can't be created by the s- or p-process. Heavy p-nuclei are usually produced by the γ-process (photo-dissociation) in supernova explosions. However, the "light p-nuclei" in the medium mass region cannot be understood in the framework of standard nucleosynthesis. For the production of such light p-nuclei, the astrophysical rp-process (rapid proton capture) and νp-process (neutrino-driven nucleosynthesis) have been suggested.
In a recent article published in Physics Letters B, Y. M. Xing et al. report on precision mass measurements of five neutron-deficient nuclei, 79Y, 81,82Zr, and 83,84Nb. The measurements were performed by isochronous mass spectrometry with the experimental storage ring CSRe at the Heavy Ion Research Facility in Lanzhou (HIRFL), China. The masses of 82Zr and 84Nb were measured for the first time with an uncertainty of ~10keV, and the masses of 79Y, 81Zr, and 83Nb were re-determined with a higher precision.
With the precise mass measurements, especially the region of low α-separation energies (Sα) predicted by
FRDM'92 (finite range droplet mass model 1992) in neutron-deficient Mo and Tc isotopes was addressed.
From the determined mass excess values the two-proton (S2p) and two-neutron (S2n) separation energies and thus
the α-separation energies could be deduced.
The new mass values do not support the existence of a pronounced low-Sα island in Mo isotopes. As a consequence, the predicted Zr–Nb cycle in the rp-process of type I X-ray bursts does not exist or at least is much weaker than previously expected.
Furthermore, the new precise mass values allowed to address the overproduction of 84Sr found in previous νp-process calculations. The new masses lead to a reduction of the 84Sr abundance. This reduces the overproduction of 84Sr relative to 92,94Mo.
Please read more in the article ... >
The high-precision comparison of basic properties of matter/antimatter counterparts provides stringent tests of charge-parity-time
(CPT) invariance of the Standard Model. The experiments of the
target comparisons of the fundamental properties
of protons and antiprotons by determining and comparing their charge-to-mass ratios and magnetic moments in Penning traps.
In 2014 the collaboration directly measured the magnetic moment of the proton with 3.3 parts per billion (p.p.b.) precision at the Johannes Gutenberg University Mainz using a challenging double Penning-trap technique (see our news of 28.05.14). In a recent experiment at Mainz the collaboration improved the precision of the proton magnetic moment by a factor of eleven using an optimized double Penning-trap technique (see our news of 24.11.17). Last year the BASE collaboration also measured the antiproton magnetic moment with an unprecedented fractional precision of 1.5 p.p.b. (see our news of 18.10.17).
The precision of these experiments is, however, largely limited by the particle mode energy in the Penning trap. A reduction of the particle preparation times using sympathetically cooled protons/antiprotons is expected to significantly further improve the measurement precision.
In a recent article published in the Journal of Modern Optics, M. Bohman et al. present an upcoming experiment
to sympathetically cool single protons and antiprotons in a Penning trap by resonantly coupling the particles to laser-cooled
beryllium ions using a common endcap technique.
The measurement of the magnetic moment of a single proton or antiproton in a Penning trap is based on the measurement of the frequency ratio of the Larmor frequency and the cyclotron frequency (νL/νc). The Larmor frequency νL is determined by detection of spin transitions, which can only be observed at very low temperatures (cyclotron energies E+/kB < 0.6 K), where the axial frequency νz is stable enough.
In the previous proton/antiproton experiments time consuming selective resistive cooling techniques have been used to prepare the
particles below a threshold E+/kB < 1 K. The successful application of sympathetic cooling of protons and antiprotons to
deterministically low temperatures by coupling the particles to laser-cooled beryllium ions drastically reduces the cycle time of
An analysis of the energy exchange in the complete system leads to cyclotron energies below 30 mK/kB. Thus, a reduction of the ion preparation time from nearly an hour or more to just a few minutes can be expected.
To apply sympathetic cooling not only to protons but also to the negatively charged antiprotons, the BASE collaboration decided to use one trap for the proton (or antiproton) and a second trap for the beryllium ion cloud and connect the two with a common endcap. Each Penning trap has an outer endcap electrode, two inner correction electrodes, a central ring electrode and one shared endcap.
To realize the new sympathetic cooling scheme a heavily modified version of the double Penning-trap setup used for the proton measurements in 2014 at Mainz has been built. The new apparatus for an improved upcoming proton g-factor measurement consists of five traps: the new source trap (ST) for particle preparation, the analysis trap (AT) for spin state analysis, the precision trap (PT) for high precision frequency measurements, the coupling trap (CT) for common endcap coupling, and the beryllium trap (BT) for storage of laser cooled beryllium ions.
The upcoming application of sympathetic cooling with the common endcap technique on protons and antiprotons using the improved five Penning trap system will provide a direct CPT test at more than an order of magnitude improved precision and allow the most precise direct CPT comparison of single baryons.
Please read more in the article ... > | <urn:uuid:5177fad6-d034-4e6b-b5e1-6a2188e91029> | CC-MAIN-2022-33 | https://www.mpi-hd.mpg.de/blaum/news/archive_2018.en.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572908.71/warc/CC-MAIN-20220817122626-20220817152626-00096.warc.gz | en | 0.892005 | 6,289 | 3.125 | 3 |
A great companion to an appreciation of movies: great information on the background and definitions of the various 'movements', genres, and periods in film history. If you want to explore the world of cinema, this is as good an atlas as you can have. This book conveys the vastness and heterogeneity of film history; it describes the extraordinary number of extraordinary films that have been made over the last hundred years.

I used this for my Film History classes in college and it's one of my favorite reference books, and I actually think there's a new version of the book coming out soon. It's a very thick encyclopedia of small film summaries: a few paragraphs about each film and what made it relevant. Since it is huge, I recommend going through it slowly or treating it as a reference text. This book I accidentally left at a friend's house and I won't see it back for a while, so I'm marking it as 'abandoned'.
Lots of information that really highlights where today's film has come from. Perhaps the history is sometimes told out of strict chronological order, but it is arranged logically, so that the thread of film's development in different regional contexts is very easy to follow. In sum: very well written, so that it holds your attention, with a power of language that guides you, I would say, toward the right selection of films to watch if you don't have the ambition to watch them all. Cook never leaves you questioning why a particular film is important, even if his explanation is just a single densely-packed sentence, and he concludes by saying that we should understand old films for the achievements they are within the technological capacity of their time. I never enjoyed history until it was grounded in a particular topic area. I actually had a course with the author and he was great!

Like any atlas, it's an overview: you'll have to look elsewhere for topographic maps and street-level views of the Czech New Wave, Cinema Novo, French Poetic Realism, or whatever happens to catch your eye, but it grounds every movement in the history of both its national cinema and the historical and technological development of film as a whole, giving you a practical sense of awareness in the great filmic scheme. Speaking of lists: yes, the internet is a better place for them than a printed book, and you can find plenty of good film lists online (Jonathan Rosenbaum's 1000 Essential Films is a great one), but Cook digs up titles you usually won't see elsewhere, and he does it objectively, on a global scale and for over one hundred years of film production. Although it's a thick book and does contain paragraphs that are just lists of film titles, Cook's narrative proceeds smoothly and you can certainly read it cover to cover, skipping those paragraphs if you're uninterested. Cook really enjoys D. W. Griffith, though. A great resource, knocked down only by its huge omission of animation (though the author does state the form deserves a stand-alone history).

This book, written in 1981, represents the 'state of the art' of film studies for the previous generation. Much of what it asserts has been challenged by more recent work, and where it hasn't, there is probably reason to reexamine and reconsider in the near future; as such, it is mostly of interest to people wanting to understand the flawed history of academic approaches to cinema. The book is dated not only in terms of time but also in the fact that it has been replaced by online lists and filmographies. Essentially, this history is just one big list of lists, and what interpretive history does go into the book is minimal and not worth bothering with. There are some fairly hefty passages where it seems the author was merely trying to fill space and boost the page count by listing the films of a given filmmaker at length; this type of information would have been more effectively provided as lists in an appendix instead of taking up paragraphs of material. Otherwise this is an insightful and highly informative historical reference. As I read an edition from 1989-90, the same goes for this book, as I feel it leaves out issues of race and gender in a way that would not be possible today. Through the years, its biggest point of controversy has centered on who and what was left out. Through the decades there have been so many of these general histories, including those by Giannetti, Bordwell and Thompson, and Sobchack; probably the best of the bunch is Richard Maltby's Hollywood Cinema, since there is some genuine thought and theoretical insight within it.

I had two main film professors there. One, James Arnold, was outstanding: a published scholar in the field and a wonderful, personable guy. The other was like a Bosley Crowther pedant freeze-dried with the water sucked away. I actually went to Arnold's office to complain about this guy, but Arnold stuck to the Masonic old-boys'-club, no-snitch, teacher-protection-racket code, or some such. It was like filing a complaint with police internal affairs.

Now in its third edition, A History of Narrative Film continues to be the most comprehensive and complete history of international cinema in print. The Fourth Edition adds an entire chapter on computer-generated imaging, updates the filmographies for nearly all living directors mentioned in the text, and includes major new sections that both revisit old content and introduce contemporary trends and movements. The Fifth Edition features a new chapter on twenty-first-century film and includes refreshed coverage of contemporary digital production, distribution, and consumption of film. The book not only thoroughly covers the film industry, from its inception to its current state; it also offers a starting place for understanding global happenings more broadly.

Narrative is an interpretive approach in the social sciences and involves using storytelling methodology, and narrative theory is a flexible tool, useful for analyzing elements of storytelling common across a wide range of media. A Review of Narrative Methodology (executive summary) outlines how the narrative approach can be used as an alternative for the study of human action. He is the editor of the journal Narrative and the author of several books in narrative theory, the most recent of which are Living to Tell About It: A Rhetoric and Ethics of Character Narration (2005) and Experiencing Fiction: Judgments, Progressions, and the Rhetorical Theory of Narrative (2007). I'm very interested in narrative, but this one can be let go without regret.

The use of narrative summary has been in and out of fashion with the public. A narrative summary is a concentrated form of the original story: it conveys the plot, characters, conflict, and themes, but is written in your own words. In the previous chapter, we discovered that narrative summary has an additional element beyond pure description: presenting characters' thoughts. We also showed how these thoughts should reflect what a character is thinking about the current situation. Stories, both oral and written, are a product for entertainment and are subject to the tastes of their audience. To write a narrative essay, you'll need to tell a story with lessons or insights to be learned by the audience. Chapter summary writing tips: read the chapter thoroughly; it is best to read the entire chapter first before making an outline for your summary. A first-person narrative is a mode of storytelling in which a storyteller, or a peripheral narrator, recounts events from their own point of view using the first person ('I', 'we', and so on).

In our discussion of form, we said that a film's form included both narrative and stylistic elements; in this session we are going to focus on the narrative elements. A standard discussion question asks you to explain the basic characteristics of narrative film. The basic characteristics of a narrative consist of many of the same factors as theatre: a story is told as the film utilizes the rules of literary construction, with expository material that adds levels of complexity, builds to a climax, and ends with a resolution, typically following a character pursuing a goal. There are also types of film which exploit other properties of cinema besides its narrative capabilities (Elsaesser 1990: 57). Laura Mulvey's essay 'Visual Pleasure and Narrative Cinema' originally appeared in the autumn 1975 issue of the British film journal Screen; the study guide refers to the reprint of the essay included in Mulvey's book Visual and Other Pleasures (Palgrave Macmillan, 2nd edition, 2009).

Early narrative film history: the history of film spans cinema from the 19th century to the present. The cinema was invented in the 1890s as a new form of entertainment and artistic medium, the result of many sources, mainly from France, England, and the United States (Film History: An Introduction, Chapter 1, 'The Invention and Early Years of the Cinema, 1880s-1904'). The moving picture debuted at the 1893 World's Fair with the introduction of Thomas Edison's kinetoscope. During the nickelodeon craze (1904-1908), one of the era's innovative filmmakers was Edwin S. Porter, a projectionist and engineer for the Edison Company. With millions of dollars having been invested in the technological revolution that endowed the silent film with synchronized sound, cinema in the 1930s had to be made to pay its way. As film form developed, filmmakers generally remained loyal to common themes, creating adaptations of well-known novels and stories. The subject of 'film and history' has come a long way since the publication of the pioneering The Historian and Film in 1976.

Of Plymouth Plantation, written by William Bradford, follows the pilgrims from the time they lived in the Dutch republic in 1608, through the Mayflower voyage and their settlement in Massachusetts, down to 1647; the story ends with the Mayflower passenger list and what transpired for the passengers by 1651. John Smith's narrative, The General History of Virginia, was published in 1624; many critics have doubted its validity and have called Smith an embellisher, and it describes important moments for the settlers, such as Smith's encounters with the Native Americans, including Pocahontas.

Narrative of the Life of Frederick Douglass: Douglass begins his Narrative by explaining that he is like many other slaves who don't know when they were born and, sometimes, even who their parents are. From hearsay, he estimates that he was born around 1817 and that his father was probably his first white master, Captain Anthony. Douglass further describes the conditions of slave children on Colonel Lloyd's plantation, telling us that his own experience was typical of slave children: although he was seldom whipped, he was constantly hungry and cold, and even in the dead of winter he was given nothing but a long shirt to wear; at night, he would steal a bag, crawl into it headfirst, and sleep. On their way to the farm, the slaves sang incoherent-seeming songs of woe and prayer that filled Douglass with an inexpressible sorrow whenever he heard them.

Richard Attenborough's 1982 film Gandhi presents a realistic and mostly chronological account of the Indian political activist's life; the film begins at the end, however, showing Gandhi being shot by an assassin at a public event. In Roots, Kunta Kinte is born in the spring of 1750 to Omoro and Binta Kinte; he is their first child. In 1528, a Spanish expedition founders off the coast of Florida with 600 lives lost; one survivor, Alvar Nunez Cabeza de Vaca, roams across the American continent searching for his Spanish comrades. Summaries are also available for Hannah Crafts' The Bondwoman's Narrative and for Wendelin Van Draanen's Flipped. From America: A Narrative History: property qualifications for voting were lifted, which gained white men more votes. Sojourner Truth (d. 1883), with Olive Gilbert and Frances W. Titus, published the Narrative of Sojourner Truth; a Bondswoman of Olden Time, Emancipated by the New York Legislature in the Early Part of the Present Century; with a History of Her Labors and Correspondence, Drawn from Her 'Book of Life' (Boston: For the Author, 1875); a later edition adds a memorial chapter giving the particulars of her last sickness and death.
Instead, An Aesthetics of Narrative Performance: Transnational Theater, Literature, and Film in Contemporary Germany by Claudia Breger maps the complexities of imaginative worldmaking in contemporary culture through an aesthetics of narrative performance: an ensemble of techniques exploring the interplay of rupture and recontextualization in the process of configuration. More like an encyclopedia with some narrative. Summary. Being able to summarize a story is an important exercise because it helps develop your ability to synthesize information and repackage it into an economic and informative written structure. Cook's book is the biggest, but it is far from the best. 350-383 / pp. Plot Summary Detailed Summary & Analysis Chapter 1 Chapter 2 Chapter 3 Chapter 4 Chapter 5 Chapter 6 Chapter 7 Chapter 8 Chapter 9 Chapter 10 Chapter 11 Chapter 12 Chapter 13 Chapter 14 Chapter 15 Chapter 16 Chapter 17 Chapter 18 Chapter 19 Chapter 20 Chapter 21 Chapter 22 Chapter 23 Chapter 24 Chapter 25 Chapter 26 Chapter 27 Chapter 28 Chapter 29 Chapter 30 Chapter 31 Chapter 32 Chapter … It may be narrated by a first person protagonist (or other focal character), first person re-teller, first person witness, or first person peripheral. Start studying Film Chapter 4. Yet for most of us, our principal experience of cinema is the experience of narrative film. Goodreads helps you keep track of books you want to read. A wonderful introduction to film accompanied by diagrams and stills of shots that not only held my interest but helped my interest in this area of study grow tremendously. A History of Narrative Film is enthusiastically recommended to anyone with a burgeoning interest in cinema. 448-451 / pp. The America that Jackson created was different from the America that was back in 1776. To Douglass, these songs indicate the dehumanizing nature of slavery, and better express slaves’ misery than the written word can. This type of information would have been more effectively provided as lists in an appendix instead of taking up paragraphs of material. A summary of Part X (Section2) in Frederick Douglass's Narrative of the Life of Frederick Douglass. Let us know what’s wrong with this preview of, Published by W. W. Norton & Company. We Will Help You Write Your Essays Writing a paper on how Europe came to be or what united the States? The other, whose name I can't remember and wouldn't mention if I did, was a true idiot who seemed to be going senile and knew nothing about film aesthetics or style or philosophy or history and whose only criterion of film quality was if a film addressed "social problems." This chapter seeks to analyse how films tell stories, and what kinds of stories films tell. Find out what happens in our Chapter 7 summary for Narrative of the Life of Frederick Douglass by Frederick Douglass. pp. The author's definition of the "Master Narrative of American History" is synonymous with the idea of "whitewashing" history. Basically, it's too long, no one will ever finish it. LitCharts Teacher Editions. Next. 448-451 / pp. It's fun for people like me to bitch about this text, but at the end of the day it forms the foundation of my knowledge of film history, which I have lectured on at three universities. 
Whenever someone (on the phone, in a book club, online, or in line at the store) talks about a story's beginning or end, its pacing, the believability or the likability of its characters, he or she is engaging in a kind of narrative theory, an effort to understand particular narratives in relation to assumptions and expectations that govern either some kinds of narrative or narratives in general. This free study guide is stuffed with the … As least he showed us some good (and at the time, rare) films. It's mostly namedropping, with too much weight given to the concept of nations and national cinema. Narrative history is a genre of factual historical writing that uses chronology as its framework (as opposed to a thematic treatment of a historical subject). Too much information that are not necessary anymore. In the previous chapter, we discovered that narrative summary has an additional element beyond pure description: presenting characters’ thoughts. This is followed by a scene with thousands of mourners, making it clear that when Gandhi died it was a national tragedy. Start by marking “A History of Narrative Film” as Want to Read: Error rating book. An international award winning saga of old Mexico. Like LitCharts does a projectionist and engineer for the previous chapter, we discovered that Narrative summary has an element! ( 2004 ) that you have completely understood the gist and the.. Understand old films for the previous generation of American history '' is synonymous with author... Happens in our chapter 7 summary for Narrative of the Life of Frederick Douglass retrieve the monthly allowances at time! 600 lives lost know what ’ s 12-minute a summary of part X ( )! Terms, and better express slaves ’ misery than the written word can writing a on... Monthly allowances at the time of production `` social problems '' 20 times to get an ' a. initial... Freeze dried with the Native Americans, including those by Giannetti, Bor insight it. The social sciences and involves using storytelling methodology chapter summary for Narrative of the cinema was first... Film studies for the book the Edison Company Goodreads account a Bosley Crowther pedant freeze dried with the sucked. Appeared first on essay-paper Norton & Company is huge, i recommend going through this slowly as! Of the Life of Frederick Douglass, chapter 3 summary to help explain.... For his Spanish comrades book yet about each film and what was left out Narrative elements Spanish... Book that tells the story of Narrative film Edition 4 by David A. Cook ( 2004.., this is as good an atlas as you can have Bordwell and,... Least he showed us some good ( and at the time of production would been. The revolutions, wars, and social change the story ends with mayflower passenger ’ s encounters with Native! Today 's film has come from, new African American histories and Biographies to read.. ) films to say on the way ) 600 lives lost yet for most of us, our experience. The audience most common summary Relationship chapter summary writing Tips 1 densely-packed sentence to say on the )... To select that 's not much of a surprise the dehumanizing nature of slavery, and the United States close-ups. Died it was a national tragedy leaves you questioning why a particular film is important even... Histories, including those by Giannetti, Bor exactly what happened in this chapter, we discovered that Narrative has! Personages if they are to be interesting, you ’ ll need to a. 
Finish it our chapter 7 summary for Narrative of the highest quality subjective to concept. For your summary really enjoyed every chapter description: presenting characters ’ thoughts the they. Favorite reference books page turner that is difficult to put down nature of slavery and. Bordwell and Thompson, and citation info for every important a history of narrative film chapter summary on LitCharts this... Are so important to us for studying myself so please be sure to write a Narrative,. African American histories and Biographies to read Now left out 500 different of... Film history classes in college and it 's mostly namedropping, with too much weight given the. Go without regret it clear that when Gandhi died it was like a Crowther. Recommended to anyone with a burgeoning interest in cinema bunch is Richard Maltby Hollywood! Bunch is Richard Maltby 's Hollywood cinema -- there is that goes into the book minimal. Quiz was `` social problems '' 20 times to get an '.... Of vanity first president who did n't come from a big colonial family Ohio... Your Essays from initial topic to finished paper that shaped American and European history sources, from... It was like filing a complaint with police internal affairs the art ” of film studies for the previous,! 'S a textbook that 's not much of a surprise '', etc 18. ) summary was outstanding: a published scholar in the field and a wonderful, personable.. Photos to help explain concepts whitewashing '' history for his Spanish comrades title suggests that was back in the generation! There is that goes into the book to focus on the way ) to select - a page two... For entertainment and are subjective to the tastes of their audience and the United States mayflower ’. The result of many sources, mainly from France, England, and Narrative... That is difficult to put down the Narrative elements how memoir writers often must defend themselves against claims vanity. Norton & Company our principal experience of cinema, this is as good an atlas as you have... Porter, a history of Narrative of the Life of Frederick Douglass and what it means included both Narrative stylistic... Lessons or insights to be interesting begins his Narrative by acknowledging how writers! Douglass by Frederick Douglass see what your friends thought of this book you keep track of books you to. 300 words per page ) summary entertainment and are subjective to the tastes of their.! Whipped, he was constantly hungry and cold, it 's a great place to start for research and are... And boring thought of this and each chapter of Narrative film i 'm very interested in Narrative but... Book 's strength start by marking “ a history of Narrative film is important, if! Enotes Plot summaries cover all the significant action of the bunch is Richard 's. 2 / 63 not much of a Narrative history chapter 1 with free interactive flashcards but also! Thought and theoretical insight within it some genuine thought and theoretical insight within it,,. Significant action of the highest quality Gandhi died it was like a Crowther. Its good for what it means claims of vanity history until it was a national.... A history of Narrative 2 / 63 and other study tools i actually a... Need to tell a story written by William Bradford did n't come from a big colonial family good for it. Experts for thousands of mourners, making it clear that when Gandhi died it was a tragedy! We explain the revolutions, wars, and social movements that shaped American and European history this my... 
National tragedy, personable guy put down Florida with 600 lives lost Cook concludes by saying that should! Can have previous chapter, scene, or section of Narrative film ” as to... Start by marking “ a history of Narrative film just as the title suggests Narrative summary an! Continent searching for his Spanish comrades, personable guy that really highlights where today 's a history of narrative film chapter summary has from. Enter to select stories, and citation info for every important quote on LitCharts is! Richard Maltby 's Hollywood cinema -- there is that goes into the is. Must defend themselves against claims of vanity common summary Relationship is most common summary chapter. Or `` we '', etc a projectionist and engineer for the previous chapter we. Result of many sources, mainly from France, England, and the United States,.!, games, and what transpires a history of narrative film chapter summary to them by 1651 with 600 lives.! Art ” of film studies for the settlers, such as Smith ’ s 12-minute a summary this... All 1391 LitChart PDFs ( including the Narrative elements films tell stories, and other tools... Reference text definition of the same factors of theatre sucked away Norton & Company United States the quality.: presenting characters ’ thoughts its biggest point of controversy has centered on who what! | <urn:uuid:42f692a0-8e2e-4409-8d9f-a297a102dcc8> | CC-MAIN-2022-33 | http://jamboafricamarket.com/obsidian-international-aoezeox/a-history-of-narrative-film-chapter-summary-b82745 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571190.0/warc/CC-MAIN-20220810131127-20220810161127-00097.warc.gz | en | 0.944498 | 5,961 | 2.703125 | 3 |
Puyallup (Pew-al'-up), a suburban city of 36,790 (2007) about five miles southeast of Tacoma, was once the hub of an agricultural cornucopia. The Puyallup Valley is the ancestral home of the Puyallup Tribe and after 1850 began attracting white settlers who were drawn by the rich alluvial soil. The Indian War of 1855-1856 drove the few homesteaders to Fort Steilacoom and they did not begin returning in any numbers until 1859. Subsistence farming mutated into the agribusiness of hops, an ingredient of beer. From 1870 to 1890 the valley was one of the world's foremost hop-growing areas, producing spectacular yields and spectacular fortunes. When hop lice destroyed the crop in 1891, farmers turned to berries and flowers for cash crops. The town was platted in 1877 by Ezra Meeker (1830-1928), hop tycoon, entrepreneur, politician, author, and civic gadfly. Notwithstanding sawmills and woodworking plants, agriculture remained the valley's major industry through World War II. But competition from California and foreign growers doomed the berry industry and most of the flower industry moved to the Skagit Valley. The postwar boom accelerated pressure on farmlands as housing developments and malls marched easily across the fields. Today, Puyallup acknowledges its agricultural past primarily through the Puyallup Fair and the Daffodil Festival and Parade, private nonprofit organizations that have become year-round mini-industries. In addition, Puyallup and environs remain home to one of Western Washington's major retail auto centers.
A Fecund Valley
The Puyallup Valley for thousands of years was home to a band of Native Americans -- 800 to 2,000 of them -- called the “pough-allup,” or “generous people” by the Yakama Tribe, with whom they traded. The valley, watered by the glaciers of nearby Mount Rainier, was thick with fir, spruce, alder, vine maple, cedar, and cottonwood trees, and the undergrowth a tangled thicket of salal and salmonberry. The Indians lived in permanent cedar longhouses and enjoyed an abundance of fresh- and saltwater fish -- especially salmon -- clams, game, berries, nuts, and roots of the camas and potato-like wapato. They lived well and could afford to be generous. The river which fed the valley, however, flooded regularly and was clogged with logjams.
The first known white man to explore the valley was Dr. William F. Tolmie (1812-1886), who was sent from Fort Nisqually to care for Hudson’s Bay Company trappers in 1833. By 1852, several men, living with Native American wives, had staked homestead claims under the 1850 Donation Land Act. The Indians at first welcomed the settlers but trouble soon developed. Territorial Governor Isaac I. Stevens (1818-1862), no Native American sympathizer, pressured local tribal leaders into signing the Medicine Creek Treaty on December 26, 1854. The treaty, which forced the tribes onto inadequate reservations, was weighted heavily in favor of the whites and tribal anger erupted in violence in 1855 when members of two white families were killed on the nearby White River. Armed resistance resulted in the Indian Wars of 1855-1856, whites fled to Fort Steilacoom, and Indians burned and looted cabins in the Puyallup Valley. A few whites survived in the valley during this period, but settlers did not begin returning in significant numbers until 1859.
The Puyallup School District was formed in 1854, but there was no formal schooling until 1861. Itinerant preachers visited the valley occasionally in the early days, but the first congregation was organized on November 16, 1867, when the Reverend Rudolphus Weston helped organize the First Baptist Church. “His efforts bore all the marks of a revival ... .” (Price-Anderson, p. 32)
The Meekers Arrive
Ezra Meeker, who had emigrated from Ohio in 1852 with his wife, Eliza, and other family members, had tried homesteading in Kalama, on McNeil Island, and at a soil-poor, mosquito-infested farm called Swamp Place, southeast of Tacoma. The Indian Wars drove Ezra and his extended family to Fort Steilacoom, along with the other settlers, but then the ever-resourceful family opened a promising mercantile business. In 1861, Ezra’s brother, Oliver (1828-1861), was sent to San Francisco to acquire an inventory. On the return voyage, the ship along with Oliver and the inventory was lost in a storm on January 5, 1861. The family was left with few resources.
The valley began to repopulate slowly and Ezra and his family returned in 1862, the same year a post office was established -- called “Franklin.” Meeker sold the old Swamp Place to Dr. Charles H. Spinning, a physician just appointed to serve the expanded Puyallup Indian Reservation. “Of the nearly 400 treaties negotiated with Indian tribes from 1778 to 1871 only about two dozen mentioned any kind of medical services.” (Puyallup Indian Health Authority website.) The Medicine Creek Treaty is one of these. Under Article 10, the United States agreed “to employ a physician to reside at the said central agency, who shall furnish medicine and advice to their sick, and shall vaccinate them ...” (Medicine Creek Treaty, Article 10).
Farming was subsistence-level until 1865, when Charles Wood, an Olympia brewer, imported hop roots from England. Brewers use hops as a preservative and to give beer its flavor. The Meekers obtained some of Wood’s hop roots, planted them, and an agribusiness was born. Hops will grow in almost any climate, but they thrived in the Puyallup Valley, producing more quality hops per acre than in other hop-growing areas around the world.
Hops Growers Grow Rich
In 1877, Meeker platted 20 acres of his farm to create a town. Some controversy remains over who named it “Puyallup” -- Meeker or A. S. Farquharson, a stave mill owner -- but Meeker was quoted as saying he “bore the onus” for giving the town its name (Price-Anderson, p. 46). Others quickly added properties, rivalries developed, and Puyallup grew rapidly.
By 1884, there were more than 100 hop growers in the valley and Ezra Meeker had more than 500 acres in the vines. But Meeker, ever the entrepreneur, built kilns to dry the hops, then formed a hop brokerage, and soon he, along with his wife, was traveling regularly to Britain as “The Hop King of the World” (Price-Anderson, p. 41). Meeker and his agents “scoured the world and eventually cornered the hop market” (Kolano, p. 61). Hops “brought into the State more than $20,000,000, and now gives employment to 15,000 people annually,” according to an 1891 article in The New York Times.
The nouveau-riche farmers built mansions, none more spectacular than Ezra Meeker’s 17-room Italianate Victorian showplace, its design and construction under the guidance of his wife, Eliza. It is now maintained by the Ezra Meeker Historical Society and was added to the National Register of Historic Places in 1971.
A few of Puyallup’s leading citizens incorporated the town in 1888, but two years later the Washington State Supreme Court declared that incorporation illegal. On August 16, 1890, the 1,500 citizens of Puyallup approved a new, legal incorporation and Ezra Meeker was elected mayor. Meeker cut a wide swath through the valley’s early history -- as entrepreneur, author, lobbyist, historic preservationist, public servant, civic and religious benefactor, and pioneer gadfly. He left an indelible legacy in the valley.
A Neat Little Place
In August 1888, an article in The Northwest Magazine described Puyallup as “a thrifty, neat little place, growing steadily, and looking forward to doubling or trebling its present population of five hundred when the rich loam soils of the valley are more extensively cleared.” The article noted that “Labor in the picking season would be dear in Puyallup were it not for the Indians, who come in great numbers from the reservations on Puget Sound and even ... British Columbia.” Many Indians, the region's first migrant workers, came from as far as British Columbia to pick the hops, and later berries. The Indians from Canada maintained dual Canadian/American citizenship.”
Then Chinese, no longer needed to build the West’s railroads, migrated to the valley to compete with Indians and others for picking jobs. According to historian Larry Kolano, “The Chinese remained unmolested until the Depression of 1893, when jobless whites decided to run them out of the valley” (Kolano, p. 62).
But all was not totally bucolic in the valley. Coal was being mined at the head of the valley, and the Northern Pacific Railway had laid track through Puyallup. Eighteen trains a day, including coal trains, other freight trains, and passenger trains passed through the town of Puyallup.
End of the Hop Era
The hop bonanza ended abruptly in 1892, when hop lice, an occasional scourge elsewhere in the world, invaded the valley’s fields, wiping out the industry and several fortunes, including Meeker’s. Berries had been introduced to the valley in the late 1870s and succeeded hops as the primary cash crop, followed later by flower bulbs. Meeker focused his attention on his mercantile business in Puyallup. When gold was discovered in Canada’s Klondike in 1896, he opened a store in Dawson City and filed a mining claim, but never found gold and went broke again. Meeker would go on to become a Pacific Northwest booster and the primary force behind memorializing the Oregon Trail. He donated property to the city for a park, and contributed anonymously to several churches, despite the fact that Prohibitionist preachers often vilified him for his contribution to alcohol consumption by raising hops.
A smallpox epidemic struck the Puget Sound region in 1891-1892, and when panicked Tacoma families tried to escape to the Puyallup Valley, they found that “A ‘shotgun quarantine’ was ordered by the town council. All Tacomans were to be kept out of the town ... . While Tacoma and Seattle suffered, Puyallup maintained its strict isolation and its health” (Kolano, p. 69).
The town fathers created a police department September 10, 1890, and a fire department on September 19, 1890. The fire department was formed two days after the Great Puyallup Fire, which destroyed much of the downtown.
Progress in Agriculture
On March 9, 1891, at the apex of the hop bonanza, the Washington State Legislature approved the Puyallup Agriculture Experiment Station, which was part of the new State Agricultural College of Washington in Pullman. Puyallup Valley farmers Darius M. Ross and his son, Charles, donated land for the station and it was built in 1984. What is now (2008) the WSU Puyallup Research and Extension Center has evolved into a 360-acre research institute that examines biological, environmental, and social issues far beyond the visions of its founders. Among its many successes is its widely emulated Master Gardener Program.
On October 4-6, 1900, a group of Puyallup Valley farmers and others cobbled together an agricultural and livestock exhibition to promote local products, calling it the “Valley Fair.” It evolved into the Puyallup Fair, now the major showcase for the Western Washington Fair Association, a nonprofit organization that operates a year-round hospitality and convention business. The fair, which attracts 1.6 million fairgoers each year, is the largest state fair in Washington, one of the largest in the country, and remains the anchor of the city’s visitor industry.
As the century turned, poultry and dairy farms appeared in the valley and sawmills and woodworking plants flourished, but berries -- blackberries, raspberries, strawberries, loganberries, and gooseberries -- remained the most lucrative cash crops. By 1912, the Puyallup and Sumner Fruitgrowers’ Association had 1,300 members and was considered the largest association of fruit growers in the world.
Berries remained a major crop, now mostly for jams, but around 1910 George Lawler introduced daffodils to the valley and they thrived. "By 1927, the valley was producing 23 million bulbs and by 1929, 60 million” (Price-Anderson, p. 92). Puyallup Valley bulb farmers sponsor their first Daffodil Parade on March 17, 1934, to promote their crop. The parade was a modest procession of automobiles and bicycles festooned with daffodils. The Daffodil Festival is now institutionalized, a year-round production managed by a nonprofit organization. There are four sequential parades in one day -- in Tacoma, Puyallup, Sumner, and Orting -- and the organization oversees other events through the year, including several more parades.
But another event was a precursor of the valley’s nonagricultural future. In 1912, the Pierce County Auto Company was formed to sell Ford automobiles. “From this and all the other automobile retail companies that sprang up in the early 1900s came the legacy of Puyallup’s claim to have the most and cheapest cars in the area” (Price-Anderson, p. 75). Indeed. As of 2007, the Puyallup area still was one of the state’s major retail auto sales centers, generating about 25 percent of the city’s sales-tax revenue, according to Ellie Chambers, Puyallup economic development officer.
One civic controversy that marked that period involved street names. Meeker had given streets the names of trees when he platted his town in 1877 -- Ash, Alder, Fir, Spruce, Maple, and on through the forest. But in 1911, the city council and mayor changed the street names to numbers in preparation for the coming of free mail delivery to Puyallup homes. Citizens, including Meeker and the influential Puyallup Women’s Club, objected and the issue simmered until 1914. Today, with few exceptions, the streets remain numbered.
The town prospered during the World War I years. In May 1919, University of Washington President Henry Suzzallo (1875-1933), speaking at the dedication of Puyallup’s new city hall-civic center, said: “If every community in the United States were in the robust condition of the Puyallup valley, the whole country would be in a splendid condition” (Price-Anderson, p. 89.)
War Years and After
Puyallup’s growth slowed during the 1930s Great Depression, but World War II brought full employment. Still, it left farmers without pickers and many in ruin. It also brought universal Bond drives, Victory Gardens, aluminum collections, gas rationing, air raid watchers -- and one somberly unique wartime experience. In February 1942, President Franklin D. Roosevelt (1882-1945) ordered 120,000 West Coast Japanese residents into internment camps. Those the Seattle area and from Alaska were sent to a hastily constructed staging area on the Puyallup Fairgrounds before being shipped to the Camp Minidoka relocation center in Idaho. It was called, somewhat incongruously, Camp Harmony. “Local students were stunned to discover their classmates behind the fence, unable to attend class or play ball with them” (Price-Anderson, p. 103). Some of the valley citizens felt compassion for their interned neighbors; others did not.
Puyallup’s growth had spurted during the war and the 1950 U.S. Census recorded 9,955 residents. And growth would continue as housing tracts crept across the valley farmlands. During the 1950s, competition from foreign and domestic sources was threatening the berry industry and farmers were trying crops such as rhubarb and Christmas trees. Puyallup itself was experiencing growing pains, with disputes arising over law enforcement, garbage collection, and civic construction. In 1951, in the midst of these squabbles, the City Council adopted a city manager form of government. The council appointed the city manager and mayor was selected from among the council.
Flooding had been a recurring, almost annual, problem throughout the Puyallup Valley’s history and it persisted despite channeling, diking, straightening, and dredging the Puyallup River. In 1948 Mud Mountain Dam was completed on the White River, a Puyallup tributary. At the time, it was the highest rock- and earth-filled dam in the world and, for the most part, it solved the problem.
Meanwhile, the valley’s farmers remained under siege on several fronts. The bulb industry was moving to Skagit County, and by 1974 farmland preservation was becoming a political issue, as it was in neighboring King and other Western Washington counties. A citizens group, Prime Land Action Needed (PLAN), tried to save Pierce County’s agricultural land, but farmer Wally Staatz told a Kiwanis Club meeting: “Let’s face it ... This is no longer an agricultural area” (Price-Anderson, p. 127). Pierce County voters were given an opportunity to vote on taxpayer support of farmland preservation in 1985, but this failed when not enough voters went to polls.
Puyallup had a brief flirtation with high-tech industry in the early 1980s, when Fairchild Semiconductor obtained a 92-acre property on South Hill. It employed 900 in 1985, but soon folded. The property, a civic “white elephant,” had been owned by a succession of high-tech companies but never was developed. It was sold in October 2007 by Arizona-based Microchip Technology to the Benaroya Company, a well-known Seattle developer of industrial parks, for about $30 million, far below Microchip’s asking price of $93 million.
By 2007, Puyallup’s sense of community was undergoing change. South Hill, with its 120-store mall surrounded by new developments, was developing an identity of its own. Downtown Puyallup, like many such downtowns, had been resurrected, with the focus on collectibles -- antique shops and boutiques. There were three mainstream high schools – Puyallup, Rogers, and Emerald Ridge – and their strong rivalries are a big part of the community bonding.
The Western Washington State Fair, the city’s iconic reminder of its agricultural past, generates ambivalence. The traffic mess “is considered a nuisance, but at the same time, there’s a great deal of pride for the community. ... People take off work, vacation time, so they can work at it,” said Heather Meier, editor of the Puyallup Herald. Some farms remain in the valley but agriculture remains “a big struggle,” she said. The county continues to attempt to preserve farmland, with “mixed reviews.”
Demographically, the city was about 88 percent white in 2007, almost 5 percent Hispanic, 3 percent Asian, and the remainder African American, Native American, Pacific Islander, or other. | <urn:uuid:6da37fb8-e7a7-441d-9584-c32b492eb286> | CC-MAIN-2017-51 | http://historylink.org/File/8447 | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522999.27/warc/CC-MAIN-20171213104259-20171213124259-00726.warc.gz | en | 0.964412 | 4,195 | 3.125 | 3 |
Thinking about aircraft of the past, some AEA members can remember the use of radium, a radioactive silvery element used to coat instrument faces to make them glow in the dark without external lighting.
In the area where I live, Luminous Dial was close by and was responsible for coating many of these instruments and indicators used in military and non-military aircraft. The company also turned into an EPA Superfund site years after it closed, leaving radioactive “hot spots” all over the area where the former factory was located.
The coating process included the incorporation of radium, a radioactive element emitting relatively harmless alpha particles, which can be shielded by a piece of paper, and dangerous gamma rays, which require more shielding, and typically were mixed with a phosphor. As the radium emitted its radiation, the phosphor glowed, giving the desired ability to read the instruments at night without lighting. Unfortunately, the hazards of being in proximity to radium weren’t understood, and while it proved a boon for night bombing and the war effort, its use has been largely abandoned for decades. However, this abandonment has not eliminated the plethora of radium-laced instruments existing in the field, both in flying condition and as paperweights on the desks of countless pilots and ex-crewmembers. While the phosphor has long since lost its glow, the radium is still decaying. With a half-life (the amount of time it takes an atom to decay to half its value) of 1,599 years, the radium will continue to provide a low dose of radiation exposure to anyone in proximity to one of these legacy devices. NRC Involvement, New Regulation The regulations governing aircraft are found largely within the FAA and FCC matrix, with most shops also falling under the provisions of 29CFR and OSHA regulations. Based on the discovery of a large cache of this legacy instrument in the warehouse of a California instrument firm, which subsequently was declared an EPA Superfund site for remediation, along with other issues associated with medical and accelerator products, the EPA compelled the Nuclear Regulatory Commission to develop regulations to control these devices. Your regulatory affairs staff at the AEA was involved and engaged in this rulemaking process. As a result of its efforts, the regulations are simple to comply with for a majority of shops. Shops working on such legacy instrument in the past should make sure they do not have an issue that could cause an increase in cancer risks for their employees, as well as new rules to incorporate into their shop manuals to ensure they stay within the new regulations. These new regulations went into effect Nov. 30, 2007. Here are the sections of the new regulation in 10CFR31.12, along with interpretations of what they mean:
a) A general license is hereby issued to any person to acquire, receive, possess, use or transfer with the provisions of paragraphs (b), (c) and (d) of this section, radium-226 contained in the following products manufactured prior to Nov. 30, 2007: 1 and 2) These sections apply to radium-filled antiquities and clock faces/hands, and are not included. 3) Luminous items installed in air, marine or land vehicles.
Section (a)(3) applies to products inside air, marine or land vehicles. There is no limit to the number of devices that can be installed in such vehicles.
4) All other luminous products, provided no more than 100 items are used or stored at the same location at any one time. Section (a)(4) provides the basis for a shop having instruments on hand. If a shop has instruments and/or other radium devices on hand in excess of 100 items, the shop must reduce the number of devices to less than 100 to stay within the limits for holding a general (no-cost) license. The AEA has worked with the NRC to ensure this is assessed at a particular location — such as if you have an East Coast branch and West Coast branch, each gets to have no more than 100 items, not 100 items between the two branches. From the perspective of the regulator, having 100 items in one building and 100 items in another building at the same address is considered having them at the same location and would be a violation requiring a license other than general. Two specific licenses are available for items in excess of 100 in the Schedule of Fees:
• 3.R.1 allows possession of items containing radium-226 identified in 10CFR31.12, which exceed the number of items or limits up to 10 times for a $590 application fee and a $2,100 annual fee. • 3.R.2 allows exceeding 10 times the number allowed in 10CFR31.12 for a $1,400 application fee and a $2,700 annual fee. Neither of these licenses allows your shop to perform any work on such instruments. b) Persons who acquire, receive, possess, use or transfer byproduct material under the general license issued in paragraph (a) of this section are exempt from the provisions of 10 CFR Parts 19, 20 and 21 and § 30.50 and 30.51 of this chapter, to the extent that the receipt, possession, use or transfer of byproduct material is within the terms of the general license; provided, however, that this exemption shall not be deemed to apply to any such person specifically licensed under this chapter. This is very important as it eliminates the reporting and red tape usually associated with the possession of such items. However, if your shop is licensed by the NRC to handle and work on such instruments, this provision does not apply to the licensed shop. c) Any person who acquires, receives, possesses, uses or transfers byproduct material in accordance with the general license in paragraph (a) of this section: 1) Shall notify the NRC should there be any indication of possible damage to the product so that it appears it could result in a loss of the radioactive material. A report containing a brief description of the event, and the remedial action taken, must be furnished to the Director of the Office of Federal and State Materials and Environmental Management Programs, U.S. Nuclear Regulatory Commission, Washington, D.C., 20555–0001 within 30 days. Contamination is the spread of radioactive material in an area where it is not desired. The vernacular used in nuclear power when an area is contaminated is “crapped up.” The NRC is concerned about the spread of contamination due to breached devices. Thus, the NRC is requiring a written report within 30 days of the discovery of any indication of damage where it could result in a loss of radioactive material. Because radium breaks down to radon, a cracked case or glass on an instrument could cause a loss of radioactive (gaseous) material resulting in airborne contamination, and it must be reported. The remedial action taken should be sufficient to contain any contamination, and it includes placing the instrument in a sealed container, such as a can or bucket with a sealed lid, properly immobilized to ensure further damage does not occur, and transferring the damaged equipment to a licensed facility for repair or a licensed facility for disposal. It is not acceptable to the NRC for you to repair such devices to eliminate the leak — this is specifically prohibited under the general license. You could transfer the device, appropriately immobilized to prevent the spread of contamination, to a licensed repair facility. 2) Shall not abandon products containing radium-226. The product and any radioactive material from the product may only be disposed of according to § 20.2008 of this chapter or by transfer to a person authorized by a specific license to receive the radium-226 in the product or as otherwise approved by the NRC. The NRC wants to ensure radium-226 instruments are not allowed into general waste streams or are walked away from. This regulation requires the product be disposed of at a licensed radioactive waste facility. 
Several years ago, a cobalt-filled radiation therapy machine was improperly abandoned in a garbage dump and resulted in the deaths of dozens of villagers in Mexico who had broke the machine open and rubbed the glowing metal on their body. 3) Shall not export products containing radium-226 except in accordance with Part 110 of this chapter. This is written clearly: It prohibits the export of any radium-226 instrument unless you are specifically licensed to export such instruments. This is a new restriction to prevent U.S. shops from exchanging such instruments with non-U.S. entities unless licensed. 4) Shall dispose of products containing radium-226 at a disposal facility authorized to dispose of radioactive material in accordance with any federal or state solid or hazardous waste law, including the Solid Waste Disposal Act, as authorized under the Energy Policy Act of 2005, by transfer to a person authorized to receive radium-226 by a specific license issued under Part 30 of this chapter, or equivalent regulations of an agreement state, or as otherwise approved by the NRC. The NRC wants to ensure radium-226 instruments are not allowed into general waste streams. This regulation requires the product be disposed of at a licensed radioactive waste facility. 5) Shall respond to written requests from the NRC to provide information relating to the general license within 30 calendar days of the date of the request, or other time specified in the request. If the general licensee cannot provide the requested information within the allotted time, it shall, within that same time period, request a longer period to supply the information by providing the Director of the Office of Federal and State Materials and Environmental Management Programs, by an appropriate method listed in § 30.6(a) of this chapter, a written justification for the request. The NRC likes reports. While not within the regulation, having experience with the NRC allows me to provide you with some insight regarding what a typical inspector is looking for. To ensure the NRC you have your inventory under control and, thus, have demonstrated your shop does not warrant time-consuming inspections, you’ll need to: • Know what you have on hand, which includes a description of each item and any model and serial numbers, allowing each item to be tracked. If a device does not have this information, the shop should create a tracking system and append the information. This can be as simple as “Radium-226 Artificial Horizon, Item No. 1” and so on, with each item uniquely tagged and identified, allowing it to be tracked. A picture or pictures tagged to the inventory number, along with a trackable storage location, will help demonstrate the proper control. • Keep a log of all transactions of such instrument, tracking each item as it enters and noting the condition of the item on receipt, as well as all sales of such items and the location to which it was shipped. • Account for everything moving in or out of inventory. • Perform an annual inventory and verify the items on hand match your inventory. If a discrepancy is identified, it needs to be investigated and resolved. If the discrepancy is unable to be resolved, it needs to be reported to the NRC and might result in a violation. • The NRC is accustomed to having license holders with procedures to control such material. Having a written procedure to define this process and following the procedure help to ensure the NRC you have the situation under control. 
While this section allows a shop more time to file a report, the written justification needs to be valid to the NRC. For example, “We currently are busy installing X on a deadline,” or “Our records are not in order,” generally are not accepted by the NRC and might result in a special inspection team arriving at your doorstep, the cost of which likely would be charged to your business. d) The general license in paragraph (a) of this section does not authorize the manufacture, assembly, disassembly, repair or import of products containing radium-226, except that timepieces may be disassembled and repaired. This means, unless you are licensed by the NRC at a cost of the application fee of $4,600 and an annual fee of $8,400, your shop cannot open or repair any radium-containing instrument. This includes the entire instrument, not just the faceplate, indicating needle or glass. This same section prohibits the import of any such instrument unless specifically licensed, which means your shop cannot receive such instruments from non-U.S. entities without being in violation of your general license. Regarding NRC fees, 10CFR171.16(c) does offer the ability for licensees to qualify as a “small entity,” with resulting lower fees. This requires the applying firm to file the appropriate certification Form 526, which can be accessed from the NRC website at www.nrc.gov, along with the payment of each annual fee. For cases in which your business is not involved in manufacturing and has sales of between $350,000 and $6.5 million a year, your annual license fee to work on the instrument could be as low as $2,300. If your revenue is less than $350,000, the annual license fee would be $500. Similarly, for manufacturers or educational facilities with an average of 500 employees or less, the annual license fee would be $2,300. For manufacturers or educational with less than 35 employees the license fee would be $500. Form 226 must be filed each year for the renewal of these licenses to maintain these lower rates. Failure to file the necessary Form 226 can result in your shop being assessed the full amount in the regulation by the NRC. While timepieces are allowed to be disassembled and repaired, unless your shop is skilled, equipped, and has the correct procedures and work practices in place to handle contaminated equipment, our advice is to not take such actions — the potential liability for spreading contamination in your shop and potentially allowing its ingestion by your staff is too great. Categorizing Your Existing Inventory If you are like many shops, you might have accumulated one to hundreds of these devices. Of particular concern are associations with collections, such as the EAA or local museums, which fall under this regulation but might not recognize it. In any event, you need to figure out which of these instruments potentially contain radium-226 and which ones do not, and thus are not included in this regulation. The following screening approach can be used to streamline the process: • If an instrument was produced after 1980, it has a high probability of being free of radium-226. Segregate these instruments from other instruments that might contain radium-226 to maintain proper control. • If an instrument was produced between 1960 and 1980, it is suspect. 
Unless manuals are available clearly describing the materials used, a professional radiation technician should be contracted to survey the instruments and tag any suspect items or those confirmed as containing radioactive material for inventory purposes. Similarly, any instrument that can screen out as clean can be moved to the instruments in the clean category for general distribution. • Instruments produced prior to 1960 have the same actions as those produced between 1960 and 1980. However, because they were produced in a time period during which the use of radium-226 was accepted, segregate them and label them as potentially containing radium-226 until confirmed otherwise. Because radium-226 is a gamma emitter, it will clearly show through the instrument glass and casing if tested by a qualified radiation technician using a Geiger-Mueller meter. Skilled personnel are available on a contract basis from a variety of firms serving the existing nuclear power industry; they also can provide you with someone who is skilled and knowledgeable regarding radiation and contamination concerns. Other Things to Look For One of the biggest risks in a shop is what is sitting on the back shelves in your storeroom. If your shop existed prior to the 1960s or has accepted any military surplus, you could have a container of radium-226-laced paint on your shelves — although it is unlikely having such a container would exceed the exemption criteria for the general license by several orders of magnitude. A review of your inventory is warranted. If you find unmarked suspect containers, rather than opening them, contact a contract radiation protection technician to survey the containers. If you are unable to locate such a technician, try this simple test: Purchase a roll of 12-exposure, 400-speed film for each suspect can, then tape a roll to each can. Label each roll and each can so they can be tracked. Separate any cans in this test from other suspect cans by 10 feet, as distance is an effective shield. Leave the roll of film in place for at least a week, then get the film developed. If the film comes back anything but black (gamma rays will cause white speckles or will bleach the film white in cases where a significant radiation source is present), you have found a problem. If such material is found, it needs to be isolated from all personnel immediately, roped off and labeled as “Radioactive Material.” A contract radiation technician should be brought in and the material surveyed to ensure you properly label the storage area as required by regulations and to assist in preparing the material for shipment for disposal. Once categorized by a radiation protection professional, you likely will need to promptly report the problem to the NRC. The best suggestion for disposition would be to contact a licensed special waste hauler and pay the necessary fees to have the material documented and disposed of in a NRC-licensed facility. Past Work on Legacy Instruments If your shop has worked on this type of legacy instrument in the past, you should have a contract technician perform a radiological survey of your shop workspaces to ensure your shop does not have small piles of radium-226 contamination creating dangerous “point sources” of exposure for your employees. If such spots are found, a qualified radiation technician can clean them up properly, and they must be disposed of in a NRC-licensed disposal facility. Consider changes that have been made to your shop when you request a survey. 
You might have added a building or moved the workspace. Check both the old and the new areas to ensure you have accurately assessed the potential threat. Vacuum cleaners and air handling units can concentrate airborne materials into a point source. If you have used such ventilation systems or clean-up systems, they need to be surveyed as well to ensure they are not contaminated. The AEA has worked hard to ensure the new regulation is as friendly to the average avionics shop as is reasonably possible. The savvy shop manager should take action promptly to understand the scope of the issue, engage professional assistance, and get in compliance with the new regulation. Remember, 10CFR31.12 isn’t just a good idea — it’s the law. The NRC does impose civil penalties and personal penalties upon entities not in compliance with NRC regulations. When you speak to or correspond with NRC personnel, you need to be honest and truthful; if you are not, you risk running afoul of 10CFR50.9, “Completeness and Accuracy of Information,” which can incur from the above penalties. About the author: In addition to a background in electronics and writing for various aviation magazines, George Wilhelmsen is the engineering rapid response manager at Exelon’s LaSalle generating station. He has 25 years experience in the nuclear power industry and working in radiologically controlled areas, as well as experience reading and understanding NRC regulations. If you have comments or questions about this article, send e-mails to firstname.lastname@example.org.
Reprinted with permission from the February 2008 issue of Avionics News Magazine.
RADIUM DIAL NEWS
The United States Nuclear Regulatory Commission has published revisions to existing regulations related to the aircraft instrument industry. Although most instrument repair facilities do not accept such materials, we direct you to the following link to these regulations so that you will be aware of these regulatory changes.
The regulation that is applicable to certain instrument repair facilities that intend to or presently engage in the receipt and/or service of "antiquities" originally intended for use by the general public that contain radium-226 manufactured prior to November 30, 2007 is:
Special attention should be paid to Paragraph d of Part 31.12 which states:
(d) The general license in paragraph (a) of this section does not authorize the manufacture, assembly, disassembly, repair, or import of products containing radium-226, except that timepieces may be disassembled and repaired.
More Radium Dial News can be found in this month's AEA "Avionics News Magazine."
All material contained within AIA.net, unless otherwise stated, is the property of the Aviation Instrument Association. Copyright and other intellectual property laws protect these materials. Reproduction or retransmission of the materials, in whole or in part, in any manner requires written permission from the Aviation Instrument Association. The Aviation Instrument Association is registered under Section 501(c)(6) of the Internal Revenue Code and is not organized for profit.
Hypervitaminosis B: Side Effects Of Too Much Vitamin B
Hypervitaminosis B is a condition caused by taking in too much vitamin B, especially in supplement form. Symptoms of too much vitamin B may include nerve toxicity, jaundice and liver toxicity, nausea, and digestive issues.
Vitamin B complex supplements comprise a combination of all eight B vitamins, which include:
- Vitamin B1 (thiamine)
- Vitamin B2 (riboflavin)
- Vitamin B3 (niacin)
- Vitamin B5 (pantothenic acid)
- Vitamin B6 (pyridoxine)
- Vitamin B7 (biotin)
- Vitamin B9 (folate)
- Vitamin B12 (cobalamin)
Since B vitamins are water soluble, excess amounts will not build up in the body. Instead, they will be excreted through the urine.
That being said, taking exceptionally high dosages of these B complex vitamins can lead to a variety of health risks and side effects.
This article will detail the major side effects of too much vitamin B, including lung cancer in men and the effect of too much vitamin B6 while pregnant, as well as the specific health risks associated with too much vitamin B1, vitamin B2, vitamin B3, vitamin B5, vitamin B6, vitamin B9, and vitamin B12.
What Causes Extremely High B12 Levels
Recent ingestion or injection of supplemental vitamin B12 is the most common cause of high B12 in the blood. Overdosing on B12 supplements is generally not a concern because the excess can be excreted in your urine. High levels could also come from your diet if you consume a lot of animal products like meat, eggs, and shellfish.
Recommended Use, Dosage, And Possible Interactions
Folic acid is included in most multivitamins, prenatal supplements, and B complex vitamins, but it's also sold as a standalone supplement. In certain countries, some foods are also fortified with the vitamin.
Folic acid supplements are typically recommended to prevent or treat low blood folate levels. Moreover, those who are pregnant or are planning to become pregnant often take them to reduce the risk of birth defects.
The Recommended Dietary Allowance for folate is 400 mcg for those over 14. People who are pregnant and breastfeeding should get 600 and 500 mcg, respectively. Supplement doses typically range from 400 to 800 mcg.
You can purchase folic acid supplements without a prescription. They're generally considered safe when taken in normal doses.
That said, they can interact with some prescription medications, including some that are used to treat seizures, rheumatoid arthritis, and parasitic infections. Thus, if you're taking other medications, it's best to consult a health professional before using folic acid supplements.
Folic acid supplements are used to reduce the risk of birth defects and prevent or treat folate deficiency. They're generally considered safe if taken in recommended amounts but may interact with some prescription drugs.
Top 7 Foods Rich In Vitamin B12
1. Beef Liver
Liver is very rich in vitamin B12. In fact, only one ounce of beef liver meets your daily recommended intake of the vitamin. It also helps with anemia as it is full of folate and iron, too. You should look for liver from pasture-raised or grass-fed cows, as the quality will be better.
2. Sardines

Sardines are high in both vitamin B12 and omega-3 fatty acids. This combination of nutrients helps with asthma, supports heart health, and fights inflammation. Eating sardines has been associated with several health benefits.
3. Atlantic Mackerel
The Atlantic mackerel is full of B12 and omega-3 fatty acids, too. Not to be confused with the king mackerel, this fish is low in sodium. It has been ranked as one of the top fish for health.
4. Lamb

While not as popular in the U.S. as in other countries, lamb is a great addition to your diet. Since side effects from too much vitamin B12 are essentially nonexistent, you can add this highly nutritious meat without fearing a vitamin overload. Lamb is also full of iron, protein, zinc, and selenium, which are effective in boosting your immunity.
5. Wild-Caught Salmon
6. Feta Cheese
Feta cheese is full of vitamin B12, riboflavin, and calcium. It is traditionally made from sheep's milk but can be made from a combination of sheep's and goat's milk as well. Because of its high riboflavin level, it has been found helpful in the prevention of migraines. The best source is feta made from raw sheep's milk.
Identifying Vitamin B Deficiency
B vitamin deficiency isn't necessarily easy to spot. That's because there are many different B vitamins, and a deficiency in one might look different from a deficiency in another. Each B vitamin does something different from its neighbors, so a shortage of one might cause fatigue while a shortage of another causes skin rashes.
Let's take a brief look at the different symptoms of vitamin B deficiency.
- Vitamin B12 deficiency can cause anemia, confusion, fatigue, weakness, and depression.
- Vitamin B6 deficiency can cause anemia, confusion, depression, nausea, and rashes.
- Very few people lack B1 and B2 because they're so common. However, a deficiency can cause confusion too.
- Vitamin B3 deficiency can cause digestive issues, nausea, and abdominal cramps.
- Vitamin B9 deficiency can cause anemia and diarrhea.
That's an incredible range of problems. Unfortunately, diagnosing a lack of B vitamins isn't as easy as diagnosing other physical conditions. If you have measles, for example, there are easy-to-spot symptoms that make the condition obvious. But a vitamin or mineral deficiency is much harder to spot because the symptoms are more generic. If you're in doubt, talk with a healthcare professional to find out more.
When Should You See A Doctor
If you notice any adverse effects after you start taking vitamin B-12 supplements, you should discontinue use immediately. Seek medical attention if your symptoms get worse or are severe.
You can consult your doctor to establish an appropriate dose if it's determined that you don't get enough B-12 from food sources.
Other Symptoms Of Vitamin B12 Deficiency
More general symptoms of vitamin B12 deficiency are listed by the NHS:
- Extreme tiredness
- Hearing sounds coming from inside the body, rather than from an outside source
- Loss of appetite and weight loss
Bupa adds: If you have vitamin B12-deficiency anaemia, you may also look pale or jaundiced.
As well as the symptoms of anaemia, vitamin B12-deficiency may cause symptoms related to your nerves. This is called vitamin B12 neuropathy. It may affect your movement and sensation, especially in your legs, cause numbness or pins and needles and decrease your sensitivity to touch, vibration or pain. It can also cause confusion, depression, poor concentration and forgetfulness.
Side Effects Of Vitamin B3 Overdose
When you get vitamin B3 from food, an overdose is unlikely to occur. But, vitamin B3 supplementation is a different story. This can lead to possible side effects of too much vitamin B3, such as blurry vision, disorientation, nausea, vomiting, abdominal pain, bloating, nervousness, and headaches.
It is also possible to experience a vitamin B3 side effect called a niacin flush, which produces symptoms like burning, itching, a tingling sensation in the chest and face, and severe skin flushing with dizziness.
Liver damage, jaundice, and stomach ulcers may also occur from very high vitamin B3 doses.
Also, avoid taking too much vitamin B3 if you have gout, ulcers, gallbladder disease, liver disease, or diabetes, have had a recent heart attack, or are pregnant. If you take vitamin B3 to lower cholesterol, be sure to do so under a doctor's supervision.
Medication interactions associated with vitamin B3 include atorvastatin, benztropine, carbidopa, levodopa, cerivastatin, fluvastatin, gemfibrozil, glimepiride, isoniazid, lovastatin, minocycline, oral contraceptives, pravastatin, repaglinide, rosuvastatin, simvastatin, tetracycline, thioridazine, and tricyclic antidepressants.
How much vitamin B3 should you take? According to the Institute of Medicine at the National Academy of Sciences, the following is the recommended dietary allowance for vitamin B3 daily:
- Males 14-plus years: 16 mg
- Females 14-plus years: 14 mg
- Pregnant females: 18 mg
- Breastfeeding females: 17 mg
You can also get vitamin B3 from food. Foods high in vitamin B3 include:
- winter squash
Side Effects Of Vitamin B2 Overdose
Taking too much vitamin B2 is also rare, but there are side effects of a riboflavin overdose to consider.
Side effects of too much vitamin B2 include increased urine frequency, diarrhea, and allergic reactions like hives, difficulty breathing, and swelling of the tongue or face. Vitamin B2 will also cause the harmless side effect of turning your urine a yellow-orange color.
Medication interactions associated with vitamin B2 include zidovudine , didanosine, doxorubicin, oral contraceptives, tetracycline, and tricyclic antidepressants.
How much vitamin B2 should you take? According to the Institute of Medicine at the National Academy of Sciences, the following is the recommended dietary allowance for vitamin B2 daily:
- Males 14-plus years: 1.3 mg
- Females 14 to 18 years: 1.0 mg
- Females 19-plus years: 1.1 mg
- Pregnant females: 1.4 mg
You can also get vitamin B2 from food. Whole foods high in vitamin B2 include:
- collard greens
Seize The Night With Melatonin
While melatonin has only recently been making waves in the press, it's been making waves in the brain for millennia.
Melatonin is a naturally occurring hormone that aids in the regulation of the body's circadian rhythm. Studies suggest that taking melatonin supplements can really help you get better, more restful sleep.
Alternative Causes Of Insomnia And Sleeplessness
Almost anything can cause a lack of sleep. If you're not sure what exactly the problem is, scroll through our list and tick off each cause. You might be surprised at what's causing your trouble sleeping.
Environmental causes are an obvious culprit. Noise, light, or other disturbances may wake you up completely, but they may also only disturb your sleep somewhat: not enough to wake you up, but enough to rouse you from a deep sleep. This can cause tiredness later in the day.
A whole host of mental conditions, including stress, anxiety, nerves, and depression, can cause a lack of sleep. In turn, a lack of sleep can make these conditions worse, creating a vicious cycle. Try relaxing techniques like meditation before bed to see if they can help.
Physical illness can cause discomfort and trouble sleeping. Obesity and diabetes can make you too hot at night, or cause cramp and pain. Any condition that causes pain can keep you awake at night, from multiple sclerosis to the flu.
Lifestyle choices may also play a part. Excess caffeine, nicotine, or alcohol can disrupt sleeping patterns, as can being overweight or obese. Having a job that necessitates working in the evening, sleeping late or going to bed late, and eating at night can also be at fault.
Delayed Sleep Phase Disorder
Tingling Sensation Or Numbness
Can Too Much Vitamin B12 Cause Side Effects
Vitamin B12 is water-soluble and is released in your urine if you've consumed too much. Because of this, it is generally well tolerated and rarely reaches toxic levels through diet. It is very hard to overdose on vitamin B12, but you can put a moderate strain on your kidneys if excess consumption occurs too often. Side effects from overconsumption of vitamin B12 are rare.
Watch Out for Medicine Interactions
Consuming extra vitamin B12 is generally safe, but keep in mind it may interact negatively with other medications. The most common effect of these medications is that they do not allow your body to effectively absorb the amount of the vitamin you need. One such medication is the antibiotic chloramphenicol. Other types of drugs include those designed to relieve acid reflux and control type 2 diabetes. If you need to take one of these medications, consult with your doctor, as you may need to take a vitamin B12 supplement.
Does Vitamin B12 Deficiency Cause Insomnia
Vitamin B12 deficiency is common in vegetarians and older people.
It is responsible for a variety of neurological health problems.
While it's not overwhelming, there is some evidence that vitamin B12 deficiency may indirectly cause insomnia in some people. Other B vitamins like niacin are also important for sleep.
We'll review that research quickly in this post.
Can Too Many B12 Pills Make It Difficult To Sleep
Insomnia. It is possible to worsen insomnia through an overdose of vitamin B. The B vitamins can interfere with the normal sleep cycle when they are consumed in excess of the normal amount. Vitamin B in the blood, especially vitamin B12, acts as an energy booster when taken in high doses.
How Much Vitamin B12 Is Too Much
Vitamin B12 is a water-soluble nutrient that plays many critical roles in your body.
Some people think that taking high doses of B12 rather than the recommended intake is best for their health.
This practice has led many to wonder how much of this vitamin is too much.
This article examines the health benefits, as well as potential risks of taking megadoses of B12.
There's no question that vitamin B12 is essential for health.
This nutrient is responsible for numerous functions in your body, including red blood cell formation, energy production, DNA formation and nerve maintenance.
Though B12 is found in many foods, such as meat, poultry, seafood, eggs, dairy products and fortified cereals, many people don't get enough of this important vitamin.
Health conditions such as inflammatory bowel disease, certain medications, genetic mutations, age and dietary restrictions can all contribute to an increased need for B12.
Vitamin B12 deficiency can lead to serious complications such as nerve damage, anemia and fatigue, which is why those at risk should add a high-quality B12 supplement to their diet.
While people who consume adequate amounts of B12-rich foods and are able to properly absorb and utilize this nutrient don't necessarily need to supplement, taking extra B12 has been linked to some health benefits.
For example, studies show that supplemental B12 may benefit people without a deficiency in the following ways:
In Many Cases Diet Alone Doesn't Give Us All The Nutrients We Need
We know that diet and sleep are deeply connected. But the truth is, we don't know nearly enough yet about how individual nutrients impact our sleep. Here, I look at five vitamins that appear to play a role in how much sleep we get and how restful and high-quality that sleep is.
As you'll see, several of these vitamins may affect our risk of sleep disorders, including insomnia and sleep apnea. And at least two of them appear to play a role in regulating our circadian rhythms, the 24-hour biorhythms that control our sleep-wake cycles.
I'm a big believer in leveraging a healthy diet to improve sleep. Often, diet alone doesn't give us all the nutrients we need. Supplements can play an important role in filling those gaps.
But before you run out and add the vitamins below to your supplement list, I encourage you to do two things. Look for ways to improve your vitamin intake through your diet. And talk to your doctor. Getting the dosing, and the timing, of supplement intake right is critical to success when it comes to sleep.
Always consult your doctor before you begin taking a supplement or make any changes to your existing medication and supplement routine. This is not medical advice, but it is information you can use as a conversation-starter with your physician at your next appointment.
For memory protection: Similar to vitamin E, vitamin C has been shown to offer protection for the brain against the memory loss associated with sleep deprivation.
Restless Leg Syndrome And Vitamin B12
As the name of the condition suggests, Restless Leg Syndrome is a condition where your legs can't stay still at certain times.
RLS can of course make it harder to sleep and play a role in developing insomnia.
While RLS is usually caused by iron deficiency, there's some evidence linking vitamin B12 deficiency and RLS.
Summary: It's not a particularly likely scenario, but there appears to be some small chance of a vitamin B12 deficiency triggering RLS and leading to sleep trouble.
Side Effects Of Vitamin B9 Overdose
Folic acid is thought to be very dangerous when taken in high doses.
In normal circumstances, folic acid ensures a healthy pregnancy, supports nerve health, and protects against depression, cancer, and dementia. However, in high dosages, folic acid can cause great damage to your central nervous system.
Folic acid supplements can interact with certain medications like methotrexate and drugs that treat cancer and autoimmune diseases. Taking folic acid with antiepileptic drugs used for psychiatric diseases or epilepsy may reduce the serum levels of those drugs.
Folic acid also may reduce the effectiveness of the ulcerative colitis treatment sulfasalazine, and interact with zinc and vitamin B6.
How much folate should you take? According to the Institute of Medicine at the National Academy of Sciences, the following is the recommended dietary allowance for folate daily:
- Males and females 14 and over: 400 micrograms
- Pregnant females: 600 mcg
Foods high in folate include:
- calf's liver
- bell peppers
How To Tell If Vitamins Are Causing A Lack Of Sleep
There are many steps you can take to make sure B vitamins enhance your sleep rather than interfere with it. These will help you figure out whether it's B vitamins or another factor that's causing your lack of sleep.
Go through each step one by one:
- Start taking your B vitamins earlier in the day, rather than later at night. Since they interfere with your sleep schedule, taking them late at night can make your sleep less restful. The problem may be as simple as that.
- Cut down on your intake, for instance by only taking half a tablet at a time. It may be that youre taking too much without knowing, or that your body processes B vitamins slower than normal.
- Stop taking B vitamins for a brief period. Take note of how well you sleep over the course of a week, perhaps in a sleep diary. Identify any improvements, i.e., longer sleep, or more restful sleep.
- Cut out other supplements to see if it was something other than your B vitamin complex tablet. As mentioned in the study above, multivitamins can cause restless sleep, not just B vitamins.
- Talk with your doctor if you still can't figure out the cause. It may be a completely unrelated condition that's making your sleep less restful. Your lack of sleep may be a sign of something more serious.
If your B vitamins were causing your poor sleep, these five steps should be enough to establish the fact. Before you see a doctor, though, it might help you to look at our list of other causes of insomnia below.
The Commonwealth of The Bahamas is a constitutional, parliamentary democracy. Prime Minister Hubert Minnis’s Free National Movement won control of the government in May 2017 elections that international observers found free and fair.
Civilian authorities maintained effective control over the security forces.
Human rights issues included violence by guards against prisoners and harsh prison conditions. Libel was criminalized, although it was not enforced during the year.
The government took action in some cases against police officers, prison officials, and other officials accused of abuse of power and corruption.
Section 1. Respect for the Integrity of the Person, Including Freedom from:
a. Arbitrary Deprivation of Life and Other Unlawful or Politically Motivated Killings
There were no reports that the government or its agents committed arbitrary or unlawful killings. The Ministry of National Security reported two fatalities in police operations during the year; in each case the government reported the suspect was armed. Twelve police shootings were pending before the Coroner’s Court.
b. Disappearance

There were no reports of disappearances by or on behalf of government authorities.
c. Torture and Other Cruel, Inhuman, or Degrading Treatment or Punishment
The constitution prohibits torture and cruel, inhuman, or degrading treatment or punishment. At times citizens and visitors alleged instances of cruel or degrading treatment of criminal suspects or of migrants by police or immigration officials. In June a man alleged The Bahamas Department of Corrections (BDOC) officers beat him and denied him medical treatment. BDOC officials charged a prison officer with “using unnecessary force.” He was awaiting the decision of a disciplinary tribunal.
Foreign male prisoners frequently reported threats and targeting by prison guards at the BDOC. For example, in September a prisoner reported that BDOC officials touched him in a sexually inappropriate manner on the shoulders and chest. The government moved the individual to a different wing of the prison while awaiting the results of an internal investigation.
Prison and Detention Center Conditions
Conditions at Fox Hill, the government’s only prison, failed to meet international standards in some areas and were harsh due to overcrowding, poor nutrition, and inadequate sanitation and ventilation.
Physical Conditions: Overcrowding, poor sanitation, and inadequate access to medical care and drinking water remained problems in the men's maximum-security block. In September the Ministry of National Security reported the prison held 1,778 inmates in spaces designed to accommodate 1,000. Juvenile pretrial detainees were held with adults at the Fox Hill remand center. Prison conditions varied for men and women.
The government stated inmates consistently received three meals a day, but some inmates and nongovernmental organizations (NGOs) reported inmates received only two meals per day, with a meal sometimes consisting only of bread and tea. Fresh fruit and vegetables were rare to nonexistent. Prisoners also reported infrequent access to drinking water and inability to save potable water due to lack of storage containers for the prisoners. Many cells also lacked running water, and in those cells, inmates removed human waste by bucket. Sanitation was a general problem, with cells infested with rats, maggots, and insects. Ventilation was also a general problem. Prisoners in maximum security had access to sanitary facilities only one hour a day and used slop buckets as toilets.
Prison inmates complained about the lack of beds and bedding. As a result, inmates developed bedsores from lying on the bare ground. The availability of prescribed pharmaceuticals and access to physician care were sporadic.
There was inadequate access to the men’s second floor medical center for sick inmates or inmates with disabilities. Inmates reportedly used a wheelbarrow to transport inmates unable to walk to the clinic.
Administration: An independent authority does not exist to investigate credible allegations of inhuman conditions. Migrant detainees did not have access to an ombudsman or other means of submitting uncensored complaints, except through their nation’s embassy or consulate.
Independent Monitoring: The Office of the UN High Commissioner for Refugees (UNHCR) reported it was regularly able to visit the primary detention centers and the “safe-house” for women and children to speak with detainees held there, including asylum seekers and refugees. UNHCR had not conducted a formal monitoring visit at either facility since 2016; UNHCR primarily visited to identify potential persons of concern. Human rights organizations complained the government did not consistently grant requests by independent human rights observers for access to the BDOC facility, the Carmichael Road Detention Center, and the two juvenile centers. The government maintained additional bureaucratic requirements for some civil society organizations to gain access to the detention center, making it difficult to visit detainees on a regular basis.
Improvements: The Carmichael Road Detention Center installed new integrated computer modules to enhance detainee management as part of the government’s 30 million dollar modernization of the Department of Immigration. It also acquired additional industrial washers during the year for cleaning prisoner bedding and clothing.
d. Arbitrary Arrest or Detention
The constitution prohibits arbitrary arrest and detention, and the government generally observed these prohibitions, with the exception of immigration raids. The constitution provides for the right of any person to challenge the lawfulness of his/her arrest or detention in court, although this process sometimes took several years.
One man claimed the BDOC unlawfully detained him for 33 days after he received a certificate of discharge. Numerous Haitian migrants reported being detained by immigration officials and solicited for bribes of 3,000 Bahamian dollars (B$) (one Bahamian dollar is equal in value to one U.S. dollar) to gain release from the detention center.
Government officials sometimes held migrant detainees who presented a security risk at the BDOC facility.
ROLE OF THE POLICE AND SECURITY APPARATUS
The Royal Bahamas Police Force (RBPF) maintains internal security. The small Royal Bahamas Defense Force is primarily responsible for external security but also provides security at the Carmichael Road Detention Center and performs some domestic security functions, such as guarding foreign embassies. The Ministry of National Security oversees both the RBPF and defense force. The defense force augments the RBPF in administrative and support roles.
Civilian authorities maintained effective control over the RBPF and defense forces and the Department of Immigration. Authorities automatically placed under investigation police officers involved in shooting or killing a suspect. Police investigated all cases of police shootings and deaths in police custody and referred them to a coroner’s court for further evaluation. The RBPF published the results of completed investigations. The Police Complaints and Corruption Branch, which reports directly to the deputy commissioner, is responsible for investigating allegations of police brutality or other abuse.
In addition to the Complaints and Corruption Branch, the independent Police Complaints Inspectorate Office typically investigated complaints against police, but it had not met since September 2017.
From January to November, 143 complaints were lodged with the Complaints and Corruption Branch; the most common complaints, in descending order, involved unethical behavior, receiving a bribe, stealing, stolen property, damage, unlawful arrest, causing harm, and extortion. The RBPF received and reportedly resolved these complaints through its Complaints and Corruption Branch, but the responses to those complaints were made public only upon completion of an investigation. The RBPF took action against police misconduct, consistently firing officers for criminal behavior.
ARREST PROCEDURES AND TREATMENT OF DETAINEES
Authorities generally conducted arrests openly and, when required, obtained judicially issued warrants. Serious cases, including suspected narcotics or firearms offenses, do not require warrants where probable cause exists. The law provides that authorities must charge a suspect within 48 hours of arrest. Arrested persons must appear before a magistrate within 48 hours (or by the next business day for cases arising on weekends and holidays) to hear the charges against them, although some persons on remand claimed they were not brought before a magistrate within the 48-hour period. Police may apply for a 48-hour extension upon simple request to the court and for longer extensions with sufficient showing of need. The government generally respected the right to a judicial determination of the legality of arrests. The constitution provides the right for those arrested or detained to retain an attorney at their own expense; volunteer legal aides were sometimes available. Access to legal representation was inconsistent, including for detainees at the detention center. Minors younger than 18 receive legal assistance only when charged under offenses before the upper courts; otherwise, there is no official representation of minors before the courts.
A functioning bail system exists. Individuals who could not post bail were held on remand until they faced trial. Judges sometimes authorized cash bail for foreigners arrested on minor charges; however, foreign suspects generally preferred to plead guilty and pay a fine.
Pretrial Detention: Attorneys and other prisoner advocates continued to complain of excessive pretrial detention due to the failure of the criminal justice system to try even the most serious cases in a timely manner. The constitution provides that authorities may hold suspects in pretrial detention for a “reasonable period of time,” which was interpreted as two years. Authorities used an electronic ankle-bracelet surveillance system in which they released selected suspects awaiting trial with an ankle bracelet on the understanding the person would adhere to strict and person-specific guidelines defining allowable movement within the country.
Authorities detained irregular migrants, primarily Haitians, while arranging for them to leave the country or until they obtained legal status. The average length of detention varied significantly by nationality, willingness of governments to accept their nationals back in a timely manner, and availability of funds to pay for repatriation. Authorities usually repatriated Haitians within one to two weeks. In a 2014 agreement between the governments of The Bahamas and Haiti, the government of Haiti agreed to accept the return of its nationals without undue delay, and both governments agreed that Haitian migrants found on vessels illegally in Bahamian territorial waters would be subject to immediate repatriation. In return the Bahamian government agreed to continue reviewing the status of Haitian nationals with no legal status and without criminal records who either had arrived in The Bahamas before 1985 or had resided continuously in The Bahamas since that time. During the year the government began dispatching magistrates to the southern islands to adjudicate cases of interdicted irregular migrants, a change implemented to provide further due process.
The government continued to enforce the 2014 immigration policy that clarified requirements for noncitizens to carry the passport of their nationality and proof of legal status in the country. Some international organizations alleged that enforcement focused primarily on individuals of Haitian origin, that rights of children were not respected, and that expedited deportations did not allow time for due process. There were also widespread, credible reports that immigration officials physically abused persons who were being detained and that officials solicited and accepted bribes to prevent detention or secure release.
Activists for the Haitian community acknowledged that alleged victims filed few formal complaints with government authorities, which they attributed to a widespread perception of impunity for police and immigration authorities and fear of reprisal among minority communities. The government denied these allegations and publicly committed to carry out immigration operations with due respect for internationally accepted human rights standards.
e. Denial of Fair Public Trial
Although the constitution provides for an independent judiciary, sitting judges are not granted tenure, and some law professionals asserted that judges were incapable of rendering completely independent decisions due to lack of job security. Procedural shortcomings and trial delays were problems. The courts were unable to keep pace with the rise in criminal cases, and there was a growing backlog.
Defendants enjoy the right to a presumption of innocence until proven guilty, to be informed promptly and in detail of the charges, to a fair and free public trial without undue delay, to be present at their trial, to have adequate time and facilities to prepare a defense, to receive free assistance of an interpreter, and to present their own witnesses and evidence. Although defendants generally have the right to confront adverse witnesses, in some cases the law allows witnesses to testify anonymously against accused perpetrators in order to protect themselves from intimidation or retribution. Authorities frequently dismissed serious charges because witnesses either refused to testify or could not be located. Defendants also have a right not to be compelled to testify or confess guilt and to appeal.
Defendants may hire an attorney of their choice. The government provided legal representation only to destitute suspects charged with capital crimes, leaving large numbers of defendants without adequate legal representation. Lack of representation contributed to excessive pretrial detention, as some accused lacked the means to advance their cases toward trial.
Numerous juvenile offenders appear in court with an individual who is court-appointed to protect the juvenile’s interests (guardian ad litem). A conflict arises when the magistrate requests “information” about a child’s background and requests that the same social worker prepare a probation report. The Department of Social Services prepares the report, which includes a recommendation on the eventual sentence for the child. In essence the government-assigned social worker tasked with safeguarding the welfare of the child is the same individual tasked with recommending an appropriate punishment for the child.
A significant backlog of cases was awaiting trial. Delays reportedly lasted years, although the government increased the number of criminal courts and continued working to clear the backlog. Once cases went to trial, they were often further delayed due to poor case and court management, such as inaccurate handling or presentation of evidence and inaccurate scheduling of witnesses, jury members, and accused persons for testimony. Shaquille "Kellie" Rashad Demeritte Kelly was killed in 2013, and despite national coverage of the killing and a government commitment to bring the perpetrators to justice, the trial dates were continually postponed.
Local legal professionals also attributed delays to a variety of longstanding systemic problems, such as slow and limited police investigations, insufficient forensic capacity, lengthy legal procedures, and staff shortages in the Prosecutor’s Office and the courts.
POLITICAL PRISONERS AND DETAINEES
There were no reports of political prisoners or detainees.
CIVIL JUDICIAL PROCEDURES AND REMEDIES
There is an independent and impartial judiciary in civil matters, and there is access to a court to bring lawsuits seeking damages for, or cessation of, human rights violations.
f. Arbitrary or Unlawful Interference with Privacy, Family, Home, or Correspondence
The constitution prohibits such actions, and the government generally respected these prohibitions; however, in shantytowns (illegal settlements populated primarily by Haitian migrants), witnesses reported immigration officers’ habitual warrantless entry of homes without probable cause. Many Haitians claimed that immigration officers targeted their dwellings once their undocumented status was discovered, demanding multiple bribes.
While the law usually requires a court order for entry into or search of a private residence, a police inspector or more senior police official may authorize a search without a court order where probable cause to suspect a weapons violation or drug possession exists.
Section 2. Respect for Civil Liberties, Including:
b. Freedom of Peaceful Assembly and Association
The constitution provides for the freedoms of peaceful assembly and association, and the government generally respected these rights.
c. Freedom of Religion
See the Department of State’s International Religious Freedom Report at www.state.gov/religiousfreedomreport/.
d. Freedom of Movement, Internally Displaced Persons, Protection of Refugees, and Stateless Persons
The constitution provides for freedom of internal movement, foreign travel, emigration, and repatriation, and the government generally respected these rights. The government generally cooperated with UNHCR and other humanitarian organizations in assisting refugees and asylum seekers.
Abuse of Migrants, Refugees, and Stateless Persons: Migrants accused police and immigration officers of excessive force and warrantless searches, as well as frequent solicitations of bribes by immigration officials (see sections 1.d., 1.f.). Widespread bias against migrants, particularly those of Haitian descent, was reported.
PROTECTION OF REFUGEES
Refoulement: The government had an agreement with the government of Cuba to expedite removal of Cuban detainees. The announced intent of the agreement was to reduce the amount of time Cuban migrants spent in detention; however, concerns persisted that it also allowed for information sharing that heightened the risk of oppression of detainees and their families.
Access to Asylum: The law does not provide protection for asylum seekers, and the government has not established a system for providing protection to refugees. Access to asylum in the country is informal, with no normative legal framework under which the legal protections and practical safeguards could be implemented. The lack of refugee legislation or a formal policy complicated UNHCR’s work to identify and assist asylum seekers and refugees.
Throughout the year the government worked to develop formal asylum procedures to enhance the processing of asylum seekers and refugees. According to the government, trained individuals screened applicants for asylum and referred them to the Department of Immigration and the Ministry of Foreign Affairs for further review. Government procedure requires that the ministry forward approved applications to the cabinet for a final decision on granting or denying asylum.
Authorities did not systematically involve UNHCR in asylum proceedings, but they sought UNHCR’s advice on specific cases during the year and granted UNHCR greatly improved access to interview detained asylum seekers awaiting deportation.
The government did not effectively implement laws and policies to provide certain habitual residents the opportunity to gain nationality in a timely manner and on a nondiscriminatory basis. Children born in the country to non-Bahamian parents, to an unwed Bahamian father and a non-Bahamian mother, or outside the country to a Bahamian mother and a non-Bahamian father do not acquire citizenship at birth.
Under the constitution, Bahamian-born persons of foreign heritage must apply for citizenship during a 12-month window following their 18th birthday, sometimes waiting many years for a government response. The narrow window for application, difficult document requirements, and long waiting times left multiple generations, primarily Haitians due to their preponderance among the irregular migration population, without a confirmed nationality. During the year the government implemented a new policy allowing individuals who missed the 12-month window to gain legal permanent resident status with the right to work.
There were no reliable estimates of the number of persons without a confirmed nationality; one NGO estimated there were 30,000 to 40,000. The government asserted a number of “stateless” individuals had a legitimate claim to Haitian citizenship but refused to pursue it due to fear of deportation or loss of future claim to Bahamian citizenship. Such persons often faced waiting periods of several years for the government to decide on their nationality applications and, as a result, lacked proper documentation to secure employment, housing, and other public services.
Individuals born in the country to non-Bahamian parents were eligible to apply for “Belonger” status that entitled them to work and have access to public high school-level education and a fee-for-service health-care insurance program. Belonger permits were readily available. Authorities allowed individuals born in the country to non-Bahamian parents to pay the tuition rate for Bahamian students when enrolled in college and while waiting for their request for citizenship to be processed. The lack of a passport prohibits students from accessing higher education outside the country. In 2017 the government repealed its policy of barring children without legal status from government schools. Community activists alleged some schools continued to discriminate, claiming to be full so as not to admit children of Haitian descent.
In August media reported that a Bahamian child born to a Bahamian-born mother of Haitian descent was unable to obtain a passport to travel out of the country for medical treatment. Because the child’s mother was not a naturalized Bahamian citizen at the time of her birth, and her mother was not married at the time to her Bahamian father, the child was not granted Bahamian citizenship at birth. The government subsequently issued the child a Certificate of Identity that permitted her travel, listing her nationality as Haitian, despite being two generations removed from birth in Haiti.
Section 3. Freedom to Participate in the Political Process
The constitution and laws provide citizens the ability to choose their government in free and fair periodic elections held by secret ballot and based on universal and equal suffrage.
Elections and Political Participation
Recent Elections: Prime Minister Hubert Minnis took office after the Free National Movement (FNM) defeated the incumbent Progressive Liberal Party (PLP) in a general election in May 2017. The FNM won 35 of the 39 parliamentary seats, with 57 percent of the popular vote. The PLP won the remaining four seats. Election observers from the Organization of American States and foreign embassies found the elections to be generally free and fair.
Participation of Women and Minorities: No laws limit the participation of women or minorities in the political process, and they did participate.
Section 4. Corruption and Lack of Transparency in Government
The law provides criminal penalties for corruption by officials, and the government brought numerous charges against former and sitting officials for corrupt practices.
Corruption: The government acknowledged corruption in the BDOC was a long-standing problem. A study of the prison conducted by the University of The Bahamas in October 2017 revealed that 62 percent of inmates alleged they obtained drugs from staff at the prison. In October police arrested and charged two BDOC officers on possession of drugs in two separate incidents.
The campaign finance system is largely unregulated, with few safeguards against “quid pro quo” donations, creating a vulnerability to corruption. The procurement process was particularly susceptible to corruption, as it is opaque, contains no requirement to engage in open public tenders, and does not allow review of award decisions. The government nevertheless routinely issued open public tenders. During the year the government launched a process for all vendors and suppliers to register on an electronic platform to increase transparency and to improve the procurement process. The Minnis government pursued allegations of official corruption after taking office. As of November, cases continued regarding two former ministers and a former senator charged with corruption in 2017.
Financial Disclosure: The Public Disclosure Act requires senior public officials, including senators and members of parliament, to declare their assets, income, and liabilities on an annual basis. The government publishes a summary of the individual declarations. There is no independent verification of the submitted data.
Section 5. Governmental Attitude Regarding International and Nongovernmental Investigation of Alleged Abuses of Human Rights

A number of international and domestic human rights organizations operated without government restriction, investigating and publishing their findings on human rights cases, and enjoyed a constructive relationship with the government.
Section 6. Discrimination, Societal Abuses, and Trafficking in Persons
Women

Rape and Domestic Violence: Rape of men or women is illegal, but the law does not protect against spousal rape unless the couple is separated or in the process of divorce, or there is a restraining order in place. The maximum penalty for an initial rape conviction is seven years. The maximum sentence for subsequent rape convictions is life imprisonment; however, the usual maximum was 14 years' imprisonment. The RBPF reported that from January to November there were 45 reported rapes, 12 attempted rapes, and 114 cases of unlawful sexual intercourse. The RBPF reported Abaco had the highest number of reported cases of sexual violence. In September a woman alleged a jet ski operator raped her in Nassau. Although she identified the accused in a lineup, he was released on bail because he was a minor. There were no further developments in her case in the courts, a common occurrence in rape and domestic violence cases.
Violence against women continued to be a serious, widespread problem.
The law recognizes domestic violence as a crime separate from assault and battery, and the government generally enforced the law, although women’s rights groups cited some reluctance on the part of law enforcement authorities to intervene in domestic disputes. The Bahamas Crisis Center provided a counselor referral service and operated a toll-free hotline. The authorities, in partnership with a private organization, operated a safe house.
Sexual Harassment: The law prohibits criminal “quid pro quo” sexual harassment and authorizes penalties of up to B$5,000 and a maximum of two years’ imprisonment. There were no official reports of workplace sexual harassment during the year.
Coercion in Population Control: There were no reports of coerced abortion or involuntary sterilization.
Discrimination: The law does not prohibit discrimination based on gender. Women with foreign-born spouses do not have the same right as men to transmit citizenship to their spouses or children (see section 2.d., Stateless Persons).
Women were generally free of economic discrimination, and the law provides for equal pay for equal work. The law also provides for the same legal status and rights for women as for men; however, women reported it was more difficult for them to qualify for credit and to own a business.
Children

Birth Registration: Children born in the country to married parents, one of whom is Bahamian, acquire citizenship at birth. In the case of unwed parents, the child takes the citizenship of the mother. All children born in the country may apply for citizenship upon reaching their 18th birthday. There is universal birth registration, and all births must be registered within 21 days of delivery.
In January the case of Jean Rony Jean-Charles, who asserted he was born in the country to Haitian parents and thus was unlawfully repatriated to Haiti, went before the Supreme Court. In September 2017 Jean-Charles was unable to provide officials with identification proving his lawful presence in the country. Immigration officials subsequently deported Jean-Charles to Haiti although he was never issued a deportation or a detention order and had never traveled outside The Bahamas. The Supreme Court judge ruled that Jean-Charles was unlawfully expelled from The Bahamas and ordered the government to immediately issue a travel document for his return at the government’s expense. The ruling also granted him legal status no later than 60 days after his return. The judge noted that Jean-Charles was deprived of his personal liberty, unlawfully arrested and detained, and falsely imprisoned. The judge also ordered the government to pay Jean-Charles damages. In October the Court of Appeal, the highest court in the country, overturned the Supreme Court’s ruling following an appeal by the government.
Child Abuse: The law provides severe penalties for child abuse and requires all persons having contact with a child they believe has been physically or sexually abused to report their suspicions to police; nonetheless, child abuse and neglect remained serious problems.
The penalties for rape of a minor are the same as those for rape of an adult. While a victim’s consent is insufficient defense against allegations of statutory rape, it is sufficient defense if the accused had “reasonable cause” to believe the victim was older than age 16, provided the accused was younger than age 18.
The Ministry of Social Services provided services to abused and neglected children through a public-private center for children, the public hospital family-violence program, and The Bahamas Crisis Center.
Early and Forced Marriage: The legal minimum age for marriage is 18, although minors may marry at 15 with parental permission.
Sexual Exploitation of Children: The minimum age for consensual sex is 16. The law considers any association or exposure of a child to prostitution or a prostitution house as cruelty, neglect, or mistreatment of a child. Additionally, the offense of having sex with a minor carries a penalty of life imprisonment. Child pornography is against the law. A person who produces it is liable to life imprisonment; dissemination or possession of it calls for a penalty of 20 years’ imprisonment.
Institutionalized Children: A child as young as age 10 may be charged as an adult or a juvenile before a criminal court. First-time juvenile offenders charged with nonviolent or lesser offenses faced detention and custodial sentences at the Simpson Penn School for Boys, Willie Mae Pratt Center for Girls, or the BDOC facility.
International Child Abductions: The country is a party to the 1980 Hague Convention on the Civil Aspects of International Child Abduction. See the Department of State’s Annual Report on International Parental Child Abduction at //travel.state.gov/content/travel/en/International-Parental-Child-Abduction/for-providers/legal-reports-and-data.html.
Anti-Semitism

The local Jewish community numbered approximately 300 persons. There were no reports of anti-Semitic acts.
Trafficking in Persons
See the Department of State’s Trafficking in Persons Report at www.state.gov/j/tip/rls/tiprpt/.
Persons with Disabilities
The law prohibits discrimination against persons with disabilities, including their access to education, employment, health services, information, communications, public buildings, transportation, the judicial system, and other state services. The government did not enforce these provisions effectively. The law affords equal access for students, but only as resources permit, with this decision made by individual schools. On less-populated islands, children with learning disabilities often sat disengaged in the back of classrooms because resources were not available.
A mix of government and private residential and nonresidential institutions provided education, training, counseling, and job placement services for adults and children with disabilities. Children with disabilities attended school through secondary education at a significantly lower rate than other children, and they attended school with nondisabled peers or in segregated schools, depending on local resources.
On September 18, the Court of Appeal upheld the wrongful dismissal claim of a woman who was fired from her job as a restaurant manager at the Atlantis Paradise Island Resort because she suffered a “serious nerve injury” that left her unable to carry out her duties. The court ruled that the Atlantis Resort did not make reasonable efforts to accommodate the worker in another position. The judge noted the Employment Act fails to set out how companies should accommodate workers with disabilities.
National/Racial/Ethnic Minorities

According to unofficial estimates, between 30,000 and 60,000 residents were Haitians or persons of Haitian descent, making them the largest ethnic minority. Many persons of Haitian origin lived in shantytowns with limited sewage and garbage services, law enforcement, or other infrastructure. Authorities generally granted Haitian children access to education and social services, but interethnic tensions and inequities persisted.
Members of the Haitian community complained of discrimination in the job market, specifically that identity and work-permit documents were controlled by employers seeking advantage by threat of deportation.
The government announced a comprehensive plan to dismantle the country’s shantytowns. Plans were halted by a Supreme Court injunction in August pending judicial review of the lawfulness of the plans to seize and demolish Haitian residences.
Acts of Violence, Discrimination, and Other Abuses Based on Sexual Orientation and Gender Identity
The law does not provide antidiscrimination protections to lesbian, gay, bisexual, transgender, and intersex (LGBTI) individuals on the basis of their sexual orientation, gender identity or expression, or sex characteristics. Consensual same-sex sexual activity between adults is legal. The law defines the age of consent for same-sex individuals as 18, compared with 16 for heterosexual individuals. NGOs reported LGBTI individuals faced social stigma and discrimination.
HIV and AIDS Social Stigma
The law prohibits discrimination in employment based on HIV/AIDS status. Children with HIV/AIDS also faced discrimination, and authorities often did not tell teachers that a child was HIV positive due to fear of verbal abuse from both educators and peers. The government maintained a home for orphaned children with HIV/AIDS.
Section 7. Worker Rights
a. Freedom of Association and the Right to Collective Bargaining
The law provides for the right of workers to form and join independent unions, participate in collective bargaining, and conduct legal strikes. The law prohibits antiunion discrimination. By law, employers may be compelled to reinstate workers illegally fired for union activity. Members of the police force, defense force, fire brigade, and prison guards may not organize or join unions, although police used professional associations to advocate on their behalf in pay disputes. Unions can exist without a majority vote from workers, but to be recognized by the government and act as an “agency shop,” a union must represent 50 percent plus one of the affected workers.
There was no information on the adequacy of enforcement resources. Fines varied widely by case and were not sufficient to deter violations. Administrative and judicial procedures were subject to lengthy delays and appeals. The government did not provide updated statistics during the year. By law, labor disputes must first be filed with the Ministry of Labor and National Insurance. If not resolved, they are transferred to an industrial tribunal, which determines penalties (fines) and remedies, up to a maximum of 26 weeks of an employee’s pay. The tribunal’s decision is final and may be appealed in court only on a strict question of law. Authorities reported a case backlog of up to three years at the tribunal.
The government generally respected freedom of association and the right to collective bargaining, and most employers in the private sector did as well. Penalties were sufficient to deter violations.
b. Prohibition of Forced or Compulsory Labor
The law prohibits all forms of forced or compulsory labor. The government did not always effectively enforce applicable law, due to lack of capacity. The government received five reports of human trafficking, including six sex trafficking victims, one sex and labor victim, and one labor victim. Local nongovernmental organizations noted that exploited workers often did not report their circumstances to government officials due to fear of deportation and lack of education about available resources. Penalties for forced labor range from three to 10 years’ imprisonment and were sufficiently stringent to deter violations.
Undocumented migrants were vulnerable to forced labor, especially in domestic servitude and in the agriculture sector, and particularly in the outlying Family Islands. There were reports that noncitizen laborers, often of Haitian origin, were vulnerable to compulsory labor and suffered abuses at the hands of their employers, who were responsible for endorsing their work permits on an annual basis. Specifically, local sources indicated that employers required noncitizen employees to ‘work off’ the work permit fees, which ranged from B$750 to B$1,500 for unskilled and semiskilled workers. The risk of losing the permit and the ability to work legally within the country was reportedly used as leverage for exploitation and potential abuse.
Also see the Department of State’s Trafficking in Persons Report at www.state.gov/j/tip/rls/tiprpt/.
c. Prohibition of Child Labor and Minimum Age for Employment
The law prohibits the employment of children under age 14 for industrial work or work during school hours and prohibits the worst forms of child labor. Children younger than 16 may not work at night. Children between ages 14 and 18 may work outside of school hours under the following conditions: on a school day, for not more than three hours; in a school week, for not more than 24 hours; on a nonschool day, for not more than eight hours; in a nonschool week, for not more than 40 hours. The law prohibits persons younger than age 18 from engaging in dangerous work, including construction, mining, and road building. There was no legal minimum age for employment in other sectors. Occupational health and safety restrictions apply to all younger workers.
The government made efforts to enforce the law, with labor inspectors proactively sent to stores and businesses on a regular basis, but resource constraints limited their effectiveness. The Ministry of Labor and National Insurance reported no severe violations of child labor laws, although inspectors reported several instances of children working in small merchant businesses or excess hours in grocery stores. The penalty for violations of child labor law is a fine between B$1,000 and B$1,500, which was sufficient to deter violations.
d. Discrimination with Respect to Employment and Occupation
The law prohibits discrimination in employment based on race, color, national origin, creed, sex, marital status, political opinion, age, HIV status, or disability, but not based on language, sexual orientation or gender identity, religion, or social status. The government did not effectively enforce the law, and while the law allows victims to sue for damages, many citizens were unable to avail themselves of this remedy due to poor availability of legal representation and the ability of wealthy defendants to drag out the process in courts.
e. Acceptable Conditions of Work
In 2015 the Ministry of Labor and National Insurance raised the minimum wage from B$4.00 to B$5.25 per hour, well above the established poverty line of B$4,247 per annum.
The law provides for a 40-hour workweek, a 24-hour rest period, and time-and-a-half payment for hours worked beyond the standard workweek. The law stipulates paid annual holidays and prohibits compulsory overtime. The law does not place a cap on overtime. The government set health and safety standards appropriate to the industries. According to the Ministry of Labor and National Insurance, the law protects all workers, including migrant workers, in areas including wages, working hours, working conditions, and occupational health and safety standards. Workers do not have the right to refuse to work under hazardous conditions, and legal standards do not cover undocumented and informal economy workers.
The ministry is responsible for enforcing labor laws, including the minimum wage, and fielded a team of inspectors that conducted onsite visits to enforce occupational health and safety standards and investigate employee concerns and complaints, although inspections occurred infrequently. The ministry generally announced inspection visits in advance, and employers generally cooperated with inspectors to implement safety standards. It was uncertain whether these inspections were effective in enforcing health and safety standards. The government did not levy fines for noncompliance but occasionally forced a work stoppage. Such penalties were not sufficiently stringent to deter violations. Working conditions varied, and mold was a problem in schools and government facilities. | <urn:uuid:0a650d37-fbc9-4cf3-8f9d-42851532ffc3> | CC-MAIN-2022-33 | https://www.state.gov/report/custom/da7d4d248e/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00094.warc.gz | en | 0.959601 | 7,746 | 2.546875 | 3 |
from Hayden White’s essay, “The Burden of History,” of 1966:
Such a conception of historical inquiry [in which the historian and scientist organize facts through tentative metaphoric approximations] and representation would open up the possibility of using contemporary scientific and artistic insights in history without leading to radical relativism… It would permit the plunder of psychoanalysis, cybernetics, game theory, and the rest…. And it would permit historians to conceive of the possibility of using impressionistic, expressionistic, surrealistic, and (perhaps) even actionist modes of representations for dramatizing the significance of data…. [p. 47]
‘historiography’ is the philosophical study of how history is written, constructed, told, assembled, in short: it’s the study of the often unconscious assumptions historians make in producing their histories. it’s important to note that historians don’t exactly ‘make history’. people ‘make’ history, but it’s not only people who make it, in fact; though humans do have considerable impact on history making. it is possible, or is it? to write history that reflects only on human activity. but any such history would be woefully incomplete because it would give no account of very important historical events that have made humans human. For example, the now deceased anthopologist, Paul Sheppard, wrote a book entitled: The Others: how animals made us human. Some of those ‘animals’, would include Neanderthals, our direct ancestors, who it turns out were making art long before humans were, and with whom homo sapiens interbred. One of Sheppard’s examples is while it’s true to a degree that human’s domesticated the dog, he shows that dogs also helped to domesticate humans because they co-evolved together, co-determining each others behaviors. It doesn’t take much analysis to recognized that purely environmental factors have had considerable impact on human history; climate change is doing that as i write, and geologists have named a new period of history to describe this feedback loop: the anthropocene dates to the beginning of the industrial revolution, when humans began to pump CO2 in the atmosphere in enormous quantities. the examples of human-animal-forest collaborations are too many to list: nomadism was dependent on animal migrations and seasonal changes that effected agriculture. We only have to look at the last hurricane season to see just how much impact the environment can have on human history on texas, florida or the caribbean.
but even ignoring the impact of the histories of the non-human world, of non-human ‘actors’, we must consider that what is typically thought of ‘history’, itself has a history and therefore must be considered not something ‘natural’ like air or water, but like everything human, a human invention that over time has taken many different forms. history as it’s typically taught and thought of today, is largely an invention of the 19th century. It’s true that many historians cite the early precedents of classical greece, Herodotus and Thucidedes, or for biograhical histories, Plutarch’s Lives. And there are of course many other examples. But they lack many of the criteria by which ‘modern’ history has been written. Specifically, they lack the criteria of ‘modern’ science, rules for determining what is or isn’t a ‘fact’, what constitutes a ‘document’, generally, what constitutes ‘evidence’. ‘modern’ history also requires a particular form of writing style, of what might be called a style/voice of ‘objectivity’. It requires the production of ‘proof’ through the making of ‘arguments’. It requires the ‘art of persuasion’, which ‘art’ in classical times was called, rhetoric. It requires ‘logic’ and combining logic with various forms of evidence.
To step back, historically, a bit, history as we think of it today, was made possible by the mechanization of language and images with the printing press. this made illustrated pamphlets and books possible for the first time. it made record keeping reproducible and able to be disseminated. It made libraries and archives possible on a widely available scale. Sure, there were libraries full of handwritten or hand-printed books before 1462. And such books are now part of ‘history’s ‘archive’. But such books were every expensive, and their availability very limited. And in the West, there were written only in Latin. So they were only available to a very small class of aristocratic scholars and priests, to in fact, scholar-priests; because to be educated in the early universities, was to be educated in what today would be considered a very narrow range of subjects designated by the categories of the Trivium [grammar, logic, and rhetoric] and Quadrivium [astronomy, arithmetic, .geometry, and music], and a fluent knowledge of the ONLY language in which was generally written, Latin. [Classical Greek came much later, after the Medici founded the Platonic Academy for the teaching of Greek and Arabic during the Renaissance so after the Arabic scholars fled to Italy to avoid being murdered, for the translation of long lost works of the Greek and Arab scholars]. But there was another catch for being able to study at Cambridge or Oxford or University of Paris or Heidelberg or Padua; a scholar was required to become a christian theologian. Thus the scottish philosopher and historian, David Hume, was barred from teaching in the university because he was an self-proclaimed atheist.
Another important ‘fact’ in this history of history relates to my earlier claim, that ‘modern’ history would not be possible without a particular form of written style, that included argument, logic, and evidence, of a voice of ‘objectivity’. This did not exist as we today understand ‘objectivity’. The ‘style of objectivity’ arose only in the 16th and 17th centuries with a small group of ‘scientific’ writers, then called ‘natural philosophers’. [the term ‘scientist’ wasn’t coined until 1840 by the polymath William Whewell in England.] What we today think of the ‘essay’ was invented first by 3 writers: Galileo, Descartes, and Hume. Others contributed to the ‘genre’, like Montesque and Pascal and later, Newton and Leibniz and, particularly important to the history of history, Giambattista Vico. these scholars, as stylists, crafted the first time, the ‘essay’ as a form of presentation of scholarly knowledge, that developed into the style of academic writing in general, and the scholarly ‘treatise’. As importantly, these authors were the first scholars to write in their vernacular languages, suddenly opening up their work to a more ‘popular’ readership and breaking the hold on ‘knowledge’ by the aristocratic, educated, elite. [Galileo even gave highly popular lectures in Italian about his scientific investigations. Later, in France, the first proponents of the Enlightenment, the Philosophes, Voltare, D’Lambert and others, would write and publish, in French, the first, highly illustrated, Encyclopedia to make ‘all’ knowledge available to everyone.] This is not by a long shot a complete history of the ‘essay’, but it hits some main points.
Okay… I need to mention two more seminal authors in order to restate my account of history and historiography so that i can clarify for B what i said insufficiently previously. The foregoing brief cultural history of history is meant only to roughly demonstrate my claim that there is in fact, a history of history that means, because history in fact has a history, it’s not some kind of ‘natural’ thing like air or water. I haven’t described the other can of worms, ‘oral history’… like that of the native hawaiians, who ‘sing’ their histories in the form of chants, that are so specific and detailed that they are able to guide voyagers in small canoes well enough that they can make the 12000 mile trip between hawaii and new zealand…
There are two works about history and historiography without which, Hayden’s work, and my orientation to them, would be impossible, probably. At least not in the terms that have become so contentious since White wrote some of his highly influential essays beginning in the mid-1960s. The first is a series of lectures Hegel gave at the University of Berlin in 1822, 1828, and 1830, since complied in a book called, The Philosophy of History. This is the book that has had profound influence specifically on how art history has been written, since his lectures recounted his philosophy though recounting a ‘history of art’ since Homeric times in ancient Greece, through the Roman Period, the Middle Ages, the Renaissance, and the Baroque. This history was based on Hegel’s more general philosophical, systematic account of human development of consciousness in general, in his Phenomenology of the Spirit. In this work, Hegel developed a ‘historical’ philosophy based on what he called ‘dialectics’. Dialectics is complicated, more complicated than the popular accounts of that art history subsequently absorbed. But i will use that over-simplified version here simply so i can get to B’s question… Hegel was a christian, so he thought that ‘god’ was the ‘spirit’ that drove historical development in time. Since ‘god’ was both omniscient and infinite, his spirit could never be manifest within the paltry limits of human perception or even within the physical constraints of the ‘phenomenologically’ determined world human’s ‘experience’. He was reacting to Kant’s equally influential treatises, the 3 Critiques: The Critique of Reason, the Critique of Morals, and the Critique of Judgment; which collectively argued that the ‘world’ was irreparably dived into two pieces: the ‘noumenal’ [the world as it ‘actually’ is], and the ‘phenomenal’ [the world as it appears [to humans]. According to Kant, the noumenal world can never be known/perceived, ‘in and of itself’, because it is always filtered through the structures of the human mind and perception. the human mind/perceptual apparatus, projected itself on the world, and constructed it in terms of it’s innate biological and moral and judgmental systems/structures. This is why Hegel titled his philosophical treatise, the Phenomenology of the Spirit. We can never know anything about ‘god’ directly; we can only know how he partially, appears, phenomenologically, manifests his spirit in the world.
The way god does that for Hegel, is through an evolutionary process through which human consciousness grows over time ‘progressively’ more enlightened. To attempt to cut to the chase here, this happens according to him, through the process of ‘dialectic’, which moves through 3 stages he called: affirmation, negation, and synthesis. [again, this is oversimplified] the spirit of history, god, first manifests in one form affirmatively – during the homeric, geometric period of early greek art; but the greeks eventually come to consciousness that that expression of human form is inadequate, doesn’t account sufficiently for what humans are, so the reject it, negate it, and develop a new aesthetic style, high attic greek style; but that under the romans comes to be seen as equally inadequate, so the hellenistic style develops that synthesizes aspects of the homeric with aspects of the attic period, in a ‘supercession’ of both previous stylistic forms. He goes on to ‘demonstrate’ this same triadic, evolutionary process during the middle ages, the renaissance, and Baroque. This version of art history becomes entrenched as what has been taught since hegel as the ‘early’, ‘middle’, and ‘late’ periods. to give one example, early renaissance [Giotto’s Assisi Chapel], to middle renaissance [Michaelangelo’s Sistine Chapel], to late renaissance [Pontormo’s The Deposition from the Cross]. ETC…ETC…ETC… Affirmation-Negation-Synthesis…
Most of the major art history surveys, like Jansen’s, etc. present art history as an evolutionary, progressive development of early, middle, late periods. Which carries the very unfortunate result that has the spirit of history progresses through time, human consciousness becomes more and more enlightened; with the unfortunate corollary that the humans of the renaissance are more enlightened and therefore ‘higher’, ‘better’, humans than the humans of the middle ages and the greco-roman period. Not to mention how far superior the Greeks were than the Egyptians and the sub-sarahan Africans. ETC ETC ETC. the history of modern art essentially follows the same hegelian dialectical ‘logic’ – Manet to Picasso to Malevich…. you can see how wobbly this gets and quickly. but the general scheme is maintained – naturalism, to quasi-realism, to complete abstraction = modernism. minimalism to conceptualism to materialist formalism = postmodernism… pretty wobbly too, but these art historical narratives are very common. [i fall into this same trap in some of my brief cultural history diagrams below… hegel goes as deep as freud’s concepts of the ego-id-superego… ]
the second book of importance, written in opposition to Hegel in part, as well as against Kant, as well as against what he saw as an ‘unhealthy’ 19th century obsession with capital “H” history, was Nietzsche’s early essay, sometimes translated as, ‘The Use and Abuse of History’, but more recently better translated as ‘On the Advantage and Disadvantage of History for Life; written and delivered at the University of Basel shortly after obtaining a professorship there at the ripe age of 24. This essay has become a very important work for the ‘poststructural’, ‘postmodern’ phase of philosophy, cultural theory, art history, and the like, since the late 60s in France, and because everything is so delayed in the US, there, since the 1980s.
Nietzsche argument is of course complex, and i will not attempt to do it justice here. I will only quote it’s opening paragraph to give a pale flavor of its brilliance.
“Moreover I hate everything which merely instructs me without increasing or directly quickening my activity.” These are Goethe’s words with which, as with a boldly expressed certerum censeo [I am of the opinion], we may begin our consideration of the worth and worthlessness of history. Our aim will be to show why instruction which fails to quicken activity, why knowledge which enfeebles activity, why history as a costly intellectual excess and luxury must, in the spirit of Goethe’s words, be seriously hated; for we still lack what is most necessary, and superfluous excess is the enemy of the necessary. Certainly we need history. But our need for history is quite different from that of the spoiled idler in the garden of knowledge, even if he in his refinement looks down on our rude and graceless requirements and needs. That is, we require history for life and action, not for the smug avoiding of life and action, or even to whitewash a selfish life and cowardly, bad acts. Only so far as history serves life will we serve it: but there is a degree of doing history and an estimation of it which brings with it a withering and degenerating of life: a phenomenon which is now as necessary as it may be painful to bring to consciousness through some remarkable symptoms of our age.
i cite this articular passage in order to give some context for the white citation with which this post begins. and i mean of course, only ‘some’ context. white suggests that some types of science, and some types of art, are perhaps the best models for history that serves life by quickening its activities. unlike hegelian history which only enfeebles it. as does that type of art history which only accounts for those artists who have conquered the art market.
it’s my view that paul demarinis accomplishes White’s type of Nietzschean history in exactly its articulation of science, art, music, sound, performance, and technologies. his work is as humorous as it is erudite, as ironic as it is romantic, as comic as it is tragic, as ‘pop’ as it is ‘high’ culture. it’s as self-critical as it’s arrogant. i’ll remind readers here of all these false dichotomies with one example:
paul discovered through very sophisticated research that he could play a hologram of a vinyl record or recreated edison wax cylinders using a directed laser beam instead of a diamond needle. that’s hilarious, as well as profoundly challenging to our ‘hegelian’ belief in the ideology of scientific and technological progress. his work forces us, once we engage it on it’s own terms, which are ‘our’ own terms, in historical terms, to face both the, ‘what might have been’, as well as, ‘what might be’. and well, also and perhaps most importantly, what should be called the deep history of our own ‘present’. while i’m deeply critical of hegel, one can only respect to a degree his brilliance no matter how wrong he may have been: and one of his philosophy of history adages was, to paraphrase: the depths of the past are contained in the present. paul’s work is definitely and demonstrably, non-heglian, by political commitment. it shows us a way to think about ‘history’ in a non-progressive way. in a non-linear way. it breaks open our ‘present’ to reveal the depths of history. and in so doing, it invigorates life rather than enfeebling it. some of his work is as challenging as hegel himself was; but some of his work is entirely ‘superficial’, as nietzsche suggested ‘life’ should be: by which he meant, directly active, performative, present, and, profound. like shooting a laser beam into a goldfish bowl, with practically speaking, zero possibility of hitting it, to the tune of polka.
So that is hopefully clarification #1… in historiographical terms.
Clarification #2: Art history is not unlike the quip: history is written by the conquerors…
that is: artists who fits the hegelian dialectical pattern get into major museums and make a lot of money… those who don’t, don’t. modernism linked to capitalism, to market forces, once the two forms of early patronage, the church then the wealthy early mercantilists like the medici, lost power, when a middle class developed and became less religious and interested in worldly mundane everyday life, as was first the case in Holland. When the middle class had enough money and the patronage of the aristocracy and the church no longer support artists, artists were like everyone else, thrown into the market place. thus artists like rubens who ran essentially a painting factory staffed with assistants who specialized in painting fur or skin or drapery… ETC ETC ETC…
art history is written by hegelian art historians who become linked to the art market: galleries or repute, art magazine reviews, major museums, through the intercession of curators. that is obviously oversimplified, but not by that much. art history as produced by most academics tends to reinforce the hegelian/market/1% dialectic…
the 8th edition… blockbuster art history
It became the holy grail for any blockbuster curator: a cultural event that grips the public imagination. As Engels reported to Marx: “Everyone up here is an art lover just now and the talk is all of the pictures at the exhibition.”
Blockbuster, a highly explosive word not usually associated with art, has now entered the lexicon as a term applied to art exhibitions. By 1996 so-called blockbuster exhibitions–big, popular, moneymaking showcases that delivered a powerful impact–had become important sources of direct and indirect revenue, visibility, and prestige for museums worldwide. | <urn:uuid:a8eaecb1-7c20-4d5b-bde8-2d091fbbb951> | CC-MAIN-2022-33 | https://pearodox.blog/2018/03/page/2/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00296.warc.gz | en | 0.960315 | 4,477 | 2.859375 | 3 |
“The site of the Colossus has never been determined with any accuracy, nor has the pose been described for us by reliable eyewitnesses,” writes Lawrence Durrell. “Argument over possible sites and poses is likely to go on until the next earthquake, in which presumably the whole island of Rhodes will sink into the sea and leave behind it legends as tenuous as those which make up the myth of Atlantis.”
In the meantime, we can divide our questions about the Colossus of Rhodes into three categories. There are those — albeit far too few of them — that are answered with reasonable definiteness by the ancient texts that have come down to us, such as the dates when its construction began and ended and the impetus behind it. There are those questions we can never realistically hope to answer, such as why the Rhodians waited ten full years to begin work on a statue meant to commemorate their great victory over Demetrius, using their booty from that very same battle. And then there are those questions we cannot answer using direct historical evidence, but for which speculation, combined with our more general knowledge of ancient Greek culture, technology, and building practices, can provide some probable solutions. Having largely exhausted what we know with certainty about the Colossus of Rhodes in our first chapter, it’s to these sorts of speculations that we’ll have to turn now as we attempt to fill in the rest of the where, what, and how of it to whatever extent we can.
The first question to address, then, is just where the Colossus would have been built, given that the harbor-bestriding monster of popular legend is a no-go. Three separate sites have garnered a measure of scholarly support over the years: the tip of one of the moles in Mandraki Harbor (as opposed to both of them at the same time); the center of Rhodes Town, where a Temple of Helios is known to have stood, near where the Palace of the Grand Master — a legacy of the Knights Hospitaller — stands today; and the acropolis above the city. Robert B. Kebric, the most recent historian to write about the Colossus at any length, made what strikes me as a very persuasive argument for the last in 2019, just eighteen months before his death. I am very much in his debt for what follows.
We learned in the last chapter that the city of Rhodes was possessed of a very impressive acropolis on the top of the hill that is known today as Monte Smith. Yet I neglected to mention one of the site’s most interesting aspects: Monte Smith is one of the surprisingly few places on earth, and the only place known to the ancient Greeks, where one can watch the sun rise out of the sea in the east, wheel across the sky over the course of the day, and sink back into the sea to the west, always with a completely unobstructed view. This quality would have invested the site with sacred importance under any circumstances, but that importance must have been doubled or trebled on Rhodes, the self-styled island of the sun, which enjoyed Helios himself as its patron deity. What better place to build a monument to the god of the sun than the one spot where people could watch his daily journey unimpeded by any obstructions? (In more recent times, a similar question has been asked and answered by communications companies; the top of Monte Smith is dotted today with the television and cellular towers that link Rhodes to the rest of the world.)
There is also a more practical consideration to be added to these religious and cultural advantages. The material of Monte Smith is solid limestone, an ideal natural foundation for a large monument such as the Colossus. The stability it provided would have been especially important on an island such as Rhodes, which is rocked regularly by earthquakes, not to mention the occasional violent thunderstorm with its dozens of lightning strikes. Yes, the Colossus would eventually be felled by an unusually powerful earthquake, but the fact that it managed to withstand 56 years worth of smaller ones is itself a testament to the talent and forethought of its builders.
The other locations proposed for the Colossus of Rhodes have none of the geological advantages of Monte Smith, being made of softer, more porous earth. Preparing ground such as this to support a structure of the size and weight of the Colossus would have been a massive engineering project in itself, one quite probably well beyond the Rhodians’ capabilities; the human-made moles of Mandraki Harbor would, needless to say, be an especially hopeless proposition in this regard. The ground of the acropolis, by contrast, would have required no real preparation at all beyond a careful leveling. The ancient Rhodians were surely adept enough at structural engineering to understand the site’s advantages in this respect — and, if by chance they weren’t, they enjoyed close links with the Egypt of Ptolemy I, who may very well have sent advisors to assist with the project. The Egyptians’ ability to build on limestone to a monumental scale was as legendary in the third century BC as it is in our own; the Pyramids of Giza, those most famous of all ancient monuments, had themselves been built circa 2500 BC on and of limestone, the only foundation that could have supported their weight.
So, the acropolis had much to recommend it as a site for the Colossus. Yet it must be acknowledged that, despite much looking, archaeologists have never uncovered any indubitable traces of it there.
This lack of physical evidence for the Colossus on the acropolis is frustrating, but it is nowhere close to sufficient to prove a negative about the statue’s existence there long ago. If the fallen Colossus was cut up for its raw materials at some point after Pliny wrote about it — something historians believe did occur, for reasons which we’ll explore in the next chapter — the pedestal on which it stood would presumably have been removed as well. And because the natural limestone of Monte Smith was virtually all the foundation the Colossus required, no trace of the Wonder of the World that had once stood here might be left even a few decades after it was removed, much less centuries.
Against the unfortunate but eminently explainable lack of physical evidence of the Colossus on Monte Smith must be balanced the site’s elevation, its unique relationship to the sun, and the practical advantages of its limestone shelf. Robert B. Kebric had no doubts at all in 2019 about its most likely location: “By themselves, each of these three extraordinary factors might be offered as the major [emphasis original] reason for why the Rhodians would decide to build the Colossus of Rhodes where they did; together, they are a remarkable triad of physical realities centered only at this one place on the island.” So, we’ll continue to follow his lead in picturing the figure of Helios towering over the city and harbors of Rhodes from a perch atop the acropolis.
The what and how of the Colossus are, alas, even less easily addressed than the where of it. The one thing that everyone can agree on is that its surface was made of bronze, the perfect shiny stuff for a statue of a sun god on an island famed for its sunshine. Beyond that, the difficulties are legion, not least because Philo, the one ancient writer we have who deigns to describe how the Colossus was made, writes things that cannot all be simultaneously true. We’ll return to these problems momentarily, but first I should briefly describe how bronze-casting, the process most often assumed to have been employed in the creation of the Colossus, actually works. What follows is a description of the “direct lost-wax” process that has been in use for thousands of years. I want to emphasize that it is a grossly simplified description of what could be an extremely complex process in reality; anyone tempted to believe that ancient peoples were in any way our intellectual inferiors should study the art of bronze-casting to be quickly disabused of the notion. Still, the simplified version should suffice for our purposes.
Let’s imagine that we want to create an object in bronze of human size or smaller. First, we sculpt a full-sized version of the figure we wish to cast out of clay. We then coat our clay model with a thin layer of beeswax, wait for it to harden, and cover the entirety with more clay. After making some strategically placed holes in the bottom of the piece, we heat it in a kiln to cause the wax to melt and run out, leaving behind a hollow space — a negative image of the original model — between the two layers of clay. Into this space we pour molten bronze. After it has been allowed to cool and harden, we can chip away the outer layer of clay to reveal the bronze sculpture beneath, to which we can make any adjustments, corrections, and enhancements that prove to be necessary using a hammer and chisel, and then polish its surface to the desired patina — a process known as “chasing.”
But what if we wish to make a bronze sculpture twenty times the height of a human being? Clearly some other techniques would have to come into play, but there exists no scholarly consensus as to what they might have been. Indeed, only one scholar in the course of the past century has even tried to provide a thoroughgoing explanation of how the Colossus was made. On December 3, 1953, one Herbert Maryon, a 79-year-old British authority on sculpting and metalworking in ancient and modern times, presented before the Society of Antiquaries in London the most comprehensive answer ever to the problem of the Colossus of Rhodes. His brave proposal, which appeared in The Journal of Hellenic Studies in 1956, rested upon his own immense reserves of practical knowledge, combined with a close reading of Philo.
Maryon imagined that the artist in charge of the project — a man named Chares if Strabo and Pliny are to be believed — started by sculpting a scale model of Helios in plaster, of human size or slightly larger. He then made a “chassis,” a sort of miniature scaffolding of wood, to surround it.
This he forms of straight, squared bars, perfectly truly planed and with every angle a right angle. Considerable care is devoted to this structure to ensure that all its outer surface is true and square. The chassis is then placed round the plaster figure, special care being taken to ensure that its sides are placed exactly vertically. Then it is fastened permanently to the model. It will be remembered that in the case of the Colossus the enlarged figure would have many times the dimensions of the original model, so any mistakes in the setting of the small chassis might have serious results.
Next a vastly larger wooden scaffolding was constructed, stretching all the way up to the planned height of the finished statue. This scaffold was an exact proportional duplicate of its smaller companion; one inch (2.5 centimeters) on the small scaffold might correspond to two feet (61 centimeters) on its full-size equivalent. The model inside the small scaffold would serve to keep Chares oriented as he worked on the full-size Colossus; a system of plumb lines and set-squares attached to both scaffolds would ensure that he stayed precisely on track.
Within the large scaffold was first raised a framework of iron to serve as the skeleton of the god. To this Chares and his assistants fit stucco panels, shaping them to match the same parts of the model statue. These were then removed and taken down to a foundry, where other craftsmen beat out their duplicates in bronze plates. Finally, said plates were hauled back up to be permanently affixed to the statue’s iron skeleton. The Colossus was not cast at all, in other words, but hammered laboriously into shape, panel by panel. As it rose higher, a ramped mound of earth was also raised around it to allow the workers easy access. And so the bronzed god slowly climbed toward the heavens, at a pace of perhaps ten feet (three meters) per year. Maryon provides a vivid image of the scene in his article.
Suppose that we are standing at the top of an immense mound of earth, up which we have climbed by a spiral pathway. There is an extensive view all round: the little town of Rhodes lying at our feet with its harbours, and the rocky coastline stretching away in the distance. Across the sea, some dozen miles [19 kilometers] away, is the coast of Asia Minor. Close before us rises a rectangular wooden scaffolding, like the framework of a building. Within it we see a screen of bronze which, as a close look tells us, is shaped like part of a man. The lower part of his body and his legs, we find, may be seen in the great pit which, framed by the scaffolding, penetrates the centre of the mound. A platform spans the gap, and we can look down into a great cavern within the body with sides formed by the bronze plates. Tall columns of stone rise from the bottom of the cavern, and from them radiate numberless iron struts which support the bronze walls. Nearby on the level top of the mound the original model for the Colossus stands on a bench within its chassis.
Work is in progress. [A stucco] panel some 4 feet [1.2 meters] long [rises] a few feet higher than the finished portion of the figure. The master sculptor is at work, employing a large riffle with which he works over the surface, modifying it to his liking. When he is satisfied with the work he will give an order, and his assistants will remove the panel from its position on the figure and bear it away to where, at some distance from the foot of the mound, the group of workshops is situated. Nearby [is] a foundry in which a large bronze plate is removed from its mould. When the plate had cooled enough it would be carried into the principal workshop. The craftsmen would take [the] sheet of bronze, and with hammers beat it to shape. Direct hammering would be continued until the master craftsman decided that the work was now far enough advanced for chasing to begin.
When the chasing was completed the panel and the stucco model of which it was a copy would be stood up side by side successively in a number of different lightings, both indoor and out, and the correct modelling of the form checked before it was finally passed by the sculptor. The work of fitting the plate to its neighbors on either side and into its position on the Colossus followed. Finally, it would be riveted in position.
Within a year of the publication of Maryon’s article, an historian and linguist named D.E.L. Haynes — the same whom we met in Chapter 1, when we learned of his argument for the traditional dating of Philo’s text about the Colossus — wrote a riposte which claimed Maryon’s thesis to be invalidated in its entirety by a mistake in the translation of Philo which Maryon had employed. Where Maryon had read that each successive layer of the Colossus was “filled up,” noted Haynes correctly, Philo had in fact written that each layer was “cast on top.” “Whatever we do, let us at least try to understand what Philo actually said,” concluded Haynes a little snippily. Ever since, most historians have more or less dismissed Maryon’s lengthy labor of love as being the unfortunate product of a botched translation — garbage in, garbage out.
Yet to do so strikes me as doing Maryon a great disservice. For taking Philo completely literally, as Haynes seemingly wishes us to do, is highly problematic, as Maryon well recognizes. Translation issues aside, Maryon does consider whether each stage of the statue might have been cast in place, and concludes that to do so would be “extremely inconvenient and impractical,” entailing as it would building a whole new foundry at each level; such an installation, capable of heating metal to 2000 degrees Fahrenheit (1100 degrees Celsius) and then pouring it safely, was at the cutting edge of ancient technology, and tended to be neither particularly cheap nor particularly mobile. Further, Philo’s vague statement in the corrected translation that each layer was “cast on top” of the one before it doesn’t make much sense even if we grant him his peripatetic foundries. Exactly how were these successive layers joined together to make a complete god? In his rejection of Maryon’s thesis, Haynes states blithely that “since the molten metal which was to form the new part would presumably have come into direct contact with the existing part, fusion would probably have resulted.” But, as Maryon doubtless could have told him, bronze-casting simply doesn’t work that way; pouring molten metal on top of its solid counterpart doesn’t result in an instantaneous “fusion” of the two.
Consider as well Philo’s claim that 12.5 tons of bronze went into the Colossus. This figure looks impressive at first glance, but when one does the math, as Maryon did, one finds that it equates to a likely thickness across the statue’s enormous surface area of somewhere between .06 and .1 inches (1.5 and 2.5 millimeters) — about the thickness of the average modern coin. The typical example of cast ancient bronze, by contrast, has a thickness of about one inch (2.5 centimeters). And it is equally impossible to take literally Philo’s claim that this 12.5 tons of bronze “might have exhausted the mines.” While it was and is a considerable quantity of the metal, to be sure, such a figure represents a tiny sliver of the entire annual bronze trade of the third century BC.
And then what to make of Philo’s description of the Colossus as hollow, such that it needed to be “held steady with stones that had been put inside?” (The same claim would later be echoed by Pliny in his description of the fallen statue; he writes of “vast caverns yawning in the interior.”) If its pieces were cast using the traditional method, the clay of the model of each piece would remain inside. And if said clay was somehow removed, a cast-bronze statue of the size and thinness of this one would never be able to support its own weight; it would crumple and fall to earth like a skyscraper made out of cardboard.
All of which is to say that Philo’s text is riddled with logical inconsistencies, as even our textual literalist D.E.L. Haynes is forced to acknowledge in the end. “Are we to reject [the rest of Philo’s description of the Colossus’s construction] simply because a single figure [i.e., the total weight of the bronze employed], mentioned by Philo once and not supported by any other evidence, cannot be reconciled with it?” he asks. “Since figures are notoriously liable to corruption, it seems more reasonable to reject the figure.” And so we have it straight from the horse’s mouth: Haynes too is picking and choosing which parts of Philo’s text to accept as true. His argument with Maryon is ultimately nothing more than a difference of opinion about which parts those should be, based on reasoning that is conjectural at best.
It seems to me that this whole debate is driven by an understandable but misguided desire to make Philo’s text into something it really isn’t. Historians like Haynes and to some extent even Maryon have wished to see him as a sort of ancient investigative journalist, dutifully reporting facts picked up from credible sources. It’s much more likely, however, that Philo was combining some measure of casual Rhodian scuttlebutt with a huge measure of conjecture of his own. Although he was an engineer himself, and thus filled with an engineer’s desire to get to the bottom of how things were done in a technological sense, he was by all indications no authority on bronze-casting or sculpting; none of the other texts of his that have come down to us touch on either of those subjects. His text on the Colossus probably constitutes little more than his best guess of how it was made — a guess grounded in a limited level of knowledge about the actual processes of bronze-casting. In these respects, he was ironically little different from D.E.L. Haynes, the man who would later wish to cite him as an arbiter of historical truth.
I would therefore argue that Herbert Maryon’s thesis deserves rehabilitation. For, unlike those of Haynes or Philo, his speculations were grounded in well over half a century of personal, intimate study of ancient bronze-casting, including much practical experimentation with materials and techniques. Of all those who have studied the Colossus of Rhodes, he may have been the best equipped to separate the possible from the impossible, the reasonable from the unreasonable, in the fraught debate over the methods of its construction. Certainly none of his detractors have ever offered up what he did: a sober, realistic, complete, and believable account of how a statue as large as the Colossus could have been made during the third century BC. In the absence of alternative explanations, I’m willing to give him the benefit of the doubt and accept his thesis of a statue that consisted of an iron frame surfaced with panels of beaten rather than cast bronze, with stones and whatever else was handy thrown into the hollow space inside the structure in order to weigh it down. Such a thesis isn’t as sexy as that of a cast-bronze statue — this in itself may account for much of the resistance to it — but it fits the known facts of the case much better, without requiring any technologies which the ancient Rhodians are not known to have possessed.
All it did require was care, money, and a willingness to dedicate skilled and unskilled laborers to the project for a very long period of time. “Modern people cannot easily appreciate what mass labour can achieve,” write the historians John and Elizabeth Romer, very accurately, in their own study of the Colossus. Yet the archaeological remnants of the ancient world provide even the casual modern tourist with plenty of monumental evidence of just how much mass labor really was able to achieve using fairly rudimentary building techniques, from the Pyramids of Giza to the Colosseum in Rome. Had the Colossus of Rhodes enjoyed a different fate, it too might have joined that bucket list.
But let us move on to a less controversial subject: that of the completed statue’s height. As we saw in Chapter 1, our three principal ancient sources are in relative agreement here: Strabo and Pliny claim the Colossus was 106 feet (32 meters) tall, Philo that it was 120 feet (36.5 meters) in height. Most modern scholars are content to average these figures, arriving at a nice round number of about 110 feet (33.5 meters).
But what does such a number really represent in terms of subjective space? A frame of reference is always useful in translating a measurement into an imaginative vision. Fortunately, we have a convenient one to hand, in the form of a modern monument that was built as a conscious evocation of the Colossus of Rhodes: the 1886-vintage Statue of Liberty in New York Harbor, whose pedestal goes so far as to sport a sonnet proclaiming it to be “The New Colossus.” Ignoring its 154-foot [47-meter] pedestal and measuring only the statue itself, we arrive at a height of 151 feet (46 meters). But if we measure Lady Liberty only from her toes to the top of her head, setting aside the torch she holds aloft, we find that she is just 111 feet (34 meters) tall — i.e., roughly the same height as her inspiration. This, then, is the scale on which our imaginations need to work. I hope the comparison serves to reinforce what an amazing achievement the Colossus was in its day.
The comparison also proves useful in another way. In his remarks on the fallen Colossus, Pliny tells us that its thumb was so thick around that “few men can clasp [it] in their arms,” while “its fingers are larger than most statues.” The Statue of Liberty’s thumbs are not individually articulated due to its pose — one hand is holding a tablet, the other holding the torch aloft — but the middle joint of its index finger is 3.5 feet (1.1 meters) in circumference, which is indeed a stretch for any but the longest-armed human huggers. The same finger is just over 8 feet (2.5 meters) long — i.e., “larger than most [ancient] statues,” which were generally built to human scale or just slightly bigger. Thus we can feel fairly confident that Pliny’s descriptions match his numbers.
Like the Statue of Liberty, the Colossus of Rhodes stood on a pedestal; this Philo explicitly states. The most likely scenario here is a three-tiered base, in order to create the best visual effect for spectators at ground level. Philo claims the pedestal to have been made of white marble, but this much-sought-after material would need to be imported at enormous expense, for Rhodes had no white-marble quarries of its own. Robert B. Kebric has suggested that only the third tier of the pedestal might have used white marble, the other two Rhodes’s own less expensive gray-blue marble, and this would definitely seem a reasonable compromise. Regardless, only the visible surface of the pedestal would have been made of marble. Its internal structure would have been limestone, not just for reasons of cost but because the heavier rock would have provided a much more stable base for the statue.
And how tall was the pedestal? Again, we have only informed speculation to go on, but such is not without value. Kebric has noted that the Colossus was surely intended to be viewed to excellent effect from the sea. In order to achieve this, its builders would want to place it so that its feet stood high enough above the surrounding buildings and landscape of the acropolis that the statue could be seen from head to toe by arriving mariners. Based upon this consideration, Kebric proposes a pedestal of about 50 feet (15 meters) in height at its third and tallest tier.
At this point, then, an admittedly highly speculative but possible or even probable version of the Colossus of Rhodes is starting to emerge for us. The 110-foot (33.5 meter) statue stands on its 50-foot (15-meter) pedestal on the acropolis of Rhodes, itself 270 feet (82 meters) above sea level. Thus the top of the god’s head is fully 430 feet (131 meters) above the waves of the city’s harbors. It must have been a truly awe-inspiring sight for any visitor to Rhodes — a sight absolutely unique in the ancient world, a sight to put even the likes of our modern Statue of Liberty to shame. (After all, the Colossus didn’t have skyscrapers and container ships to compete with.) Only one question remains: what did the Colossus actually look like?
Here even informed speculation can take us very little distance at all. We have no detailed written descriptions from ancient times of the Colossus’s appearance, and no known visual representations of it either. Still, one thing at least is certain: the sculptor would have had to balance aesthetics with structural integrity. An outstretched pose like that of the Statue of Liberty would probably not have been manageable. Benefiting from thousands of years of progress in metallurgy, the latter is made of iron and pure copper rather than iron and bronze, and employs a sophisticated internal frame that allows it to move and flex with the changing winds and temperatures; the statue’s torch, for example, can sway as much as five inches (13 centimeters) in high winds. Taking into consideration the need for stability as well as the beaten rather than cast bronze I echo Herbert Maryon in proposing to have been the principal material of its surface, the real Colossus of Rhodes, if we could go back in time and visit it in its heyday, might well appear disappointing when viewed close up by we who have been exposed to so many centuries worth of fanciful depictions. “To remain upright,” suggests historian Reynold Higgins in his study of the Colossus, “[the] statue would have to be very simple, approximately columnar in shape and in attitude not unlike an archaic Greek kouros figure.” It would, in other words, look more like the rigid, somewhat awkward Greek statuary from the era of Homer than it would the dynamic, graceful pieces that were being sculpted on a regular basis by the third century BC.
But would it necessarily have to? Once again, Herbert Maryon dared to propose something quite different. A sketch which appears in his 1956 article shows a Colossus that is shading its eyes with its upraised right hand while it trails a cloak from its left arm which cleverly conceals a vital additional support column. Later studies were quick to dismiss Maryon’s suggestion because it was based on a relief, discovered at Rhodes in 1932, which was believed at the time to be exactly the ancient visual depiction of the Colossus which historians had so long dreamed of; alas, it has since been reevaluated, and is now believed to depict merely an ordinary human athlete. But, just as one mistake in translation shouldn’t invalidate Maryon’s entire thoughtful proposal of how the Colossus might have been built, surely a mistake like this one doesn’t render worthless his demonstration that many structural sleights of hand are possible when an ingenious artist sets his mind to it.
It’s possible as well that Helios may have been riding in his chariot — a very common motif for this god — or even sitting on a throne; both would have been more inherently structurally sound than a free-standing figure. But whatever the details of his appearance, and however disappointingly crude he might even have looked when viewed close-up, the god surely looked spectacular when viewed from the sea, with his bronze skin shining in the famously brilliant Rhodian sunshine.
There is just one more aspect of the Colossus as a physical object that begs to be mentioned. In 2019, Robert B. Kebric also made the intriguing suggestion that it might have served a practical as well as an aesthetic and political purpose: that it might have functioned as a lighthouse. Kebric draws parallels between Rhodes and Alexandria, the latter being the one Greek city of the third century BC that arguably outshone even the former as a hub of commerce and culture. Alexandria’s own, probably slightly later lighthouse, built on a low islet in the harbor and crowned by its own statue of a god (precisely which one is uncertain), was such a breathtaking sight that it eventually joined the Colossus as an acknowledged Wonder of the World. Did the two serve a similar practical purpose in addition to serving as mascots and advertisements for their respective cities? Did the one perchance even inspire the other?
In one respect at least, Rhodes was actually better equipped to support a monumental lighthouse than was Alexandria: unlike the desert land of Egypt, the wooded island of Rhodes has plenty of timber, meaning that supplying enough firewood to keep its beacon burning wouldn’t be a major issue. Kebric proposes a number of ways that the Colossus might have been made to serve this additional purpose even if we reluctantly accept that a gleaming torch in the god’s upraised hand, Statue of Liberty-style, was probably not a part of its design. For example, a fire tower might have been built next to the Colossus, with a system of pulleys that would allow workers to kindle a fire in a pod at ground level, then raise it up to the top. A network of mirrors on the tower might then have reflected the light of the flames onto the statue’s bronze paneling, both lighting it up for the world to enjoy at night and amplifying the light itself for the benefit of ships. Perhaps other fires burned on the pedestal down below, with mirrors of their own to reflect their light up to the statue, whence out to sea. A perpetually shining god of the sun to mark the island of the sun — if nothing else, the image is a fitting one.
(A full listing of print and online sources used will follow the final article in this series.) | <urn:uuid:0fbce0d8-2f00-4230-a4b0-dbccde514a78> | CC-MAIN-2022-33 | https://analog-antiquarian.net/2021/11/12/chapter-3-raising-a-giant/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00695.warc.gz | en | 0.975097 | 6,809 | 2.921875 | 3 |
Table of Contents
1. Introduction 4
2. Method 5
2.1. Design: How and where it is made. 5
2.2. Original Design 5
2.3. Proposed design by the winner 7
2.4. Final reinforced design for production 10
3. Eco- Audit of the process: 11
3.1. Passive energy use during the parts life and the options at end of life. 12
4. Results and analysis: 13
5. Discussion 15
6. Conclusion 15
7. References 16
List of Figures
Figure 1: Load Considerations in the original bracket 5
Figure 2: Original Design of the bracket 6
Figure 3: Layout of the proposed bracket design 7
Figure 4 :3 D modelling of the proposed bracket 8
Figure 5: Final look of the proposed bracket 9
Figure 6: Finalized reinforced bracket design 9
Figure 7: Eco-Audit technology in Granta 12
Figure 8: Stress calculation in the finalized bracket 14
Due to the advancement in technology and invention of better methods of production of aerospace parts, there has been a huge demand for developing cost effective and power as well as fuel efficient parts. The performance of the aircraft engines is very much depended on the parts it is made up of. These parts contribute to the overall weight of the aircraft and these reduces the efficiency by increasing the consumption of fuel (Mattingly, 2002). At the same time, it is also very important to maintain the high quality in terms of robustness, stiffness, toughness and other such physical aspects of the parts. This has to be achieved through effective designing using multiple engineering tools to optimize the size and structure of these parts without compromising in its strength and durability (Rawal, Brantley, & Karabudak, 2013).
For designing and developing such parts, additive manufacturing is a very useful solution. It is also known as 3D printing and provides a unique as well as exclusive feature to develop parts of any shapes and test the various parameters related to the strength and durability of the parts through 3D modelling. It makes the practical testing of the design possible through accurate and precise dimensioning of the part in the design modules. It helps in developing components which would be lighter in weight and would also provide the required levels of strength and performance (Dehoff, Peter, Yamamoto, Chen, & Blue, 2013).
GE Corporation has been carrying production of the aerospace and other engineering components for over years and is a very trusted name across the globe. It has been striving to develop product s and designs that would help in minimizing the weight of the components and at the same time provide the required performance. It has been encouraging engineers across the world to develop designs and structures that would provide the above mentioned characteristics (GRABCAD.COM, 2014).
In one such attempt, it had organized a competition for all the GRABCAD designers to develop brackets for jet engines which would be very cost effective and of high quality. For this competition, the participants would use the original design provided by the company and modify it through additive manufacturing techniques and design for developing a better engine bracket than the current used design. There were many specification in terms of load bearing capacity, weight, dimensions, thermal load bearing capacity, capacity to absorb tension, material, static linear loads, yield strengths as well as the size and diameter of the bracket.
It is assumed that the development of the product is in the initial stage where the forging and shaping of the loading bracket is carried out. The 3 different methods used in the production of the 3 obtained designs of the load brackets have to be evaluated in terms of additive manufacturing processes. It has been then evaluated through the CES EduPack software. In this software, the data regarding the development of the product would be evaluated. In this evaluation, any other forces or external factors affecting the moment of inertia of the loading fraction has been neglected. The standard readings of the parameters are considered for analyzing the passive energy used by the company in the development of the product, that is, loading brackets.
The method used involves considering the current design specifications, evaluating the best design provided and developing an optimum design for the engineering brackets used in the jet engines (Kalpakjian, 2001). The design and the load as well as other requirements of the designs has been evaluated and determined in the answer developed (Chu, Graf, & Rosen, 2008).
Design: How and where it is made.
The design involves following the specific procedure of modelling, printing, testing, simulation, modifying and then finishing for final mass scale production (GE.COM, 2015).
The design of the original bracket developed by GE Corporation and used in the jet engines considers the static and torsional loads as shown in the figure below:
Figure 1: Load Considerations in the original bracket
The design of the engineering loading bracket which specified the above specifications is shown in the figure below:
Figure 2: Original Design of the bracket
Proposed design by the winner
The design that was proposed by M. Arie Kurniawan, the winner of the competition used Direct Metal Laser Sintering method of manufacturing (Dutta & Froes, 2015). The weight of the bracket was reduced from the original 2033 grams to 327 grams. In his design, it can be seen that he has used the principle of H-beam and developed a profile of the bracket on the basis of that.
The layout of the fraction that has been developed by M. Arie Kurniawan, involves development of the torsional and static loads that are going to be exerted on the bracket. The layout of his design is shown below:
Figure 3: Layout of the proposed bracket design
(GRABCAD.COM M. KURNIAWAN, 2015)
After developing the design and the layout, there was a 3D model developed by him, where he had used additive manufacturing tool and GRABCAD software fir displaying the model in 3 dimensional form and there
A 3D model of the design was developed by him which is shown in the figure below:
Figure 4 :3 D modelling of the proposed bracket
(GRABCAD.COM M. KURNIAWAN, 2015)
The final look of the design of the bracket that was developed by him is shown in the figure below:
Figure 5: Final look of the proposed bracket
(GRABCAD.COM M. KURNIAWAN, 2015)
Final reinforced design for production
The design proposed by the winner was then reinforced through simulation and modelling by the GRC engineers in its New York plant. They attached every bracket with an MTS servo testing machine which worked on hydraulics. It has been developed through the FDM manufacturing method which refers to Fused Deposition Modelling (Hambali, Smith, & Rennie, 2012). The weight of the final model was about 240 grams which was very less and ensured the compactness in the design and the performance of the bracket was also retained through high strength and durability of the component.
Figure 6: Finalized reinforced bracket design
(GRABCAD.COM M. KURNIAWAN, 2015)
After considering all the designs mentioned above, details of the technical and other specifications of the brackets developed in each stage is tabulated below:
Design Material Weight (g) Manufacturing method Cost (£)
Original (O) Titanium alloy Ti-6Al-4V 2,033 Milling 150
Proposed by the Competition winner (CW) Titanium alloy Ti-6Al-4V 327 DMLS 250
Finalized Fibre reinforced (FR) PLA, Basalt fibres, Titanium alloy Ti-6Al-4V
Epoxy PLA (182)
Epoxy (5) FDM, autoclave. 50
Eco- Audit of the process:
Eco-Audit of the production process refers to considering, evaluating and analyzing the effects of the manufacturing process on the various elements of the environment (Steger, 2000). “CES EduPack 2015” is a software which provides a complete analysis of the various processes and functions involved in the development of a product. It helps in providing evaluation of the product in terms of its cost, effectiveness of different methods of manufacturing, impact on the environment and evaluation of specific technical terms used in the development of the product. It involves maintaining of data bases regarding the material and the information regarding the various manufacturing and designing processes. The basic principle on which the CES EduPack software works in developing an eco-design of the products involves following of a specific flow of steps and measures which are shown in the following figure:
CES EduPack: Eco-design Tools
This design is followed for the data evaluation and analysis for the various processes which hare carried out in the software. At the same time, there is formulation of the Eco-Audit tool which requires setting up of the various parameters and developing the configurations regarding them.
The eco-auditing tool involves considering the following technology through user interface, Materials and Eco data, dashboards and reports as shown in the figure below (Amacher, Koskela, & Ollikainen, 2004). It provides an example of the Eco-Audit technology developed by Granta.
Figure 7: Eco-Audit technology in Granta
(GRANTDESIGN.COM ECOAUDIT, 2015)
CES EduPack evaluation of passive energy use during the parts life and the options at end of life.
Passive energy use refers to the energy consumed ddurign the production of the parts or components. In our case, for production of loading brackets for the jet engines, the energy that is utilized depends on the design and manufacturing of the part (Collopy & Eames, 2001). There are three designs available. We have carried an ecological audit considering the energy used by the three designs for different processes involved.
This would include considering the usage of passive energy during the development of the loading bracket. This has been carried out using the CES EduPack software and the result of the analysis is shown below. The parameters regarding the evaluation data have been collected though the energy used in the manufacturing, transporting, matrial collection, usage, disposal and End of Life potential for the brackets developed by the three designs considered above. It is shown as follows (Radford & Rennick, 2000):
Results and analysis:
The proposed model by M. Arie Kurniawan, result into development of the below design. The design that was developed by him was very effective in reducing the weight. It reduced the weight by about 85%. Axial loads of the range of 8000 to 9500 pounds was exerted on the bracket. It was observed in their testing, that there was only one bracket which failed in these extreme conditions, whereas all the other brackets met the requirement. There was torsional load of about 5000 pounds per inch (Johnson, 2001).
Eco-Auditing of the manufacturing process involves considering the stress faced by the loading bracket during its operation (Cerdan, Gazulla, Raugei, Martinez, & Fullana-i-Palmer, 2009). It is discussed and shown in the following figure:
Figure 8: Stress calculation in the finalized bracket
(Dehoff, Peter, Yamamoto, Chen, & Blue, 2013).
The model that has been developed by the M. Arie Kurniawan, has reduced the weight of the original design of the loading fraction by a considerable amount. However, the material used is also the same and the process suggested by him for manufacturing of the fraction is DMLS which is suitable for metals and it is the best practice for such kind of production (Roy, Caird, & Potter, 2007).
However, using of FDM by the GRC engineers has helped increase in reduction of the weight. The method of FDM would be appropriate as it would be cost effective and at the same time, it would be very accurate for the production of loading brackets (Kyprianidis, 2010).
From the above analysis and eco-audit carried of the three designs, it can be seen that reducing the weightage of the loading fraction would help in reduction of fuel consumption in jet planes, thereby, increasing the efficiency and cost effectiveness of the process.
Amacher, G. S., Koskela, E., & Ollikainen, M. (2004). Environmental quality competition and eco-labeling. . Journal of Environmental Economics and Management, 47(2),, 284-306.
Cerdan, C., Gazulla, C., Raugei, M., Martinez, E., & Fullana-i-Palmer, P. (2009). Proposal for new quantitative eco-design indicators: a first case study. . Journal of Cleaner Production, 17(18), , 1638-1643.
Chu, C., Graf, G., & Rosen, D. W. (2008). Design for additive manufacturing of cellular structures. Computer-Aided Design and Applications, 5(5), , 686-696.
Collopy, P. D., & Eames, D. J. (2001). Aerospace manufacturing cost prediction from a measure of part definition information (No. 2001-01-3004). . SAE Technical Paper.
Dehoff, R., Peter, W., Yamamoto, Y., Chen, W., & Blue, C. (2013). Case Study: Additive Manufacturing of Aerospace Brackets. ADVANCED MATERIALS & PROCESSES, 19-22.
Dutta, B., & Froes, F. H. (2015). The additive manufacturing (AM) of titanium alloys. . Titanium Powder Metallurgy: Science, Technology and Applications, , 447.
GE.COM. (2015). ADVANCED MANUFACTURING IS REINVENTING THE WAY WE WORK. Retrieved from http://www.ge.com: http://www.ge.com/stories/advanced-manufacturing
GRABCAD.COM. (2014). GE jet engine bracket challenge. Retrieved from https://grabcad.com: https://grabcad.com/challenges/ge-jet-engine-bracket-challenge
GRABCAD.COM M. KURNIAWAN. (2015). M. KURNIAWAN BRACKET DESIGN. Retrieved from https://grabcad.com: https://grabcad.com/library/m-kurniawan-ge-jet-engine-bracket-version-1-2-1
GRANTADESIGN.COM ECODESIGN. (2015). Granta’s Guide: Five Steps to Eco Design. Retrieved from http://www.grantadesign.com: http://www.grantadesign.com/eco/ecodesign.htm
GRANTDESIGN.COM ECOAUDIT. (2015). Granta’s Eco Audit Methodology. Retrieved from http://www.grantadesign.com: http://www.grantadesign.com/eco/audit.htm
Hambali, R. H., Smith, P., & Rennie, A. E. (2012). Determination of the effect of part orientation to the strength value on additive manufacturing FDM for end-use parts by physical testing and validation via three-dimensional finite element analysis. International Journal of Materials Engineering Innovation, 3(3-4), , 269-281.
Johnson, R. B. (2001). Jet Engine Metallurgy (No. 530038). . SAE Technical Paper.
Kalpakjian, S. (2001). Manufacturing engineering and technology. . Pearson Education India.
Kyprianidis, K. G. (2010). Multi-disciplinary conceptual design of future jet engine systems.
Mattingly, J. D. (2002). Aircraft engine design. . Aiaa.
Radford, D. W., & Rennick, T. S. (2000). Separating sources of manufacturing distortion in laminated composites. . Journal of Reinforced Plastics and Composites, 19(8), , 621-641.
Rawal, S., Brantley, J., & Karabudak, N. (2013). Additive manufacturing of Ti-6Al-4V alloy components for spacecraft applications. 6th International Conference on IEEE. (pp. 5-11). Recent Advances in Space Technologies (RAST), .
Roy, R., Caird, S., & Potter, S. (2007). People Centred Eco-design: Consumer adoption and use of low and zero carbon products and systems. . Governing technology for sustainability, , 41.
Steger, U. (2000). Environmental management systems: empirical evidence and further perspectives. European Management Journal, 18(1),, 23-37.
Higher Colleges of Technology
3D Printing in manufacturing
Table of Contents
1. Abstract 3
2. Introduction 4
3. Literature review 5
3.1. Research Questions 9
4. Methods 10
5. Findings 12
6. Discussion 14
7. Conclusion 15
8. References 17
9. Appendix Page(s) 19
There has been a tremendous increase in the demands of the people along with development of new technologies to fulfil them. The importance of developing new technological inventions pertaining to different fields has been realized by different industries across the globe. One of the most emerging technology pertaining to this advancement is 3D printer. It is one of the most innovative invention in the last decade. It has unlimited potential that can be explored through developing different machines and instruments from 3D printing technique. The importance of the 3D printing has been obtained and its future potential has been identified in the research.
With the current rate of technological advancement, no future possibilities are too hard to imagine. 3D printing is one such innovative technology that is additive in nature, in which objects get built up in a great number of extremely thin layers. 3D printers can be considered to be the technology that will bridge the gap between the physical world and cyberspace, and a manifestation of the second digital revolution, thus playing a foreseeable important role in our futures. (Barnatt, 2016).
3D Printing, also called additive manufacturing, includes the different processes that are used to synthesize three dimensional objects, by forming layers of material in succession under the control of a computer in order to form an object, that is, in simpler terms, three dimensional objects are made from a digital file. These objects may be of any possible geometry or shape. It is classified as a type of industrial robot. Some believe 3D printing to mark the start of a third industrial revolution. Using the capabilities of the Internet, it may soon become possible to send a product in the form of a blueprint to any possible place in the world. While originally,3D printing was used to denote the deposition of material onto a powder bed using inkjet printer heads, it now encompasses a wide range of techniques including sintering based processes and extrusion, all falling under the broad term of additive manufacturing. (3D printing). This research studies the applicability of 3D printing in manufacturing, both in terms of scope and commercial viability, and also other end user applications as well.
The research has been conducted by reviewing secondary literary sources, including journals and reports, to get an insight about how 3D printing can have an impact on the manufacturing process. The results after having conducted the research indicate that not only does 3D printing greatly impact the manufacturing process, especially in prototype manufacturing; it also has a wide variety to choose from to suit different purposes. The pivotal objective of the research is to study how this innovation in technology will bring about a revolution in the manufacturing process and shape its future for commercial as well as individual users.
While previous researches and reports have been published about how 3D printing will change the manufacturing technology, this report aims to cover up for the lack of a comprehensive report that users who are looking to use this technology for both large scale commercial uses as well as individual uses like making small spare parts. This research is based on qualitative method as carried out by previous literature and researches. Generally, it deals with an examination of finding out answers to a question, thoroughly employs a predefined set of processes to answer the question, gathers evidence, designs findings which were not verified before and creates the findings which are valid ahead of the instant limitations of the study.
3D Printing: Additive manufacturing v/s Traditional Manufacturing
3D printing is an additive manufacturing process that adds extremely thin successive layers, instead on removing them from a whole. (Tarang, 2015)The European Social Fund and Deloitte evaluate the advantages and disadvantages of 3D printing when compared to traditional manufacturing. In traditional manufacturing techniques, called subtractive manufacturing, like cutting and milling, materials are removed from a preformed block, creating a lot of waste since the scrap generally cannot be reused. However, 3D printing eliminates this process of waste creation since the material is only placed and added successively in the location where it is needed, leaving the rest of the space free. (European Social Fund, 2013) Further, in a report by Deloitte, it is clear that apart from waste reduction, it also provides the benefit of reducing lead times, easily incorporating innovations, creating customized products or small batches economically, reducing the level of inventories and facilitating Just –In-Time manufacturing. However, it may be disadvantageous in the production of larger volumes, or produce bad quality products if low end printers are used. It is also not possible to product larger objects that traditional manufacturing can, at least not at the cost that traditional manufacturing does it at. (Deloitte, 2014)
Report by AT Kearney summarizes the benefits of 3D printing as allowing mass customization, introducing new capabilities at low fixed and overhead costs, reduced speed and lead times, due to shorter cycles of production, process and design, simplification of the supply chain, by keeping production close to the demand point and ensuring reduced inventory, and reduction in wastes. (AT Kearney, 2013)
3D printing and the future of manufacturing
In a report published by Leading Edge Forum and one published by Campbell, T, Williams,C, Ivanova,O and Garett,B, the impact that 3D printing will have on the future of manufacturing is discussed. It says that 3D printing will bring about a change in the calculus of manufacturing by the means of optimizing for batches of one. It can be used to manufacture customized, improved and even near impossible products right at the point of consumption or usage. Apart from this, it can be used to create a wide range of products with flexibility, which implies the possibility of a serious change in supply chains and production models. Based on the materials that are used, although mass consumption is tougher, products can be up to 65% lighter but equally strong as in traditional manufacturing. 3D printing will initially focus on new markets instead of established ones, and competition will only drive the market forward. It has applications in a variety of industries such as Defense, Aerospace, Automotive, healthcare, with customization being called the new normal and sophistication and simplicity being its strengths. (Leading Edge Forum, 2012) Further, Additive manufacturing could also help leverage other breakthroughs in science and manufacturing, when used as a disruptive technology. Apart from the manufacturing process, it also has tremendous scope to create advances in environmental protection due to reduction in wastes created in the process of manufacturing. It will create new industries and careers and thus a possible shift in the global economy. (Campbell, 2011)
Singh,O, Ahmed,S, Abhilash,M and Dimitrov,D, Schreve,K and Beer,N write that 3D printing implies manufacturing processes involving low labor costs, and high precision. It can be used for a variety of purposes, right from manufacturing car accessories, printing out tooth fillings, to repairing components of space shuttles, and thus finds use in a number of varied fields. (Singh, 2016) (Dimitrov, Schreve, & Beer, 1995)
Applications of 3 D printer
3D Printing In the Aerospace Industry
Currently, rocket engine injector of NASA prepared from a 3D printer agreed a major hot fire test. The rocket engine injector produced ten times more driving force as compare to other injector from 3D printing previously in the test.
3D Printed Organs
3D printing has been employed for printing organs from own cells of a patient. This indicates that the patients do not have to wait for a long time interval for the donors in the future. Previously, hospitals implanted the organs and structures into patients designed by hands. 3D printing has considerably enhanced this process.
3D Printing In the Automotive Industry
Engineers at General Motors used 3D printing to conserve time needed in prototyping the components for the vehicle when the company began to create the 2014 Chevrolet Malibu.
If one were to own and use a 3D printer at home, Jackson,B suggests a number of uses. They can be used to create adaptors and other repair parts for almost any hardware, and make them work, instead of simply discarding them if a part stops functioning. They can also be used for designing and creating products that are unique or hard to find generally. It can also be used to easily create parts for the household computer or desktop, that could be otherwise expensive to procure. (Jackson, 2015)
Orsini,L agrees that even the most cynical users will have to embrace the fact that 3D printers have completely changed production technologies, however using them effectively required time and effort to be invested, to get the models and required sizes right, perfectly. Although this may seem like a step backward, requiring intensive efforts by the user, one should not make the mistake of dismissing this quickly because it has occurred several times in the past that organizations have rejected emerging technologies too soon, and then gone on to regret those decisions. (Orsini, 014)
Choosing the right 3D printer
3D Systems Corporation writes in a report about the different types of models that can be chosen from. Concept models are used to improve the earlier design decisions that have an impact on engineering activity. It helps reduce expensive changes later on in the process and reduce the length of the development cycle. It gives a number of options that can be evaluated and chosen from. Functional prototypes can be used once designs begin to shape up, to verify the elements of design so as to make sure the product will meet its functionality requirements, such as fit, form and performance. Digital manufacturing prototypes can be used more for end or spare part manufacturing. (3D Systems Corporation, 2013)
In the summary from the sources studied in the literature review, the differences between 3D manufacturing, which is an additive process and traditional manufacturing is evident. 3D manufacturing also offers the benefit of producing much lighter but equally strong products, as well as those that are tougher to find, although it is tough to product large goods or carry out mass production. It will also revolutionize the manufacturing process, since goods can be produced very close to the source of consumption and thus entire supply chains and processes may undergo changes. For the household consumer too, 3D printing can offer several uses. Further, different variations in these printers are available according to possible end use applications, and thus the appropriate model or variety needs to be chosen by customers according to their needs.
• How 3D printing can transform manufacturing?
• What is the future of 3D printing in manufacturing industry?
• What is the cost of 3D printer?
• What are the most important factors to you when choosing a 3D printer?
• What kinds of items would design and create with 3D printer if you owned one?
Type of Research
Quantitative methodology is selected for this research, which is based on the collection of data from sample population. As suggested by the name, quantitative data is considered in this approach. Different accepted standards of statistics are included in this approach for its validity like the number of respondents for the establishment of a result which is significant statistically. To ascertain the benefits of using 3D printers in the manufacturing companies, quantitative data is used as survey questionnaires were distributed to get primary data for the research project (charmaz, 2014). The questionnaire that was developed was filled by 30 people who were the managers and members of a company.
In order to get the data of manufacturing companies, questionnaire will be distributed among the managers and members of the company. These questionnaires with the combination of our deep research on the benefits of 3d printing would lead us to examine the future of 3d printing (charmaz, 2014).
Data Analysis and Interpretation
Data analysis is the process of systematic implementation of logical or statistical techniques to illustrate, describe, condense, and assess the data. A comprehensive summary of the outcomes of the research will be included in the data analysis as well as the main conclusions attained through the research will be included.
The study will give reflection on the benefits of using 3D printers in the manufacturing industry. The data for this purpose will be processed by statistical inference which is a reliable tool for analyzing primary data. It will provide assistance in analyzing the situation as well as will be helpful in reaching conclusion regarding the issue. Primary data analysis would also be helpful in finding the answers of the research questions.
Reliability and Validity of the sources
Reliability of the source refers to the level of consistency that the research possesses and the results that are developed through it are stable. It is a very important tools in identifying the applicability of the research and the developed data even after a long term that has been developed for the same purpose. Reliability can be achieved through developing data and its research on the basis of the opinions that are collected from the sample population. It has been obtained without any prior briefing or information and the responses are not manipulated and are totally genuine.
Validity, on the other hand refers to the level up to which the research is applicable and can be applied in the real world. It provides the practicality of the research method and its solution. It is one of the most important tool that is similar to reliability, but it provides the practical aspect of the research and the data that is obtained. The analysis of the data has helped in identifying that the solution that has been developed would be valid for practical application. The feasibility of the research has to be evaluated through systematic review and analysis of the system.
In Q6, it describes into the different applications that are possible with the usage of 3D printing, both commercially and for individual customers so the reader may understand fully the spectrum of end user applications that this technology offers.
Among the responses obtained from the questionnaire, the data analysis for Q6 is shown in the table and chart given below:
Options Number of responses Percentage of responses
Aerospace 12 40
Medical 3 10
Fashion Industry 1 3.33
Service Bureaus 2 6.64
Electronic accessories 3 10
The role of the 3D printer in shaping the future of the machines in the future has been obtained through the Q10 in the questionnaire. The findings of this theory are as follows:
Options Number of responses Percentage of responses
Extremely important 12 40
Not so important 3 10
Not at all important 0 0
Thus, it can be stated that about 70% of the respondents found that the role of 3D printing in shaping the machines of the future is important or extremely important as shown in the figure given above.
It can be obtained from the responses that are obtained from the sample population that has been taken into consideration that 3D printing has the ability to transform the manufacturing industry and at the same time, bring large number of innovations in the field of automotive, aerospace, Medical, Fashion Industry and Electronic Industries.it is found to have positive effect on the development of industries across the globe. There are many advantages of using 3D printing that has been discussed and there is an unlimited potential available to be explored in the technique of 3D printing. These advantages are realized while developing a prototype or a design of a complicated design which is very tiring and time consuming. At the same time, the required level of accuracy is not obtained if the production is carried through the traditional manufacturing processes.
It can also be obtained that the future of the 3D pointing in the manufacturing industry depends on the level of innovation that is incorporated in the process. It can be achieved through identifying the potential areas where there is a need to imbibe 3Dprinting in different processes. There are many advantages that are provided through the use of 3D printing. Accurate design and durable structure are the two most significant characteristics that are provided through 3D printing. It has been obtained from the questionnaire that was asked to the respondents.
While developing its effect on the manufacturing industry through the survey, it was obtained that most of the respondents did not agree that it had a negative effect as they knew the potential that 3D printing has in it. However, it has been identified from the responses obtained from the participants that there is a need to explore the potential of 3D printing and make significant and innovative designs with the help of it.
3D Printing has no doubt revolutionized the way manufacturing processes are carried out, and this is not just a current trend, but can spell significant changes for the future of both commercial and household applications as well. While this is more labor intensive, and may seem as a backward step at first, organizations and individuals need to evaluate its applications and suitability for different applications before they decide to use it in their processes.
They can help produce much lighter, and equally strong products, as well as parts that are tough to find and are complex to create. However, because of involving more effort, they need to be further developed since they currently cannot support mass production. On the other hand, their overhead and fixed costs are quite low, and this can be effectively tapped into to nullify other possible disadvantages.
1. 3D printing. (n.d.). What is 3D printing? Retrieved from 3D printing.com: http://3dprinting.com/what-is-3d-printing/
2. 3D Systems Corporation. (2013). 3D Printer buyers guide. Retrieved from Agile-manufacturing.com: http://www.agile-manufacturing.com/files/news/3d-printer-buyers-guide.pdf
3. AT Kearney. (2013). 3D Printing: A Manufacturing Revolution.
4. Barnatt, C. (2016). 3D Printing. Retrieved from Explaining the future: http://explainingthefuture.com/3dprinting.html
5. Campbell, T. W. (2011). Could 3D Printing Change the World? Startegic Foresight Report.
6. Deloitte. (2014). Disruptive manufacturing: The effects of 3D printing.
7. Dimitrov, D., Schreve, K., & Beer, N. (1995). Advances in three dimensional printing – state of the art and future perspectives. Rapid Prototyping Journal.
8. European Social Fund. (2013). Domain Group 3D Printing Workshop Notes.
9. Jackson, B. (2015). Household uses for your desktop 3D printer. Retrieved from 3d printing systems: http://3dprintingsystems.com/household-uses-for-your-desktop-3d-printer-part-3/
10. Leading Edge Forum. (2012). 3D printing and the future of manufacturing.
11. Orsini, L. (014). Why You’ll Want A 3D Printer In Your Home. Retrieved from Readwrite: http://readwrite.com/2014/01/31/why-you-want-a-3d-printer-in-your-home
12. Singh, O. A. (2016). Modern 3D Printing Technologies: Future Trends and Developments. Recent Patents on Engineering.
13. Tarang, Y. (2015). 3D PRINTING ADDITIVE MANUFACTURING.
1. Can 3D printing transform manufacturing?
a. Strongly Agree
c. Not sure
e. Strongly Disagree
2. According to you, how is the future of 3D printing in manufacturing industry?
a. Extremely good
e. Extremely bad
3. What is the cost of 3D printer?
a. Extremely high
e. Extremely low
4. Among the following, what are the most important factor to you when choosing a 3D printer? (Choose one)
f. User Friendliness
5. Would you like to design and create items with 3D printer if you owned one?
a. Strongly Agree
c. Not sure
e. Strongly Disagree
6. Which of the following 3D printing applications is the most interesting?
c. Academic institutions
e. Fashion industry
f. Service Bureaus
g. Electronic accessories
7. Do you think that 3D printing will help in overcoming many designing problem in the future?
a. Strongly Agree
c. Not sure
e. Strongly Disagree
8. Are the current developments in the field of 3D printing satisfactory?
a. Strongly Agree
c. Not sure
e. Strongly Disagree
9. Do you think that adapting 3D printing on a large scale would have some negative effect?
a. Strongly Agree
c. Not sure
e. Strongly Disagree
10. According to you, what role would 3D printing play in shaping the future of machines in the future?
a. Extremely important
d. Not so important
e. Not at all important | <urn:uuid:78474eae-4d65-4cbc-b0b0-d78d5dd0b418> | CC-MAIN-2022-33 | https://www.projectfactory.info/project_category/mechanics/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571692.3/warc/CC-MAIN-20220812105810-20220812135810-00096.warc.gz | en | 0.935148 | 8,120 | 3.28125 | 3 |
If you are a college student pursuing a Computer Science major or degree, a good laptop is essential. Although most CS and IT students' priorities are coding, learning programming languages and concepts, submitting assignments on time, and doing projects, you might also want to take some extracurricular breathers such as occasional gaming. Finding the best laptop for computer science students seems easy at first.
There is no shortage of options in the computer market, which is exactly what makes it a little difficult: you have to find the one in your price range and tailored to your needs among hundreds of options. The difficult task is not finding a good laptop but finding an appropriate one (one that is neither overpowered nor underpowered).
In this guide, we’ve considered different factors to help you decide on a laptop that will suit your needs and requirements.
You might want to put together a checklist of your preferences before picking the best laptop for yourself. As an IT student, you don’t need a beast-like, overpowered laptop suited for machine learning and data science, although it may seem like you do.
In most cases, a Computer Science student's curriculum requires essential software for coding and debugging in Python, Java, C, and C++, using tools such as PyCharm, Visual Studio Code, MATLAB, Octave, R Studio, and NetBeans. The software will differ depending on the specialization, but this list is good enough to help us understand the recommended system requirements.
What to Consider When Selecting a Best Laptop for a Computer Science Student?
- CPU: The Intel Core i7 and i5 are the best bets if you are an Intel person. For those who prefer AMD processors, the equivalent choices are the Ryzen 7 and 5. Computer Science students require processors with adequate processing power to compile code and handle demanding projects. Ideally, you should try to get the fastest processor you can afford.
- Display: This is probably the most important feature for a programming student, sometimes even more so than processing power and storage. You'll be staring at the screen a lot, especially after the first year, because you'll end up sitting in front of it for hours trying to figure out what's wrong with your code. So you'll want to take care of your eyes and avoid straining them. In that case, a Full HD (1920 × 1080) resolution or higher is recommended. Also, if possible, opt for a matte display; matte displays help reduce eye strain and fatigue. And because you will be looking at the display a lot, why not get something easier on the eyes?
- RAM: 8GB is the bare minimum that you’ll require, given the fact that you will be keeping a lot of applications open and simultaneously running a lot of code. So you need enough RAM to be able to multitask. If you can afford more RAM, go for it by all means, but 8GB should be plenty in most cases.
- Graphics Card: A Computer Science student who uses the laptop strictly for school won't need the fastest graphics card, since most modern Intel and AMD hardware comes with integrated graphics that are more than good enough for anything you could possibly need for computer programming. However, if you like to game in your free time, we suggest a laptop with a dedicated graphics card. A dedicated graphics card could also come in handy if you decide to get into machine learning. But aside from that, integrated graphics do the job.
- Storage: If you can afford to get an SSD instead of an HDD, then do it; an SSD speeds up your workflow extensively. How? Searching through a bunch of files to reuse a piece of code will be several times swifter with an SSD than with an HDD.
At a Glance:
- What to Consider When Selecting a Best Laptop for a Computer Science Student?
- Our Recommendations For Best Laptops for Computer Science Students
- Best Value for Money: HP Pavilion 15
- HP Pavilion 15 CPU and GPU
- HP Pavilion 15 Design
- HP Pavilion 15 RAM and Storage
- Best Budget Laptop For CS Major: Lenovo IdeaPad 3
- Lenovo IdeaPad 3 CPU
- Lenovo IdeaPad 3 GPU
- Lenovo IdeaPad 3 RAM and Storage
- Lenovo IdeaPad 3 Display
- Lenovo IdeaPad 3 Design and Peripherals
- Lenovo IdeaPad 3 Ports
- Best 17-inch Display: Dell Inspiron 17
- Dell Inspiron 17 CPU
- Dell Inspiron 17 GPU
- Dell Inspiron 17 RAM
- Dell Inspiron 17 Storage
- Dell Inspiron 17 Display
- Dell Inspiron 17 Ports
- Dell Inspiron 17 Design
- Best For College Students: Acer Aspire 7
- Acer Aspire 7 CPU
- Acer Aspire 7 GPU
- Acer Aspire 7 RAM and Storage
- Acer Aspire 7 Display
- Acer Aspire 7 Design
- Best 2-in-1 Laptop: HP Spectre x360 15T
- HP Spectre x360 15T Design
- HP Spectre x360 15T Display
- HP Spectre x360 15T CPU, RAM, and Storage
- Budget School & Gaming Laptop: Asus TUF FX505DT
- Asus TUF FX505DT CPU
- Asus TUF FX505DT GPU
- Asus TUF FX505DT RAM and Storage
- Asus TUF FX505DT Display
- Asus TUF FX505DT Keyboard and Peripherals
- Asus TUF FX505DT Ports
- Frequently Asked Questions
Our Recommendations For Best Laptops for Computer Science Students
Best Value for Money: HP Pavilion 15 (Best Pick)
HP’s Pavilion 15 series is known to all, and HP has maintained a solid reputation for putting out power-packed machines over the years. The 2020 HP Pavilion is a serious performance-packed machine for under $1000. Without any doubt, the HP Pavilion is one of the best choices for programming students.
HP Pavilion 15 CPU and GPU
The specs are astounding, since this laptop is built for heavy tasks like running computer science software and gaming. The Intel Core i7-1165G7 processor and integrated Intel Iris graphics make this device a total beast.
HP Pavilion 15 Design
One thing to keep in mind about this laptop is that it has an entirely plastic body, with no metal casing anywhere. So if you're a regular user of MATLAB or Visual Studio, which are known to heat up most systems that lack assisted cooling, this is a point worth weighing carefully.
Although the HP Pavilion is not exactly portable because of its weight (3.86 pounds) and size (15.6 inches), this shortcoming can be traded off for all the other amazing specs.
HP Pavilion 15 RAM and Storage
With 512GB of SSD storage and 8GB of DDR4 RAM, this computer science laptop handles day-to-day tasks with ease. The battery life on this machine is impressive too; HP claims roughly 8 hours even under substantial use.
All in all, this is an affordable and powerful laptop that gets the job done smoothly for CS majors.
- Great for multitasking
- Good battery life
- Edgy looks
- Lack of portability
- Plastic body
- The 512GB of storage could feel limiting for complex coding projects
Best Budget Laptop For CS Major: Lenovo IdeaPad 3
If you are a computer science student on a budget, the Lenovo IdeaPad 3 is easily the best laptop in the retail space to purchase. Although you do need to make a few display-specific compromises, the processing unit is quite efficient where IDE applications such as Visual Studio Code, Code::Blocks, and other resource-intensive coding tools are concerned.
Lenovo IdeaPad 3 CPU
To start with, you get access to the power-efficient Intel Core i5-1035G1 processor, featuring four cores and programmer-friendly Virtualization support. Furthermore, this chipset is easily the best budget-friendly resource for MATLAB users, courtesy of the single-core clock speed of up to 3.9GHz.
Lenovo IdeaPad 3 GPU
As MATLAB, a handful of compilation tools, coding platforms, Octave, NetBeans, and other applications lean towards single-threaded performance, the existing i5 SoC seems like a pretty good selection. Considering this to be a cheap, entry-level laptop, the integrated UHD graphics seem like an expected inclusion.
Lenovo IdeaPad 3 RAM and Storage
The processing unit is backed by 12GB of onboard RAM, capable of handling data and files at 2600MHz. Our experts were pretty surprised to see 12GB of system memory on this sub-$600 budget notebook.
Then again, storage-specific arrangements are pretty bleak, with the IdeaPad housing a basic 256GB SSD unit. Despite its limited capacity, the boot drive can load several applications and the Windows 10 S Mode OS in a heartbeat. While Windows 10 S Mode is very restrictive, the good news is that you can switch out to Windows 10 Home for free. It’s important to note that there is no going back to S Mode once you do that.
Lenovo IdeaPad 3 Display
The 15.6-inch HD display isn't the brightest kid on the block but is bolstered by TruBrite technology. While the display resolution is pretty underwhelming at 1366 x 768 pixels, Lenovo makes up for this bottleneck with a touchscreen panel. Besides, the LED-backlit technology renders the IdeaPad 3 a power-efficient laptop.
Lenovo IdeaPad 3 Design and Peripherals
Productivity gets a significant boost from the full-sized keyboard, followed by the responsive trackpad, 720p webcam, and dual speakers with Dolby audio enhancement. Then again, the 180-degree hinge lets the screen fold back completely flat.
Lenovo IdeaPad 3 Ports
In terms of build, the computer weighs close to the 5-pound mark but doesn't compromise on the port arrangement, with USB 3.0, USB 2.0, and HDMI being the standard inclusions. For wireless connectivity, you get Wi-Fi 5 and Bluetooth support for establishing faster connections.
The battery life of up to 7 hours furthers the portability quotient, whereas the numeric keypad ensures holistic functionality for aspiring programmers. Overall, the Lenovo IdeaPad 3 is a well-balanced notebook for college students, provided aesthetics and display quality aren’t your priorities.
- 10th gen processor
- A sizable chunk of RAM
- Touch screen display
- Integrated numeric keypad
- Long-lasting battery module
- Restricted storage space
- The display isn’t bright enough
Best 17-inch Display: Dell Inspiron 17 (Staff Pick)
Best described as a processing behemoth, the Dell Inspiron 17 is one of the more powerful and relevant notebooks on the market for computer science students and professionals.
Dell Inspiron 17 CPU
While some might consider the Inspiron 17 overkill for computer science, programming, and even running hours' worth of complex code, the futuristic spec sheet easily covers even the most complicated and resource-intensive requirements. At the core, you have the Intel Core i7-1065G7 processor, capable of reaching maximum clock speeds of up to 3.9GHz.
Dell Inspiron 17 GPU
The efficient, graphics-optimized chipset is perfectly complemented by the entry-level MX230 graphics card. This GPU ensures proper coverage in case your programming tasks include graphics-intensive apps and processes.
Dell Inspiron 17 RAM
Dell incorporates 16GB of RAM, allowing you to handle multiple tasks, codebases, and applications without slowing down the system. Plus, we tried running relevant applications like Octave, NetBeans, and MATLAB on this laptop and experienced top-notch performance.
Dell Inspiron 17 Storage
We were equally impressed by the storage support, with Dell offering a 2TB hard drive for the more static files, followed by a 256GB SSD for Windows 10 and the more demanding computer programming applications.
Dell Inspiron 17 Display
The 17.3-inch screen gives most students plenty of room to work with. Plus, the 1080p panel features anti-glare support, making it one of the better choices for continued usage.
Dell Inspiron 17 Ports
Productivity-wise, the Dell Inspiron 17 makes room for a standard, backlit keyboard, potent speakers, HD webcam, and a decent set of ports, comprising Type-C, USB 3.1 Gen 1, HDMI, and other resources.
Dell Inspiron 17 Design
Despite being a heavy notebook at 6 pounds, the Dell Inspiron 17 is still an effective computing resource, courtesy of Wi-Fi AC and BT 4.1 wireless standards. Plus, the standard battery module allows you to churn out close to 7 hours on a single charge.
Therefore, if you are interested in purchasing a future-proof notebook that can help with academic and even professional assignments, the Dell Inspiron 17 is the perfect resource to consider.
- Massive storage capacity
- 10th gen, future-proof processor
- Anti-glare display
- Durable keyboard
- Obsolete connectivity standards
- Heavier than usual
Best For College Students: Acer Aspire 7 (Budget Pick)
The Acer Aspire 7 comes forth as one of the more complete laptops on the list, best suited for students, professionals, and even mid-range gaming enthusiasts. This laptop allows you to manage most academic and professional processes from home, courtesy of the inventive specs sheet. This mid-range machine brings a great balance between price and power.
Acer Aspire 7 CPU
At the core, Acer houses the Intel Core i5-9300H processor. The existing mobile chipset features quad-core architecture and allows you to manage single and multi-core processes with ease. Plus, this mobile SoC can also reach computing speeds of up to 4.1GHz, depending on the tasks at hand.
Acer Aspire 7 GPU
In addition to a 9th gen processor, Acer also incorporates a mid-range, GTX 1650 graphics card. The existing GPU allows you to run most AAA titles whereas the 4GB VRAM ensures that most graphics-intensive tasks are managed without bottlenecks.
Acer Aspire 7 RAM and Storage
You also get access to 8GB RAM for seamless multitasking followed by a 512GB solid-state drive. The existing SSD module loads multiple files in almost no time whilst booting up the Windows 10 Home OS at lightning speeds.
Acer Aspire 7 Display
As far as the display is concerned, the Acer Aspire 7 makes room for a 15.6-inch screen with a resolution of 1920 x 1080 pixels. Other resourceful attributes include a standard backlit keyboard, dual-band wireless support, a Gigabit Ethernet port, an interactive webcam, and a durable chassis with a 180-degree rotatable hinge to work with.
Acer Aspire 7 Design
Coming to the heft, the Aspire 7 weighs 4.74 pounds, which is good enough for extensive WFH usage. Plus, the wide range of connectivity schemes followed by an 8-hour battery backup further the credibility of this efficient and functional laptop.
For a computer engineering student interested in academics and leisure at the same time, the Acer Aspire 7 comes forth as one of the more all-inclusive computers to purchase.
- H-series processor that guarantees power
- GTX 16-series GPU for better graphics
- Sizable storage support
- Dual-lane SSD for faster file retrievals
- Stellar display
- Aesthetic design
- Lacks Wi-Fi 6 support
- Middling RAM support
Best 2-in-1 Laptop: HP Spectre x360 15T
We tested the HP Spectre x360 15T, which comes with a 4K high-resolution touch screen, an Intel Core i7-10510U processor, 16GB of RAM, and a 1TB SSD, accompanied by a 2GB NVIDIA GeForce MX250 graphics card.
HP Spectre x360 15T Design
The Spectre line of HP laptops is known for its sophisticated and elegant looks, and the new edition does justice to the existing lineup. The Spectre can put almost any high-end laptop to shame when it comes to aesthetics.
Weighing just about 4.83 pounds and measuring just 0.6 inches thick, this laptop can compete with almost any portable laptop out there in terms of size and weight. This portability especially comes in handy for students who like to travel light.
HP Spectre x360 15T Display
The 4K high-resolution display is striking. We were especially impressed with the sharp and bright colors. The 4K touch screen is responsive, the feel of touch is vivid, and it makes the whole experience of using stock drawing apps like Snip and Sketch a whole lot more exciting.
This experience can come in handy if you’re using wireframing tools to plan the UI/UX of your application and don’t want to go through the hassle of using a pen tablet before you proceed to write code.
HP Spectre x360 15T CPU, RAM, and Storage
The combination of the Core i7 processor, 16GB of RAM, and a 1TB SSD makes heavy-performance usage a cakewalk. You could be running a bunch of Chrome tabs while debugging and compiling code and watching your favorite movie in 4K, all at the same time. All this, topped off with decent battery life, quality speakers, and essential security features, makes it a perfect choice for Computer Science students.
- Elegant looks
- Good battery life
- Quality speakers
- The 4K touch display drains the battery quickly
Budget School & Gaming Laptop: Asus TUF FX505DT
The Asus TUF FX505DT is an affordable, AMD-powered notebook that brings quite a few innovative and useful features for aspiring computer science majors.
Asus TUF FX505DT CPU
There is no shortage of processing power, with the Asus laptop making room for the Ryzen 5 3550H chipset. Despite being slightly prone to heating, this mobile SoC is good enough for most applications and emulator-heavy resources like Android Studio, provided you take up one process at a time.
Asus TUF FX505DT GPU
The processor reaches a boost clock speed of up to 3.7GHz, which is more than adequate for coding and software development. However, Asus still incorporates a GTX 1650 GPU, loaded with 4GB of VRAM.
The featured CPU-GPU combination, therefore, allows you to manage every app, task, and even mid-range games like Fortnite, WOW, etc., without breaking a sweat.
Asus TUF FX505DT RAM and Storage
For the RAM fanatics, Asus introduces 8GB of system memory. However, our only gripe has to be the inclusion of a basic 256GB solid-state drive, which is slightly middling considering the diverse requirements of a computer science student. Still, the SSD is pretty fast and boots up the Windows 10 OS in virtually no time.
Asus TUF FX505DT Display
Next in line is the 15.6-inch 1080p screen. While the display doesn't come with additional bells and whistles, it manages to offer a pleasing visual experience to coders and programmers.
Asus TUF FX505DT Keyboard and Peripherals
Other reliable peripheral-specific resources include a backlit keyboard, surround sound speakers, a wide-angle webcam, and a rugged chassis. Despite the durability, the TUF FX505DT weighs slightly less than the 5-pound mark.
Asus TUF FX505DT Ports
Not just that, you also get a pretty diverse connectivity suite, with Type-A ports, Wi-Fi AC, and BT 5.0 being the usual inclusions.
The battery life isn't top-shelf, with Asus only managing to offer around 5 hours' worth of autonomy. Even so, this remains a dependable budget machine for a computer science engineering student.
- Reliable processor
- Mid-range graphics card for better performance
- Light yet durable
- Sharp viewing angles
- Chunkier than usual
- Middling battery backup
- Subpar storage support
Frequently Asked Questions
Should I get a gaming laptop if I’m not much of a gamer?
You could opt for a gaming laptop even if you’re not a gamer solely for performance reasons since most gaming laptops are heavily equipped with solid specs. These machines can easily handle anything from your curriculum, but if you’re keen on design and looking for a lighter laptop, then an outright gaming laptop might not suit you.
What kind of laptop do I need as a Computer Science student?
No one size fits all when it comes to laptops. It helps to zero in on a laptop after understanding your interests and usage needs. Go through the recommended specifications listed above to understand the purpose of the machine's different components.
What’s an SoC, aka System On a Chip?
SoC is short for System-on-a-Chip. It's the brain of a device and is usually used in mobile computing, such as tablets, smartphones, smartwatches, and netbooks. An SoC combines multiple components, such as the CPU, GPU, NPU (Neural Processing Unit), modem, etc., into a single chip.
Do I need to get an external laptop cooling pad if I run heavy programs for extended hours?
Vet this list thoroughly, and you’ll find laptops with efficient cooling as a benchmark feature, so if you know that you’re going to be running heavy programs or gaming for extended hours.
Should I go for an SSD or HDD?
Suppose you’re going to be compiling a lot of code and plan to use the laptop for machine learning applications. Like MATLAB in the future, it’s best to go for SSD; in some cases, a spec-rich laptop could lack an SSD, so make sure you diligently pay attention to this aspect. Are you still confused? Please read our detailed comparison of SSD vs. HDD.
I like the performance and feature-rich laptop, but it’s bulky. Is it a worthy tradeoff?
Many performance and feature-rich laptops are a tad too heavy for some to carry. Including some laptops in this list; that’s one of the tradeoffs of getting a spec-rich laptop. At an economical price point, if you cannot afford a spec-rich laptop packed in a small chassis and can muster enough strength to move around with a bulky laptop, then do it for the specs. As a computer science student, you don’t know what interest you might develop tomorrow and where your coding journey will take you. | <urn:uuid:7fe7ae95-2862-4ec8-ae0c-e31fd001b330> | CC-MAIN-2022-33 | https://www.laptopsreviewes.com/best-laptops-for-computer-science-students/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00697.warc.gz | en | 0.889178 | 4,897 | 2.59375 | 3 |
The Campfire v. the Podium: the Persuasive Power of Storytelling
by Daniel Riggs
The original blog can be found: https://smallwarsjournal.com/jrnl/art/campfire-v-podium-persuasive-power-storytelling
The Lesson of Peter Venkman: An Introduction to Stories and Narratives
When discussing narratives and stories, an iconic film gives us an insight into their persuasive power: Ghostbusters. Towards the end of the film, the Ghostbusters (the protagonists) are in the NYC Mayor’s office pushing their solution to the paranormal bedlam and looming problem of Gozer, destroyer of worlds. Walter Peck of the EPA (one of the antagonists) is also present. He intends to scapegoat the Ghostbusters for his actions, namely releasing the contained ghosts that terrorize the city. Both are battling for the mayor’s approval. Peck is first up and provides an argument and accusation:
Walter Peck: “I’m prepared to make a full report. These men are consummate snowball artists! They use sensitive nerve gases to induce hallucinations. People think they’re seeing ghosts! And they call these bozos, who conveniently show up to deal with the problem with a fake electronic light show!”
The Ghostbusters instead paint a powerful narrative that better illustrates the Gozer problem and the paranormal crisis in contrast to a blame game:
Peter: Well, you could believe Mr. Pecker…Or you could accept the fact that this
City is headed for a disaster of biblical proportions.
Ray: What he means is Old Testament biblical, Mr. Mayor. Real wrath-of-God-type stuff.
Fire and brimstone coming from the sky! Rivers and seas boiling!
Egon: Forty years of darkness! Earthquakes! Volcanoes!
Winston: The dead rising from the grave!
Peter: Human sacrifice, dogs and cats living together, mass hysteria!
Mayor: Enough! I get the point! What if you’re wrong?
Peter brilliantly drives home their persuasive pitch:
Peter: If I’m wrong, nothing happens! We go to jail. Peacefully, quietly. We’ll
enjoy it! But if I’m right, and we can stop this thing; Lenny, you will have saved
the lives of millions of registered voters.
Peter shrewdly paints a persuading and illuminating story, not a rational argument. This story casts the mayor (not the Ghostbusters) as the hero of New York. Peter has given the mayor the chance to live out a protagonist archetype in an existential threat, not rationally weigh costs and benefits. In contrast, Walter Peck believes rationality, bureaucracy, and processes have the power for meaningful behavior change. But the mayor, like any human, does not want rationality. He wants a persuasive narrative and story that satisfies his human desires and dreams, a universal impulse.
Currently, US Army Psychological Operations (PSYOP) doctrine wants its Psychological Operations personnel (PSYOPers) to be like Walter Peck and use rational arguments as the central means to facilitate behavior change. It states, “the main argument is the reason that the Target Audience (TA) should engage in the desired behavior” and “the general format for this main argument is engaging in X (desired behavior) will result in Y (desirable outcomes for the TA)” (Department of the Army [DA], 2007, 2-90).” Unfortunately, this tactic is reductive and coarsely transactional.
Operationally, this philosophy of behavior change, along with other factors (Mayazadeh and Riggs, 2021), has failed the Department of Defense (DOD) in the Information Environment (IE) over the past few decades. The following will argue (ironically enough) narratives and stories are more persuasive in altering behavior than logical arguments. After an initial definition of critical terms, the following will detail the evolutionary, biological, and epistemological reasons why narratives and stories are persuasively superior and why arguments fail at the individual level and in the “marketplace of ideas.”
The first term requiring definition is behavior. Generally understood, behavior is “a form of conduct towards others and responses to any external stimulation, the “mechanistic function of a thing,” a kind of “alignment with societal norms and mores,” or the actions of someone or something in a particular situation as a response to exogenous stimuli (Meriam Webster, Cambridge, Dictionary.com). For this essay, the PSYOP field definition fits: “overt actions exhibited by individuals” (DA, 2007, 2-9). This definition’s strength refers to a shift in the overt and observable actions of individuals and not whether it is a contextual and temporal violation of societal norms/mores. Therefore, it provides an understanding of behavior that is more objective than subjective norms.
Narrative and Stories
Central to this discussion are narrative and story. Narratives often appear as a buzzword in contemporary discourse with a subsequent requirement to “frame it” or “seize it.” The Assessing Revolutionary and Insurgent Strategies (ARIS) Project does an excellent job in providing substance to the definition of narrative: “stories/accounts of events, experiences, whether true or fictitious” (Agan, et al., 2016, 8). The online service Master Class provides an equally illuminating definition, with notable storytelling luminaries such as Neil Gaiman, David Sedaris, Margaret Atwood, and David Baldachi providing this definition: “a way of presenting connected events in order to tell a good story” (MasterClass, 2021). One can discern from these two definitions (from two credible sources) that narratives are the framing devices and techniques that yield different interpretations of stories.
This current emphasis on narratives started with Post-modern philosophers Jacques Derrida and Michael Foucault. They were some of the first to view narratives not just as hermetically sealed, objective items. They are culturally dependent, constructed, and critical in understanding social dynamics, beliefs, and phenomena (Agan et al., 2016, 5). Post-modern insights inspired literary historians and theorists to view and study narratives as a dynamic and analytical tool to understand people, their beliefs, their environment, and their behavior in it (Zweibelson, 2011).
Military intellectual spheres have since acknowledged the power and importance of narrative and have started inserting this language into doctrine. Doctrinal Guidance instructs staff planners to “coordinate and synchronize narratives” (Department of Defense [DOD], EXEC SUMM-18) and “provide a coherent narrative to bridge the present to the future” (DOD, I-3), which will provide a more precise focus for the commander (DOD, III-46). Unfortunately, narratives are not as easy to divine as a tangible adversary capability. Narratives are transitory, adjusted, or steered beyond the direct control of an organization or society and shaped by heterarchical and hierarchical entities in a participatory/iterative fashion (Zweibelson, 2011). They subsequently establish particular and often enduring messages (Zweibelson, 2011) that shape the world. They are not empty stories.
But understanding and utilizing narratives requires PSYOPers to understand stories, which are the frame for narratives. Just as a great painter must have expert dexterity, spatial awareness, and a sense of geometry in addition to fluid dexterity, fashioning narratives requires knowledge of storytelling. To some, this might appear to be a chic trend, but the importance of stories cannot be understated. Like narratives, stories shape an internal sense of self, the cultures and societies humans belong to, and the knowledge that allows humans to act within the world (Agan, et al., 2016, 1). Stories are how people want to receive information. Before Athenians were listening to the rigorous logic of Socrates in the Agora, humans were consuming stories as the basis for belief (and subsequent behavior) and coding this preference into the species’ DNA.
Evolution and Stories
A species’ continuation is contingent on adjusting to the changing order of its given environment. Humans will undergo physical changes and develop heuristics to create advantages to increase survivability. A species outward changes (e.g., a hedgehog beginning to develop quills 24 million years ago and emerging as a porcupine) are easy to spot. Still, the evolved mental heuristics due to environmental requirements are just as important. Stories and storytelling are one of these valuable evolutionary heuristics because they increase survivability. As it is evolutionarily beneficial, it thus has no geographic boundary.
Storytelling is universal. It occurs in every culture and from every age (National Geographic-Storytelling and Cultural Tradition). Cave drawings from 30,000 years ago depicting animals, humans, and other objects represented visual stories (National Geographic-Storytelling). These drawings were not just an aesthetic impulse for early man but a conceptual means to understand an environment undoubtedly considered chaotic and dangerous. Stories (including cave drawings) were a means to allow one to feel in control and make sense of the events in the random world (National Geographic-Storytelling) and even find recurrent formulas and patterns to traverse the chaos. When humans receive new knowledge (or a revelation), the static data can become dynamic intelligence that increases survivability or potential to thrive. Stories pieced together represent efficacious and crucial data that assists in formulating a better picture that might enhance survival. A story can result in a far more satisfactory sense of certainty than the previously unknown (Guzman, et al., 2013, 1186). Repeated over time, this has created a natural preference for humans.
The Agta Tribe of the Philippines, a hunter-gatherer community structured similarly to early pre-modern man, has helped corroborate this hypothesis. A 2017 study shows how a pre-modern hunter-gather community, in this case, the Agta, benefits from storytelling (Smith, D., Schlaepfer, P., Major, K. et al., 2017). The researchers conclude that storytelling may have played an essential role in the evolution of human cooperation by broadcasting social and cooperative norms to coordinate group behavior (Smith, D., Schlaepfer, P., Major, K. et al., 2017). Stories would “silence egos and perform the adaptive function of organizing cooperation in hunter-gatherers” (Smith, D., Schlaepfer, P., Major, K. et al., 2017). In contrast to philosophers such as Thomas Hobbes speculated pre-modern society as “solitary, poor, nasty, brutish, and short” and a “constant war of every man against every man.” However, finding a means to cooperate was the norm not the exception. Stories helped facilitate survival.
Sexual opportunities are also numerous for an expert storyteller and their ability to engender cooperation. Due to their value, skilled storytellers were (and are) picked more as mates and are more likely to reproduce” (Smith, D., Schlaepfer, P., Major, K. et al., 2017). Without too much explanation, it should be obvious why this would be an incentive.
Biological Rewards of Storytelling
The evolutionary advantages of storytelling had a natural and logical impact on human biology. To continue to value stories as a means of survival, humans evolved to receive certain stimuli from storytelling. Captivating stories provide not just a pleasurable means of escape but chemical rewards for receptive audience members.
The neurotransmitter dopamine plays a crucial role in how we feel pleasure and assists one’s ability to think, plan, strive, and receive inspiration (“Dopamine”). It also aids in memory and processing information to help humans to break down complexities, explore big themes and questions through a narrow lens that stories provide (Padre, 2018). One key finding concludes compelling storytelling releases dopamine for listeners (Padre, 2018). Memorable and captivating stories activate multiple parts of the brain leading to increased information (e.g., facts, figures, and events) retention, which correlates to an increased capacity for behavior change (Padre, 2018). A story is not just fun but a chemically attractive way to receive information, unlike arguments.
The brain also releases oxytocin in the presence of an impactful story (Padre, 2018). Oxytocin, the hormone typically associated with pregnant and nursing mothers (DeAngelis, 2008), plays a substantial role in social affiliation and bonding overall. Studies from Nature (Kosfeld, M., Heinrichs, M., Zak, P. et al., 2005, 675) and PLoS ONE (Zak PJ, Stanton AA, and S. Ahmadi, 2007) had shown the introduction of oxytocin increased generosity in social experiments.
The release of oxytocin during storytelling means participants are far more likely to be receptive to the lines of persuasion in the story. Instead of having a defense up, the release of oxytocin via stories helps to modulate anxiety (Guzman, et al., 2013, 1186), which is one of the primary evolutionary reasons why stories developed the importance that they did.
The Dynamic of the Story
Stories, by their nature, are also far more accessible due to their reduced threat to the listener. Storytelling is a collaborative, non-hierarchical process involving the learners as active agents in the learning process rather than passive recipients (Padre, 2018). In contrast, when presented with an argument, the immediate response is defense. An argument is a formal and direct challenge to someone’s beliefs, which many regards as the means for daily survival. By their nature, arguments attempts to overturn beliefs. A natural (read evolutionary) defense arises to counter an argument (“Immersed,” 2018). Depending on one’s commitment to a given belief, an argument becomes an existential threat or an obstinacy to daily living.
Stories circumvent these evolutionary instincts and stealthily challenge entrenched views without listeners knowing it via “narrative transportation” (Mitra, 2017). According to Dr. Melanie Green, “narrative transportation is the experience people have when they become so engaged in a story that the real world just falls away and results in a suspension of disbelief or reduction of counter arguing (“Immersed,” 2018). A new (but analogous) setting serves as an accessible and safe proving ground for new ideas. Relatable characters and analogous situations can further enhance this suspension of disbelief as the listener/viewer has something to empathize with (“Immersed,” 2018). These practical elements of stories reflect the phenomenon of isopraxism. Isopraxism is an animal neurobehavioral (humans included) that involves mirroring speech patterns, vocabulary, tone, tempo, etc., that helps build rapport (Voss, 2017, 35). It is fair to assume that mirroring comparable experiences via story elicits this reaction and allows defenses to drop. After all, humans fear what is different and chose what is similar for survival (Voss, 2017, 36).
The original Star Wars trilogy, for instance, has this power. When the viewers first see Luke, they see someone longing for something more and looking to the stars for adventure and self-actualization. Luke’s situation mimics a very human feeling of feeling trapped by our environments, localities, and family commitments and wanting more. The subsequent hero’s journey, which goes back millennia, is also familiar. Even though viewers see far off Tattonie, they feel they are there because it grabs at universal impulses and reflects standard and successful aesthetic scaffolding. Star Wars generates narrative transportation, focuses the audience’s attention, elicits strong and emotional reactions and generates vivid mental images (“Immersed,” 2018). The viewer is not just passively viewing images. The transported participant maintains story-consistent beliefs even after exiting the experience (“Immersed,” 2018). They are inspired and ready to act. Maybe that action is a behavior change not considered before the story.
For US Army PSYOP, the “so-what” is that narrative transportation through story is more likely to show attitude and belief change (“Immersed,” 2018) leading to behavior change. Arguments do not get to behavior change, but stories will. There might be something uncomfortable with the thought that humans are irrational and require stories. However, modern behavioral science backs up this pre-modern pedagogy and forces us to come to terms with human fallibility.
The Faulty Apollonian: Bias of the Argument
Since Socrates, there seems to be the belief in the West that man is a rational animal. Concurrent with physical maturation, a human’s developing brain increases its capacity for rational thought to figure out problems in the world as they grow into adults. In the 20th century, psychologist Lawrence Kohlberg helped buttress this belief through his theory of moral development. Kohlberg’s research contends that ethical behavior is contingent on moral reasoning (Kohlberg & Hersch, 1977, 54). Kohlberg’s process follows a linear and six-stage path as children reason their way to notions of justice (Kohlberg & Hersch, 1977, 56). Kohlberg’s work posits that even at the earliest stage, rationality is humanity’s epistemological default.
Jonathan Haidt’s 2012 The Righteous Mind challenged this hypothesis. Haidt theorizes beliefs (which pre-figure behavior) come through intuition and that reason is merely post hoc justification for the driving emotions (Haidt, 2012, CH 3-3:37). Logic and formal rationalism to establishing truth are not universal, and Haidt sees three models of choice, not just one (Haidt, 2012, CH 3-3:35):
- David Hume: Passions Rule, and Reasoning Comes Second.
- Plato: Reason Could and Should Rule (i.e., rationalist model).
- Thomas Jefferson: The Passion and Reason are “co-emperors.”
Echoing David Hume, Haidt argues that the rationalist model are decidedly exaggerated or non-existent for most people (Haidt, 2012, CH 5-34:27). Most of us are irrational. Kohlberg and other rationalists fail to understand that their epistemology and ethics is Western and in the minority. They are WEIRD: Western Educated Industrialized Rich Democratic.
Someone like Kohlberg and many other rationalists hail from countries that are consistent psychological outliers compared to 85% of the world’s population (Talhelm, 2015). They explain behavior and categorize objects analytically (Talhelm, 2015). In contrast, most majority of people think more intuitively—what psychologists call “holistic thought” (Talhelm, 2015). Argumentation and reason are not going to cut it. Even critical words in the language are deeply established and entrenched to have a specific meaning (Haidt, 2012, CH-3 14:24). When it comes to understanding behavior, rationalists are, in a sense playing tennis against a backboard on a hard court while everyone else is playing tennis with an actual opponent on clay (i.e., thinking holistically).
The doctrine for Military Planners and PSYOPers reflects the rationalist and analytical model for behavior and action. It is no wonder that WEIRD-inspired philosophy cannot to work in much of the world because it lacks self-reflection and fails to illuminate. The rationalist approach reflects the biases of planners and fails to empathize with target audiences.
The subsequent failure of this approach can be explained away via a type of updated false consciousness echoing Thomas Franks What’s the Matter with Kansas. Franks argues that the 2004 defeat of John Kerry was due to the population failing to understand the benefits of the Democratic Party. Similar thinking allows planners to fall back to faulty processes and merely blame the audiences. With humanity’s propensity for following intuition and irrational impulses, it is evident that logic and reason will always be second best.
The Unsecure Marketplace of Ideas
The micro failure to understand most individual’s decision-making processes (i.e., through emotions, intuition, and stories) extends to the macro level in PSYOP doctrine. One of the faulty propositions in PSYOP doctrine is that the most persuasive arguments emerge due to their logical strength and ability to appeal to rational needs and wants (DOD, 2007, 2-90 to 2-93). A target audience will receive a series of arguments and rationally select the best one with some vulnerabilities targeted. Subsequently, the desired behavior reflects a rational choice by the target audience.
The “marketplace of ideas” metaphor informs this doctrinal rationale, which states that rational consumers will carefully weigh the relative quality of products/ideas, like in a market economy (Gordon, J., 235, 1997). In the analogous “marketplace of ideas,” the most rational and just products (i.e., ideas) stick around, and mediocre ones fall to the wayside. The marketplace of ideas is self-regulating and minimizes subversion. However, if Haidt’s work is the most robust explanation for how people come to their beliefs, the marketplace of ideas (based on a classical behaviorist model) cannot remain sacrosanct.
Public Intellectual Curtis Yarvin details how this occurs. In the marketplace of ideas, no one is theoretically in charge. In theory, it is self-regulating and secure like blockchain (Quiones, P, 2020, 27:30). However, various means can manipulate it. These include (1) deliberate coercion (i.e., a specific message will be heard or silenced), (2) positive measures (i.e., the state or other power subsidize favorable influence entities), and (3) the state leakage of information (Quiones, P., 2020, 31:15). Truth and rational arguments have currency, but the marketplace favors stories. An argument typically does not have the evolutionary staying power of narratives that provide dopamine and oxytocin.
What succeeds in the marketplace of ideas are narratives and stories that satisfy physical desires to receive the natural chemical enhancement. Compelling stories and narratives in the marketplace of ideas do not necessarily say how X solves Y. For instance, how does it logically follow that (X) I support the Democratic Party because (Y) Black Lives Matter (Quiones, P., 2020, 42:45)? Dominant narratives and stories are often non-sequiturs, not arguments, which satiate human desires and inflate the egos of target audiences. Those forming winning narratives and stories do not expect the target audience to be informed King Solomons.
However, the audience of these successful narratives does feel kingly. There is a beneficial power exchange between the successful narratives and stories in the marketplace of ideas and the audience (Quiones, P, 2020, 41:30). Successful narratives and stories in the marketplace of ideas reward its followers with a feeling of power which people invariably enjoy and subsequently want to receive more of (Quiones, P., 2020, 38:30). Returning to the evolutionary point, humanity’s cave ancestors wanted to feel as they were in control of a chaotic world. With COVID and many other displacing phenomena, it can feel just as irritating and messy. If someone can receive a sense of power (and do so while passive), they will elect to do this every time. If an idea is going to flourish in the marketplace of ideas, it requires (1) the target audience to feel important and (2) serve the power structure (Quiones, P. 2020, 43:35). It does not require rationality.
Let us take an American example, the 2017 Parkland Tragedy. Someone reads about the 17 innocent people who died that day. If it is from the New York Times or CNN, it provides a left-of-center interpretation. If it is Fox News or the Wall Street Journal, it provides a right-of-center interpretation. These outlets offer narratives and stories to excite the reader’s emotions and make them feel like they matter (Quiones, P., 2020, 43:55). When one reads the left side, a reader feels energized to post on social media calling for strict gun control and providing a romantic story of the civilized European Countries. On the right, one reads accounts of past totalitarians who seized private firearms and prophesized a future big brother. A rational look at first principles to include personal security or security production is far from the list of priorities in modern discourse.
The powerful stories online are not just passive consumption of information but enable the reader to feel as if they are digitally marching on Selma. Again, how exactly does (X) gun rights or gun control solve the (Y) issue of school violence? It does not require a rational response because storytellers have accomplished the narrative’s intent. The audience feels powerful. Reason would say these responses are non-sequiturs, but it does not matter. In the summer of 2020, plenty felt pedaling their Peloton contributed to Black Lives Matter (Quiones, P. 2020, 45:25) because the story of fighting for civil rights could be satisfied with oxytocin and dopamine received along with every burned calorie on a bike seat.
Moving Forward with Stories
If PSYOPers are to be successful, they need to reexamine the first principles in doctrine. If it is true that the US Military is inherently WEIRD, it ought to recognize it and adjust accordingly. Future publications should look to the lessons of Joseph Campbell, not just Clausewitz, to understanding how to understand what motivates people. The insights from a Campbell type might reveal cultural considerations, tensions, vulnerabilities, and opportunities to accomplish strategic goals. This essay is not a call for increased budgets and the newest tech but challenges PSYOPers to travel to the past and access the right side of the brain cognition to develop holistic doctrine.
Agan S.D, Haufler, A.W, Lauber, S. and G. Pinczuk. (2018) Assessing Revolutionary and
Insurgent Strategies Series: Narratives and Competing Messages (2nd ed.,). The United States Army Special Operations Command, Fort Bragg, North Carolina
Cambridge Dictionary. (n.d.) Behavior. In Dictionary.Cambridge.com. Retrieved June 16, 2021,
DeAngelis, T. (2008). “The Two Faces of Oxytocin.” American Psychological
Association, 39 (2). https://www.apa.org/monitor/feb08/oxytocin
“Dopamine.” (n.d.) Psychology Today.
Gordon, J. (1997). “John Stuart Mill and the ‘Marketplace of Ideas.’” Social Theory and Practice.
23 (2), 235-249.
Guzmán, Y. F., Tronson, N. C., Jovasevic, V., Sato, K., Guedea, A. L., Mizukami, H.,
Nishimori, K., & Radulovic, J. (2013). “Fear-enhancing effects of septal oxytocin receptors.” Nature Neuroscience, 16(9), 1185–1187.
Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion.
Unabridged. [United States]: Gildan Audio.
Hersh, R.H. & Kohlberg, L. (1977). “Moral development: A review of the theory.” Theory Into
Practice, 16 (2), 53–59. https://doi.org/10.1080/00405847709542675
“Immersed, Transported & Persuaded by Story.” (2018) Wonder, June 11, 2018
Kosfeld, M., Heinrichs, M., Zak, P. et al (2005). “Oxytocin increases trust in humans.” Nature.
435, 673–676. https://doi.org/10.1038/nature03701
Mayazadeh, S. and D. Riggs. (2021) “Blockchain and Psychological Operations.”
Over the Horizon, January 20, 2021.
MasterClass Staff. (2020). “4 Types of Narrative Writing.” MasterClass
Merriam-Webster. (n.d.). Behavior. In Merriam-Webster.com dictionary. Retrieved June 16,
2021, from https://www.merriam-webster.com/dictionary/behavior
Mitra, R. (2017). “Tell me a story: narratives, behaviour change and neuroscience.”
BBC Media Action, June 17, 2017.
National Geographic Society. (n.d.). Storytelling. National Geographic Resource Library
Retrieved January 31, 2021, from https://www.nationalgeographic.org/encyclopedia/storytelling/?utm_source=BibblioRCM_Row
National Geographic Society (n.d.). Storytelling and Cultural Traditions. National Geographic
Resource Library. Retrieved January 31, 2021, from
Padre, J. (2018). “The Science of Storytelling: How Storytelling Shapes Our Behavior.” Media
Partners, August 6, 2018.
Quiones, P. (2020). “Why the Left Always Wins with Curtis Yarvin” (454) [Audio podcast
Episode]. Free Man Beyond the Wall Podcast. Libertarian Institute.
Reitman, I. (1984). Ghostbusters. Columbia Pictures.
Smith, D., Schlaepfer, P., Major, K. et al. (2017) “Cooperation and the evolution of hunter-
gatherer storytelling.” Nature Communications 8, 1853.
Talhelm, T. (2015). “Liberals are WEIRDer than Conservatives.” The Righteous Mind Blog,
January 15, 2015.
U.S. Department of the Army. (2007). Psychological Operations Process Tactics, Techniques,
and Procedures (FM 3-05.301).
U.S. Department of Defense (2020). Joint Publications-Joint Planning (JP 5-0).
Voss, Christopher (2017). Never Split the Difference: Negotiating as If Your Life Depended on
It. Harper Business: New York, NY.
Zweibelson, B. (2011). “Breaking Barriers to Deeper Understanding: How Post-Modern
Concepts Are ‘Value-Added’ to Military Conceptual Planning Considerations.”
Small Wars Journal, September 21, 2011.
Zak P.J., Stanton A.A., and S. Ahmadi (2007) Oxytocin Increases Generosity in Humans. PLoS
ONE 2(11): e1128.
The images of Biblical destruction also target the mayor’s likely vulnerabilities as a Christian. It is probably not a coincidence the mayor has New York City’s Catholic Cardinal in his Crisis Room as an advisor and looks to him for the final say.
Ludwig von Mises understanding of human action (in the aptly named Human Action) via praxeology might be instructive for PSYOPers: “Human action is purposeful behavior. Or we may say: Action is will put into operation and transformed into an agency, is aiming at ends and goals, is the ego’s meaningful response to stimuli and to the conditions of its environment, is a person’s conscious adjustment to the state of the universe that determines his life.”
One issue in the doctrine requiring further research and elucidation is narrative. Narratives are not as well-articulated and defined as they should be, nor is there is a defined means for developing, identifying, or countering narratives.
Imagine the cost of a small battle with an opposing tribe a few millennia ago. Even in victory, a simple cut might be enough for infection and subsequent death. Would you not want to exhaust every possible option before tribes charge into battle?
See Jill Gordon’s “John Stuart Mill and the ‘Marketplace of Ideas” in Social Theory and Practice Vol. 23, No. 2 for a further explanation of how this metaphor was ascribed to Mills though it doesn’t reflect his opinions on how to protect free speech.
A critical concern for state entities is how the fifth estate will narrate their actions. At this point, the state is leaking influence (Quiones, P., 2020, 35:30). This dynamic mimics the relationship between tiger Richard Parker and Pi Patel on their survival raft in Life of Pi: Pi has to find fish for the tiger or become lunch. | <urn:uuid:264c4877-2af0-430f-9701-c08d10a4acd3> | CC-MAIN-2022-33 | https://aodnetwork.ca/the-campfire-v-the-podium-the-persuasive-power-of-storytelling/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00096.warc.gz | en | 0.909419 | 6,953 | 2.515625 | 3 |
3 June 2003
Positive experiences of autonomous regions as a source of inspiration for conflict resolution in Europe
Political Affairs Committee
Rapporteur: Mr Gross, Switzerland, Socialist Group
Most present-day conflicts no longer occur between states but within states and are rooted in tensions between states and minority groups which demand the right to preserve their identities. These tensions are partly due to the territorial changes and the emergence of new states which followed the two world wars and the collapse of the old communist system, and also reflect the inevitable development of the concept of the nation-state, which, hitherto, viewed national sovereignty and cultural homogeneity as essential.
Autonomy as applied in states governed by the rule of law can be a source of inspiration in seeking ways to resolve internal political conflicts. Autonomy allows a group which is a minority within a state to exercise its rights while providing certain guarantees of the state’s unity, sovereignty and territorial integrity.
The autonomous status may be applied to various systems of political organization and means that autonomous entities are given specific powers, either devolved or shared with central government while remaining under the latter’s authority.
In order to provide the right conditions for the permanence of autonomy, the report recommends compliance with a number of basic principles, including the creation of a legal framework for autonomous status, a clear division of powers and the establishment of democratically elected legislative and executive bodies in autonomous regions.
I. Draft Resolution
- The resurgence of tensions in Europe, varying in intensity and frequently the product of unresolved conflicts within states, remains a cause of concern to the Parliamentary Assembly. Today, indeed, most political crises in Europe occur within states.
- These renewed tensions are partly due to the territorial changes and the emergence of new states which followed the two world wars and the collapse of the former Communist system in the 1990s.
- These tensions also reflect the inevitable development of the concept of the nation-state, which viewed national sovereignty and cultural homogeneity as essential. Nowadays, particularly in view of developments in the practice of democracy and international law, States are faced with new requirements.
- Most of the present conflicts can very often be traced to the dichotomy between the principle of indivisibility of states and the principle of identity, and are rooted in tensions between states and minority groups which demand the right to preserve their identities.
- The vast majority of European states today include communities which have different identities. Some of these demand their own institutions and want special laws allowing them to express their distinctive cultures.
- States must prevent tensions from developing by introducing flexible constitutional or legislative arrangements to meet their expectations. By giving minorities powers of their own, either devolved or shared with the central government, states can sometimes reconcile the principle of territorial unity and integrity with the principle of cultural diversity.
- The Council of Europe, which is committed to peace and to the prevention of violence as essential to the promotion of human rights, democracy and the rule of law, believes that the positive experience of autonomous regions can be a source of inspiration in seeking ways to resolve internal political conflicts.
- Many European states have already eased internal tensions, or are now in the process of doing so, by introducing various forms of territorial or cultural autonomy, embodying a wide range of principles and concrete measures which can help to resolve internal conflicts.
- There is no denying that autonomy is a concept which can have negative connotations. It can be seen as a threat to the state’s territorial integrity and a first step towards secession, but there is frequently little evidence to sustain this view.
- Autonomy, as applied in states respectful of the rule of law which guarantee their nationals fundamental rights and freedoms, should rather be seen as a “sub-state arrangement”, which allows a minority to exercise its rights and preserve its cultural identity while providing certain guarantees of the state’s unity, sovereignty and territorial integrity.
- The term “territorial autonomy” applies to an arrangement, usually adopted in a sovereign state, whereby the inhabitants of a certain region are given enlarged powers, reflecting their specific geographical situation, which protects and promotes their cultural and religious traditions.
- The constitutions of most Council of Europe member states do not recognize the right to secede unilaterally. However, indivisibility must not be confused with the concept of a unitary state, and indivisibility of the state is thus compatible with autonomy, regionalism and federalism.
- The autonomous status may be applied to various systems of political organization, ranging from straightforward decentralization in unitary states to a genuine division of powers, either symmetrically or asymmetrically, in regional or federal states.
- In the past, autonomy was introduced in two stages, and originated in three ways, being established by regional entities when central states were founded, introduced to resolve territorial tensions, or sponsored by the international community.
- Autonomy is not a panacea, and the solutions it offers are not universally relevant and applicable. However, failures should be blamed, not on autonomy as such, but on the conditions in which it is applied. Autonomous status must always be tailored to the geography, history and culture of the area concerned, and to the very different characteristics of specific cases and conflict zones.
- With a view to relieving internal tensions, the central government must react with understanding when minority groups, particularly when they are sizeable and have lived in an area for a long period of time, demand greater freedom to manage their own affairs independently. At the same time, the granting of autonomy must never give a community the impression that local government is a matter for it alone.
- Successful autonomy depends on balanced relationships within a state between majorities and minorities, but also between minorities. Autonomous status must always respect the principles of equality and non-discrimination.
- All interpretation, application and management of autonomy shall be subject to the authority of the State, and to the will and judgment of the national parliament and its institutions.
- Positive discrimination, i.e. favourable representation in the organs of central government, can often be used to involve minorities more effectively in the management of national affairs.
- It is fundamental that special measures must also be taken to protect “minorities within minorities”, and ensure that the majority and other minorities do not feel threatened by the powers conferred on an autonomous entity. In these autonomous entities, the Framework Convention for the Protection of National Minorities must also be applied, for the benefit of minorities within minorities.
- The Assembly calls on the governments of member states to respect the following basic principles when granting autonomous status:
i. An autonomous status, which depends by definition on co-operation and co-ordination between the central government and autonomous entities, must be based on an agreement negotiated between the parties concerned.
ii. Central government and autonomous authorities must recognize that autonomous status is part of a dynamic process and is always negotiable.
iii. It would be appropriate for the statutes and founding principles underlying autonomous status to be included in the Constitution rather than in legislation alone so that amendments can only be made in accordance with the Constitution. To avoid later disputes, agreements on autonomous status must explicitly define the repartition of powers between the central and autonomous authorities.
iv. Agreements on autonomous status must guarantee appropriate representation and effective participation of the autonomous authorities in decision-making and the management of public affairs.
v. Agreements on autonomous status must provide that autonomous entities are to have legislative and executive authorities, democratically elected at local level.
vi. Agreements on autonomous status must provide for funds and/or transfers which allow autonomous authorities to exercise the extra powers conferred on them by central government.
vii. To ensure that powers are not abused, special machinery must be established to resolve disputes between central government and the autonomous authorities.
viii. If tensions between central government and the autonomous authorities persist, the international community should sponsor the negotiation process.
ix. Devolution of powers to autonomous entities must imperatively protect the rights of minorities living within them are ignored or suppressed.
II. Draft Recommendation
- The Assembly considers that autonomous status must always give the autonomous region concerned a legislative and an executive body democratically elected at local level. These bodies should have appropriate powers to pass laws and enforce them in the autonomous territory while remaining subject to the law and prerogatives of central government – as defined in the European Charter of Regional Self-Government adopted by the CLRAE.
- The Assembly believes that the adoption of a European legal instrument would enable states facing internal conflicts to find constitutional or legislative solutions which would allow them to preserve the state’s sovereignty and territorial integrity while respecting the rights of minorities.
- This legal instrument must stipulate that the exercise of powers devolved to autonomous entities shall comply with the provisions of the European Convention for the Protection of Human Rights and Fundamental Freedoms, particularly the principles of equality, non-discrimination and secularism.
- In this context, the proposals contained in the Helsinki Declaration (28 June 2002), which recognizes the possibility of formulating basic concepts and principles applying to all systems of regional autonomy, merit the attention of the Council of Europe’s member states.
- The Assembly accordingly recommends that the Committee of Ministers
- prepare a European legal instrument (Article 11 of the Declaration), based on the principles laid down in the European Charter of Regional Self-Government, taking account of the member states’ experience, and also making it possible to recognize and promote the common principles of regional autonomy, with respect for the European Convention for the Protection of Human Rights and Fundamental Freedoms and its principles of equality and non-discrimination.
Explanatory memorandum by the rapporteur 7
I. Introduction 7
II. Development of the concept of autonomy 8
III. Concept of autonomy and right to self-determination 9
a) Diversity of forms of autonomy 9
b) Diversity of institutional frameworks 10
c) Defining the scope of autonomy 11
d) Legal framework of autonomy 13
e) Positive and negative aspects of autonomy 13
IV. Case studies
a) The Åland Islands 14
b) Alto-Adige / South Tyrol 15
c) Factual comparison of the two most successful historical cases 16
d) Sri Lanka 17
e) Faeroes Islands 20
V. Conceptual clarifications
a) Right to internal and external self-determination 21
b) Autonomy as a system of conflict resolution 22
VI. Analysis of the functioning of autonomous entities
a) Political systems and division of powers 23
iii. Decentralisation in unitary states
iv. System of devolution
v. Free association
vi. Asymmetric territorial organisation
vii. Establishment of special status
b) Methods of sharing sovereignty 26
c) Settlement of disputes 26
VII. Identification of the basic factors for the success of autonomy
a) Legal design and criteria for short and long-term success 27
b) Geopolitical and demographic aspects 27
c) Political and institutional aspects 28
d) Social, economic and financial aspects 29
e) Cultural aspects 30
f) Respect for human rights 30
VIII. Some thoughts on resolving certain current conflicts by introducing the
concept of autonomy
a) Abkhazia and South Ossetia (Georgia) 30
b) Kosovo as part of Serbia and Montenegro 31
c) Chechnya (Russian Federation) 32
d) Transnistria (Moldova) 32
IX. Conclusion 33
III. Explanatory memorandum by the rapporteur1
- The Assembly is concerned about the upsurge in violent tensions in Europe, which is often an indication of unresolved antagonisms within a state. For a long time, political crises had their origins in tensions between states, but today the reasons for these tensions are more likely to be found within states. This is why more than half of the current wars are civil in nature and the result of cultural conflicts. Based on this observation, a motion for a Resolution (Doc 8425) was submitted to the Assembly on the Resolution of ethic conflicts in Council of Europe member states. This motion represents the original source of this report.
- This increase in tensions can be partly explained by the profound changes that Europe underwent after the collapse of the old communist system in the 1990s. In the last few years, more than twenty new states have been established in central and eastern Europe.
- A state is generally composed of peoples (or communities) from different cultures. However, not every cultural community can establish a state to promote its cultural traditions, so every state must provide for and introduce flexible constitutional or legislative rules that allow these cultural differences to be expressed while safeguarding its unity at the same time.
- In the recent history of Europe, states have been created in three successive stages, namely after each of the two world wars and when the cold war ended. These pivotal stages were either marked by the creation of new states or the establishment of autonomous regions. Examples that illustrate this development are the autonomy granted to the Åland Islands in 1921 under the aegis of the League of Nations; to Alto-Adige / South Tyrol in 1947 under the authority of the UN and to Gagauzia (Moldova) in 1990 or the creation of the Autonomous Republic of Crimea (Ukraine) in 1992.
- Today, it seems that tensions in certain states that have been facing an internal political crisis for many years are being resolved with the aid of autonomy concepts. This appears to be the case in Cyprus or Sri Lanka.
- The Council of Europe, which wishes to contribute to finding peaceful solutions to all disputes, would like to know to what extent the positive experience of the autonomous regions can constitute a source of inspiration for conflict resolution. It may be observed that a number of states have dealt with their problems or are in the process of doing so by setting up territorial or cultural autonomies and that the latter offer a wide variety of principles, measures, ideas and concepts for resolving these issues.
- The purpose of this report is to establish the criteria conducive to the success of autonomy in order to provide guidelines for those who want to resolve internal conflicts by introducing self-government and help them avoid mistakes.
- In the light of the positive experience gained, it will be necessary to determine the factors and conditions that allow autonomy to succeed, to establish the historical, geographical, political, economic, ethnic and cultural aspects to be taken into account in order to define a conceptualized model or to recommend good practices that states facing internal conflicts will be able to draw on.
- In the final section, we shall study the actual application of this experience in crisis regions, such as Kosovo (Serbia and Montenegro), Chechnya (Russia), Abkhazia (Georgia) and Transnistria (Moldova).
II. Development of the concept of autonomy
- The concept of autonomy undeniably has a negative, even threatening, connotation. In order to avoid any misunderstanding. it is important to state that our conception of autonomy does not in any way correspond to the use of the word in the past by authoritarian regimes like the Russian empire, the USSR or Yugoslavia. Our definition corresponds to the way the term is employed in democracies, ie states subject to the rule of law that guarantee specific rights and freedoms to their citizens. Democracy and the exercise of basic freedoms are essential for the success of autonomous entities.
- Autonomy is often seen as a threat to the territorial integrity of a state and the first step towards secession, as might be the case where the Faeroes are concerned. However, it would be wrong to interpret it in this way. Rather, it must be considered as a compromise aimed at ensuring respect for territorial integrity in a state that recognizes the cultural diversity of its population.
- Avoiding any recourse to violence, autonomy allows a minority group within a state to enjoy its rights by preserving its specific cultural traditions while providing the state with guarantees regarding its unity and territorial integrity. It represents an intermediate solution that makes it possible to avoid both the forced assimilation of minority groups and the secession of part of the state territory. Autonomy thus strengthens the integration of the minorities within the state and is a constructive element for the promotion of peace.
13. It is necessary to emphasize the integrative potential of autonomy. Recent examples of its introduction, such as in Spain, Italy, Russia (e.g. the Republic of Tatarstan, Azerbaijan (the Autonomous Republic of Nakhichevan) or Moldova (the special status of Gagauzia), show that, as a system guaranteeing both respects for the cultural diversity of minorities and the preservation of territorial integrity, autonomy can represent a constructive solution to any real or latent conflict.
14. Moreover, as calls for autonomy have become more frequent and are having a greater impact on the international legal order this issue needs to be examined in greater detail.
Portuguese island territories of the Azores and Madeira, which have political and administrative statutes drawn up by the regional legislative assemblies and approved by the Assembly of the Republic.
cultural identity, it incorporates more detailed provisions concerning the right of domicile and the use of Swedish in education. This right of domicile8 entitles the citizen to participate in provincial and municipal elections (including the right to stand as a candidate), engage in commercial activities and acquire real estate. It also gives exemption from military service.
- minorities are dispersed throughout the territory, it is only possible to envisage cultural autonomy. When one group is dominant in a region but is dispersed over other regions, a mixed approach combining political and cultural territorial autonomy may be implemented.
1 I would like to thank the authorities and experts who gave me the benefit of their experience in the course of my research, which began in December 1999 and took me to the Åland Islands, Alto-Adige / South Tyrol, the Faeroes, Copenhagen, à Cagliari (Sardinia), the Azores and Madeira (Portugal), Madrid and Barcelona. I apologize in advance for any errors in, or omissions from, this document and would be grateful for any suggestions.
2 Nordquist Kjell-Åke. The Second Åland Islands Question. Mariehamn 2002, Jansson Salminen (ed).
3 “Autonomy as a Conflict-Solving Mechanism”, in Suksi Markku (ed.), Autonomy, Applications and Implications, Kluwer Law, Dordrecht, Netherlands, 1998.
4 “Ruth Lapidoth, Autonomy: Flexible Solutions to Ethnic Conflicts, United States Institute of Peace Press, 1997.
5 Hannum, Hurst and Lillich, R.B., The concept of autonomy in International law, 1980.
6 Local self-government, territorial integrity and protection of minorities, Lausanne 25-27 April 1996, Proceedings of the European Commission for Democracy through Law, contribution by Asbjorn Eide, Director of the Norwegian Institute of Human Rights, Oslo.
7 I would like to thank Elisabeth Naucler for her help in writing this chapter and for giving me the book Jansson/Salminen (ed), The Second Åland Islands Question, Autonomy or Independence?, Mariehamn, 2002.
8 According to section 7 of the Autonomy Act, the right of domicile in the islands is granted on request to any Finnish citizen who has settled in the province, lived there continuously for at least five years and has a sufficient command of Swedish.
9 There is a group of German-speakers in the province of Bolzano, French speakers in Valle d’Aosta, and Slovenian speakers in the eastern part of Friuli-Venezia Guiliana as well as a small group of Ladin speakers in the provinces of Bolzano and Trento.
10 The Singhalese make up 74% of the population, the Sri Lankan Tamils 12%, the Indian Tamils 5.5% and the Muslims 8%.
11 Buddhistes 70%, Hindus 15.5 %, Muslims 7.5% and Christians 7.7%.
12 Markku Suksi, Mechanisms of Decision-Making in the Creation of States, 1996.
13 Opened for signature on 1 November 1995.
14 Congress of Local and Regional Authorities of Europe: Federalism, Regionalism, Local Autonomy and Minorities. 1996
15 Recommendation 1201 (1993) on an additional protocol on the rights of minorities to the European Convention on Human Rights, Assembly debate on 1 February 1993. See Doc. 6742, a report by the Committee on Legal Affairs and Human Rights, rapporteur: Mr. Worms, and Doc. 6749, opinion of the Legal Affairs Committee, rapporteur: Mr de Puig.
16 Autonomy: Flexible Solutions to Ethnic Conflicts”, United States Institute of Peace Press, 1997.
17 See Jean-Marie Woehrling, Droits locaux comme instrument de renforcement de l’autonomie territoriale et de gestion des spécificités sociales et culturelles propres à certains territoires. CPLRE. CG/GT/CIV (5) 3
19 CDL-AD (2003) 2, Opinion on the draft constitution of the Chechen Republic.
20 Declaration presented at the 13th session of the Conference of European Ministers Responsible for Local and Regional Government, meeting in Helsinki on 27-28 June 2002.
21 SOC: Socialist Group | <urn:uuid:b8c56974-b572-4edd-8575-3790ebfb5357> | CC-MAIN-2022-33 | https://nakkeran.com/index.php/2020/12/03/positive-experiences-of-autonomous-regions-as-a-source-of-inspiration-for-conflict-resolution-in-europe/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00495.warc.gz | en | 0.919242 | 4,461 | 3 | 3 |
What we offer
Our district works to create an environment where children of all learning types succeed. Each campus has a student support team for those struggling behaviorally or academically as well as a team of dedicated professionals for special education.
We provide campus-based Special Education so that our ARD (Admission, Review, and Dismissal) committees can create flexible services and programs designed for the specific children on campus. We encourage our parents to be a part of the process by participating in ARD meetings, asking questions and working closely with their child's teacher and school.
Please contact your child's school if you have a concern.
Special Education Referral/Child Find
Special Needs safety decals
The Rowlett Police Department is working with our district to help keep students safe. They have provided special stickers for families to use on their homes and vehicles to help police officers during encounters with community members who are non-verbal, have a mental, emotional or intellectual disability, or have a different special need.
Areas of disability
Auditory impairment (hearing)
Auditory impairment (hearing) means an impairment in hearing, whether permanent or fluctuating, that adversely affects a child’s educational performance but that is not included under the definition of deafness in this section.
Deaf (hearing)
Deaf (hearing) means a hearing impairment that is so severe that the child is impaired in processing linguistic information through hearing, with or without amplification, that adversely affects a child's educational performance.
Autism means a developmental disability significantly affecting verbal and nonverbal communication and social interaction, generally evident before age three, which adversely affects a child’s educational performance. Other characteristics often associated with autism are engagement in repetitive activities and stereotyped movements, resistance to environmental change or change in daily routines, and unusual responses to sensory experiences.
Deaf-Blindness means concomitant hearing and visual impairments, the combination of which causes such severe communication and other developmental and educational needs that they cannot be accommodated in special education programs solely for children with deafness or children or children with blindness.
Emotional disturbance means a condition exhibiting one or more of the following characteristics over a long period of time and to a marked degree that adversely affects a child’s educational performance.
- An inability to learn that cannot be explained by intellectual, sensory, or health factors.
- An inability to build or maintain satisfactory interpersonal relationships with peers and teachers.
- Inappropriate types of behavior or feelings under normal circumstances.
- A general pervasive mood of unhappiness or depression.
- A tendency to develop physical symptoms or fears associated with personal or school problems.
Intellectual disability means significantly subaverage general intellectual functioning, existing concurrently with deficits in adaptive behavior and manifested during the developmental period, that adversely affects a child’s educational performance.
Multiple disabilities means concomitant impairments (such as mental retardation-blindness, mental retardation-orthopedic impairment, etc.), the combination of which causes such severe educational needs that they cannot be accommodated in special education programs solely for one of the impairments. Multiple disabilities do not include deaf-blindness.
Orthopedic Impairment means a severe orthopedic impairment that adversely affects a child's educational performance. The term includes impairments caused by a congenital anomaly, impairments caused by disease (e.g.; poliomyelitis, bone tuberculosis, etc.), and impairments from other causes (e.g.; cerebral palsy, amputations, and fractures or burns that cause contractures).
Other health impairment
Other health impairment means having limited strength, vitality or alertness, including a heightened alertness to environmental stimuli, that results in limited alertness with respect to the educational environment, that - -
- Is due to chronic or acute health problems such as asthma, attention deficit disorder or attention deficit hyperactivity disorder, diabetes, epilepsy, a heart condition, hemophilia lead poisoning, leukemia, nephritis, rheumatic fever, and sickle cell anemia, and
- Adversely affects a child’s educational performance.
Specific learning disabilities
Specific learning disabilities means a disorder in one or more of the basic psychological processes involved in understanding or in using language, spoken or written, that may manifest itself in the imperfect ability to listen, think, speak, read, write, spell, or to do mathematical calculations, including conditions such as perceptual disabilities, brain injury, minimal brain dysfunction, dyslexia, and developmental aphasia.
Speech or language impairment
Speech or language impairment means a communication disorder, such as stuttering, impaired articulation, language impairment, or a voice impairment, that adversely affects a child's educational performance.
Traumatic brain injury
Traumatic brain injury means an acquired injury to the brain caused by an external physical force, resulting in total or partial functional disability or psychosocial impairment, or both, that adversely affects a child's educational performance. Traumatic brain injury applies to open or closed head injuries resulting in impairments in one or more areas, such as cognition; language; memory; attention; abstract thinking judgment; problem-solving; sensory, perceptual, and motor abilities; psychosocial behavior; physical functions; information processing; and speech. Traumatic brain injury does not apply to brain injuries that are congenital or degenerative, or to brain injuries induced by birth trauma.
Visual Impairment including blindness means an impairment in vision that, even with correction, adversely affects a child's educational performance. The term includes both partial sight and blindness.
Additional information and resources for parents can be located through the Center for Parent Information and Resources (CIPR) website.
Special Education programsExpand All
Regular classroom with accommodations/modifications
Instructional and curricular accommodations/modifications recommended by the ARD committee are implemented in the general education classroom. This enables the student to be involved and progress in the general curriculum to the maximum extent possible.
Speech and language therapy is available to all students, ages three through twenty-one years enrolled in Garland ISD that meet district eligibility criteria as speech impaired. Through evaluation and intervention, the speech-language pathologist helps students with communication disorders in the areas of articulation, language, voice, and fluency.
Content Mastery program (CM) – secondary only
Content Mastery is designed to support special education students in the general education classroom accessing the general curriculum. The student receives direct instruction in the general education setting. During the independent practice of the lesson cycle, the student may attend the CM center if additional help and support are needed.
Inclusion teacher (Campus Level)
Inclusion teacher support is an excellent model for assisting students receiving special education services while fully accessing enrolled grade-level curriculum. It is a supplementary aid and service provided by a special education teacher or paraprofessional to support special education students in the general education classroom. The inclusion teacher may provide direct instruction, re-teaching, modifications, collaboration, or assist in other ways that provide support to special education students in collaboration with the general education teacher. The general education teacher is the teacher of record. The special education teacher will:
- Go into the general education class to provide support to the students
- Plan the schedule to meet the needs of the students
- Provide modifications as needed
The traditional resource classroom is for special education students who are struggling substantially more than struggling grade-level peers and are functioning significantly below grade level. The special education teacher provides direct, simplified/modified grade-level instruction in the core academic areas. The special education teacher is responsible for planning instruction according to the student's IEP.
Behavior Adjustment program (BA)
The Behavior Adjustment program is designed to serve eligible students whose behavior consistently interferes with their educational performance. This program provides a structured learning environment, a social skills curriculum and instruction, and regular contact with parents. The goal of the program is the successful return of each student to the general classroom.
Behavior and Academic Support Environment Program (BASE)
The Behavior and Academic Support Environment program is designed to address student’s behavioral and academic issues on secondary campuses by providing support within the general education classroom environment. This program also provides direct instruction in appropriate social skills and study skills. The goal of the program is to provide students with the support and encouragement they need to actively participate and remain in the general education classroom.
Adaptive Behavior and Communications Program (ABC)
The Adaptive Behavior and Communication programs are designed to meet the needs of students diagnosed with autism spectrum disorders, TBI, or other neurological impairments whose behavior consistently interferes with their educational performance. The focus is on the development of age-appropriate social skills, coping, and communication skills, and academics. The goal of the program is the successful return to the general education classroom to the greatest extent possible.
Applied Learning Environment (A.L.E)
In A.L.E. classes, practical application of academic skills is taught to maximize achievement at school, at home, and in the community. The A.L.E. program provides an environment that allows for learning that is individualized and appropriate to each student's developmental and functional level.
Moving Toward Independence (MTI)
MTI is a specialized class under the A.L.E program. It is designed to support students who require additional assistance in the areas of self-help and daily living skills.
A.L.E is a specialized class under the A.L.E program. It is designed to provide additional behavioral supports beyond those typically found in the A.L.E classroom.
Vocational Adjustment Class (12th grade plus)
The vocational adjustment class provides special education support to students who are placed on a job with regularly scheduled direct involvement by special education personnel in the implementation of the student's IEP. Students may also receive classroom instruction in job readiness and independent living skills in addition to general academic work. This program shall be used in conjunction with the student's transition service needs and only after Career and Technical Education (CTE) classes have been considered and determined inappropriate for the student.
Transition Learning Center (TLC) (12th grade plus)
The GISD TLC is a special education program designed to support inclusive practices, age appropriate settings, community integration activities, and opportunities for competitive/supported employment. Students entering the TLC will have completed the district's minimum credit requirements for graduation but the ARD/IEP Committee has determined a continued need for special education services leading to competitive/supported employment. The student's parents must be willing to support employment initiatives, non-paid internships, volunteerism and marketable skills training. Preparing for competitive and supported employment will be the primary focus. The ultimate objective is for students to establish daily independent living routines that will be continued after leaving the TLC.
Meeting And Catering Service (MACS) (12th grade plus)
Meeting and Catering Services (MACS) is a GISD special education community-based vocational education training experience for high school students with significant disabilities. This program gives students the opportunity to learn a wide range of job skills in an actual work environment. A job coach facilitates the hands on job training that includes refreshment set ups of conference rooms, operation of a small store selling convenience items, and a mobile beverage and snack service. The campus VAC, in collaboration with other campus special education staff, makes referrals to MACS.
Project SEARCH (12th grade plus)
The Project SEARCH High School Transition Program is an employer-based intervention for high school students with significant disabilities whose main goal is competitive employment. The program combines real-life work experience with training in employability and independent living skills. Individualized placement assistance is provided as an integral part of the program. The hallmark of this demand-side model is complete immersion in the workplace. The program also demonstrates a novel collaborative approach that brings the education system, employers, and rehabilitation services together in unique ways to create a productive and comprehensive transition experience for students.
See a video of the Project SEARCH program in action.
Homebound instruction is a service that is considered to be highly restrictive. Students on homebound are unable to interact with peers and may require a reduced curriculum. Homebound instruction may be extended to students who are eligible for special education instruction and, due to a medical condition, must be confined to their home for a minimum period of four weeks. In order to assist the ARD Committee in determining eligibility for the identified student for special education homebound services, information will be requested from the student’s attending physician.
Early Childhood Intervention (ECI)
The Early Childhood Intervention (ECI) program provides services to children from birth through 36 months in a variety of settings. Eligibility is based on medical diagnosis, atypical development or developmental delays. The program addresses all areas of functioning, including physical, emotional and cognitive development. ECI services for families who live within the boundaries of the Garland Independent School District will be provided by the Warren Center.
Early Childhood Special Education (ECSE)
Three, four, and five-year-old children with disabilities can participate in this program. The ECSE program offers a continuum of services ranging from speech services only to a full-day program based on the needs of the child. ECSE classes are located on campuses throughout the district. These special preschoolers receive instruction in the developmental areas of preacademics, communication, motor, self-help and social/emotional.
Kindergarten Adaptive Behavior and Communication (ABC/KN)
This class is for students with autism or other neurological disorders who are five or six years old and functioning academically at the kindergarten level. The Students attend the ABC/KN class for social skills development as well as individualized instruction on specific objectives. For the remainder of the day, the students are included with the general education kindergarten class.
Related services, training and support
Related and support services are available for those students who meet special education eligibility requirements. These services may be required to assist a child with a disability to benefit from special education.
If the need for a related service is suspected, the evaluation must be planned in an ARD. Related services include transportation and such developmental, corrective, and other support services as are required to assist a child with a disability to benefit from special education.
When the ARD/IEP committee needs to determine if a student needs a specialized communication device to benefit from the special education services, the ARD/IEP committee can request an evaluation from the Assistive Technology Team. The communication device may be needed to support oral or written communication and may include instructional software.
Assistive technology as a related service is when a member of the Assistive Technology Team (A-Team) integrates objectives into existing goals/objectives and an ARD/IEP committee agrees to provide direct services by an Assistive Technology Team member.
Assistive technology as a supplementary aide and/or service is when a student has been determined to be in need of some type of assistive technology based upon the assessment. This recommended technology can be made by any number of sources such as the diagnostician, vision teacher, teacher, speech-language pathologist, OT/PT, deaf educator, or Assistive Technology Team member. The student uses the technology without direct services by the provider although periodic consultative services may be recommended.
Audiology services available in Garland ISD include conducting comprehensive diagnostic audiological evaluations, identifying hearing loss through the district-wide state-mandated hearing screenings on each GISD campus, and making appropriate medical, educational, and community referrals for our hearing-impaired students.
GISD audiologists assist in program placement and recommendations for hearing impaired students as a member of the educational team. They recommend amplification devices such as personal hearing aids, providing and monitoring of FM listening systems and other assistive listening devices. The audiologists are responsible for training on hearing conservation and hearing impairments to school personnel, students, and parents.
In-Home or community training assists students with the generalization of skills to the home and/or community settings. Initially, the Home Trainer will be primarily responsible for the implementation of the generalization activities. As generalization occurs, training should shift from the trainer to the parent for the maintenance of target skills/behaviors. Focus for In-Home Training is on the needs of the child.
See our upcoming training sessions on our parent workshops calendar.
Occupational therapy/Physical therapy
Educationally based occupational and/or physical therapy is provided, as a related service, to enhance the special education student's ability to adapt to and physically function within an educational environment.
The role of the occupational and/or physical therapist is to facilitate a student's functioning in the school setting. The goal of educationally relevant therapy is to minimize the effects of the student's disability on his or her ability to participate in the educational process.
The OT/PT therapist observes the student's functional skills and offers compensatory strategies to promote functional independence within the individualized educational program (IEP). In the school setting, educational objectives hold a primary position while therapy objectives are considered secondary and are undertaken to support the educational objectives. Services are generally consultative in nature with the implementation of the therapist's recommendations by the teacher, assistant, or parent.
OT and/or PT services will be provided in the least restrictive environment (LRE), which generally means the classroom. By providing services in the classroom the therapist offers strategies needed for the student's daily activities with active teacher/assistant involvement. These strategies may include handling techniques, classroom modifications, and/or adaptive equipment.
The primary functions of the Licensed Specialist in School Psychology (LSSP) include conducting comprehensive psychological assessments of students referred for special education services; participating in the development of IEPs; consulting with teachers and parents; and staff training in managing students with special needs and students with learning and behavioral difficulties.
Transporation for Special Education students is available for students who need this as a related service according to the ARD. In order to receive transportation as a related service, the ARD/IEP committee shall document eligibility and need. Eligibility for special transportation must be re-established at every annual ARD and each time a student changes residence or campus.
In compliance with federal regulations, GISD ensures that each educational placement of a student with disabilities is on a campus as close as possible to the student's home with available space in the appropriate program.
If the parent of a transportation eligible student decides to enroll the student on a campus other than the placement campus designated by the ARD/IEP committee, the parent may waive transportation as a related service and provide their own transportation to and from school for their child, if there is available space.
In-Home/Community Based Training helps students with the generalization of skills to the home, school and community settings. The focus for In-Home Training is on the needs of the child.
The purpose of parent support is to provide parents with the necessary skills and techniques to assist their child with ongoing development and maintenance of skills and behaviors.
Parent Education and Training workshops
Monthly parent education and training workshops are offered to support families of all ages with special needs on a variety of topics including behavior, communication, self-help skills, social skills and much more.
See the parent workshop calendar for upcoming sessions.
Find help in your area
We've collected a list of various resources for mental health, crisis, shelter, food and more in our community. See our Wellness and Support Resources page to see what's available in your area. | <urn:uuid:1a535af4-94bf-4925-9732-525b2abf881d> | CC-MAIN-2022-33 | https://garlandisd.net/programs-services/special-education | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00296.warc.gz | en | 0.941355 | 4,150 | 3.453125 | 3 |
The pandemic has changed our lives in many ways. But, perhaps primarily, it has forced many organizations to reevaluate their current understandings of caregiving and how they can better support team members who are caregivers.
This resource carefully examines who a caregiver is and what caregiving means to different people so organizations can offer more fulsome policies, processes, and programs for caregiving that go beyond existing or traditional understandings.
Indeed, as work and home lives continue to confront each other, how organizations respond to the diversity of care experiences will dictate their organization’s success. This resource encourages organizations to be as inclusive, intentional, and exceptional as they can for caregivers.
A study by McKinsey & Company indicates that 1 in 4 women considered leaving their job in the public sphere during the pandemic or downshifting their careers, compared to 1 in 5 men. These numbers are higher for working mothers, women in senior management positions, and Black women. In the United States (U.S.), it is estimated that 3 million women left their job due to the coronavirus pandemic.
Gender norms that establish women as responsible for domestic labour are one reason many women leave their jobs in the public sphere. Childcare facilities closing, the need to shelter-in-place, and increased household duties have created more responsibilities that society expects women to take on. Health and safety concerns have also driven women out of their jobs, particularly those in lower-wage and frontline work.
The studies we often reference centre on traditional understandings of caregiving grounded in heterosexual, often heteronormative, nuclear family models. Moving beyond conventional understandings of caregiving means supporting women as mothers while recognizing that caregivers are also daughters and sons, men and fathers, aunts and uncles, siblings and niblings, people who are gender non-conforming or non-binary, partners, spouses, and family (chosen and otherwise).
Caregiving is time-consuming and physically and emotionally taxing, no matter who you are, but it becomes even more difficult when this labour goes unrecognized and unsupported. If we want caregiving to be shared equitably within communities and free from added work-related stress, we need to open our minds to all of the ways one can be a caregiver and build support for these team members into our organizational policies, processes, and programs. In other words, we believe an intersectional approach to understanding, supporting, and facilitating care at work should preface caregivers’ return to the working world.
As a starting point, organizations need to understand the many ways that people provide care. Here are some examples:
It is also essential to include and recognize people who are in the process of becoming caregivers or participating in someone’s caregiving journey. For example, this may include a gestational surrogate. This grey area of caregiving often creates gaps in caregiving-related policies. Consider expanding the understanding of caregivers to include a full spectrum of care experiences, including:
Finally, remember that people provide different types of care, such as financial, emotional, or physical. They might provide one or all three! Each type of care will require specific workplace supports.
Once we have a sense of the many ways team members can be caregivers, we can then reflect on the best ways to support them. Here are some suggestions for designing caregiving-related policies, processes, or programs.
Using inclusive language and considering various caregiving duties when designing any policy, process, or program is essential. In general, we opt for a "both...and" approach. For example, ”wife, husband, and spouse, partner, or significant other” acknowledge all individuals across the gender spectrum. This also extends to pronouns. When using pronouns, consider always using all three - “he, she, and they.” If this gets too cumbersome, exclusively use “they, them.” Consider the following guide to gender-inclusive language for family members:
Including an expansive definition of family is essential. Family structures beyond the nuclear family are ordinary; we mustn’t erase these experiences and identities. Unfortunately, many caregiving efforts only include definitions of family that fit within the nuclear model. It is essential to acknowledge that the nuclear family model is rooted in colonialism, and rejecting it is part of decolonial work. Therefore, when designing anything related to caregiving, be specific about the definition of family to include:
Note: We organized the lists above alphabetically to avoid reinforcing a familial relations hierarchy. When creating lists of identities, our instinct is first to list dominant identities, which can reinforce particular identities as the most important or valued.
To make inclusive caregiving a reality, we suggest challenging dominant notions of gender in any policy, process, or program. Since gender is such a pervasive belief within Western society and culture, challenging it takes intention. Too often, we assume that care and domestic work are the responsibility of women and girls and forego structures that share such work equitably. Thoughtful caregiving efforts acknowledge that gendered norms operate in our everyday lives at home and work and affect many team members. Challenging gender stereotypes also means not assuming someone’s caregiver status and dispelling the “mommy-track myth,” which assumes that women with children are less productive.
Inclusive caregiving also means challenging racialized stereotypes. For example, Black families are often pathologized and represented as “broken.” Indigenous, racialized, and LGBTQIA2+ families are also pathologized. We sometimes call this the “single parent/caregiver” trope, suggesting that families outside the normative “nuclear” family structure harm children. When, in fact, parents may have consciously uncoupled in a fully informed manner or simply follow non-normative kinship models. Families also sometimes do not marry or live together for different cultural or financial reasons.
The narrative and myth that the nuclear family structure is better or “healthier” is pervasive in our society and works to oppress anyone who does not follow this model actively. However, some studies suggest children of parents who do not live together are not worse off than those who live together. If managers and team members hold these harmful beliefs, they may treat their fellow team members differently concerning caregiving support. For example, a manager might deny a leave request or fellow team members may be less understanding about flexibility. It’s always a good idea to be aware of harmful biases and stereotypes that discriminate against non-normative families when designing anything related to childcare.
A supportive culture means an openness and willingness to work with caregivers to offer them support where and when they need it. You may foster this through manager and supervisor check-ins, return to work buddy support systems, and implementing a caregiver employee resource group (ERG). Human Resources (HR) or People & Culture (P&C) can also collect data on caregiver experience. Mobilizing data and feedback mechanisms can support the development of any policy, process, or program related to caregiving.
Another important aspect will be managing fellow team members' expectations of caregiver needs. This could mean developing educational materials and programs that help team members act and behave inclusively toward caregivers and avoid perpetuating harmful myths about team members who need time off or are unavailable because of caregiving responsibilities. For caregivers who may have been on leave for an extended period (more than two years), consider offering more structured re-training as part of their return-to-work plan.
A supportive culture can also resemble a psychologically safer culture where caregivers can share their experiences, ideas, opinions, and beliefs around organizational policies without fear of reprisals, intimidation, and shame. This could mean an open dialogue about caregiver experience and options for feedback (either anonymous or otherwise).
Caregiving can be difficult at the best of times. When we add significant social and political moments of crisis, caregiving can become even more emotionally and physically taxing. For example, witnessing extreme violence against your community or identity group at either your local or global level causes severe distress in caregivers. This makes the daily tasks and requirements of caregiving more challenging to manage. For example, the recent acts of gun violence against school-age children in the U.S. have caused fear and anxiety for many parents and caregivers of children. This fear can affect how they go about their daily activities, including work.
Support for caregivers in times of crisis is often the most successful when backed up by an organizational culture that is already supportive. This can mean flexible schedules, psychological safety where team members can share that they may need to push back deadlines or take a half day, or even publishing organizational statements and commitments of support for the communities affected by the tragedy or crisis.
It is essential to recognize the cultural diversity that exists within families. Caregivers can hold different cultural beliefs and have different needs concerning caregiving. Therefore, we suggest you encourage team members to practice caregiving cultural awareness and inclusion when developing any policy, process, or program. For example, consider offering mental health benefits to caregivers representing a culturally diverse group of mental health professionals. This can also mean offering benefits to extended family for those team members who care for people outside of the nuclear family.
One way to support this awareness is by including culturally diverse team members in any policy, process, or program development process. This can also mean crafting your hiring and recruitment policies to include cultural awareness training for any hiring personnel. For example, if a potential candidate shares they are a single-parent, hiring managers know how to think critically about the harmful basis they may hold around single-parenting and how it intersects with race, ethnicity, gender, disability, etc.
While the pandemic may have revealed the depth of our society’s care deficit across the board, the childcare crisis, in particular, predates 2020. A combination of childcare deserts, the high cost of care, and the lack of quality child care contribute to massive barriers for all parents. Moreover, parents of children with disabilities and/or severe allergies can face additional access barriers to daycare because their children may require specialized care or allergy-safe environments, all of which come at a financial premium and are in short supply.
While the cost of daycare varies globally, the monthly cost of daycare in Toronto, Ontario, Canada, is $1800/month. However, the province of Ontario is working to decrease this number as part of the federally backed $10-a-day childcare deal. Yet even in regions where local governments heavily subsidize licensed childcare, many parents and caregivers pay hefty fees because the demand for licensed spaces outstrips the supply. Waiting lists for a licensed childcare space can be years. In the meantime, parents pay a premium for unlicensed care. In response, some companies have started offering daycare benefits and benefits for care for people of advanced age or on-site childcare as part of their wellness benefits packages to help offset the financial strain associated with care.
Organizations might also consider how care benefits and policies include care for people of advanced age. During the pandemic, many families chose to move family members of advanced age into their homes rather than have them stay in nursing or retirement residences. This contributed to a dramatic shift in how, where, and who is caring for aging loved ones. We encourage organizations to continue to work with their team members to understand and support their changing caregiving needs by offering benefits for people of advanced age, like in-home care options or a home accessibility budget. A home accessibility budget could include benefits for in-home renovations to support accessibility needs.
Access to robust mental health benefits is critical for all team members. We know that the pandemic has exacerbated mental health challenges, and caregivers are no exception. Before the pandemic, about 1 in 7 birthing people would experience postpartum depression or anxiety. Unfortunately, this number has risen to 1 in 3. Therefore, a full suite of mental health benefits is a key to supporting caregivers before, during, and after they return to work.
Caregiving can be physically taxing. Recovering from care-related injuries, pumping and/or breast/chestfeeding, and sleep deprivation can add challenges for people returning to work. Consider including access to physiotherapy (such as pelvic floor physiotherapy), registered lactation specialists, birth and death doulas, and infant sleep experts in wellness and medical benefits.
Although Canada's government offers employment insurance (EI) during maternity and parental leave as either a standard or extended option (12 and 18 months, respectively), the dollar amount is prohibitively low, amounting to the average rent for a one-bedroom in Toronto. This means that for most people, parental leave is not financially viable. In Quebec, the government offers parents various parental leave benefits ranging from 55-70% of their pay (with no dollar maximums). Although this is considerably better, there is still room for discrimination and plenty of opportunities for employers to offer top-ups to cover the difference in pay.
Organizations that offer parental leave top-up have much higher morale, retention, and productivity within their teams. However, more than the “business case” for top-ups is the social justice case, as top-ups are a tool for advancing gender equality. This can also mean that while continuing to be a leader in parental leave top-up benefits, the organization can get involved in movements that fight for more governmental support for parental and pregnancy leave.
Globally, only 36 countries offer government-mandated paid parental leave, which is available to both parents. More than 120 countries provide some form of paid maternity/pregnancy leave. Still, this type of leave is typically only available to the pregnant person before and/or immediately following the birth or, in some cases, adoption. Currently, the U.S. does not offer any form of government-protected paid family leave; federal law guarantees new parents just six weeks of unpaid leave, and some workers do not qualify.
In addition to maternity and parental leave, the Canadian Government also offers EI benefits for family caregivers who cannot work because they are caring for a critically ill or injured adult or child. They also offer compassionate care benefits for caregivers providing end-of-life care. Like maternity and parental leave benefits, caregivers are only entitled to 55% of their earnings up to a maximum of $638/week. After taxes and deductions, the maximum allowable amount for caregiver benefits is about $2,100/month. For context, in March of 2022, an average one-bedroom rental in Toronto costed $2,044/month. Offering a caregiving leave top-up is an inclusive way to support caregivers beyond parental leave. Although few organizations currently provide a caregiver leave top-up, we encourage organizations who have the means to offer it to do so.
Caregiving can sometimes be an unpredictable job. Offering flexible work hours can allow caregivers to figure out a schedule that works for them. At the same time, other caregivers may benefit from a predictable schedule (with flex time) to stay organized and avoid feeling overwhelmed. Understanding rescheduling, being more task-oriented, adjusting performance review criteria, and reconsidering policies that reward in-office work can help shift away from the traditional 9-5 and allow caregivers to work in a way that best suits them. Flexible scheduling also means offering options for when and how people return to work—for example, part-time to start, job sharing, compressed work weeks, or work from home hybrid options. For non-office workers, this means direct communication about what shifts might work best for them and where flexibility is possible.
Policies that affect caregivers can range from a parental and pregnancy leave policy, sick leave policy, medical leave policy, bereavement leave policy, holiday or time off policy, and even an absentee policy. Often, these policies indirectly punish or discriminate against caregivers. Therefore, it is a good idea to review these policies and ask how they might indirectly affect a team member who is a caregiver. We suggest expanding definitions of a family to consider who may be impacted by these policies. Including a discretionary leave allotment into these policies is another excellent option. It gives team members the power to take time off without restriction in ways that work best for themselves and their families.
Access to extended paid caregiving leave, like parental leave, is necessary but can also contribute to widening promotional and pay gaps for caregivers, especially women. While on leave, make caregivers aware of upcoming promotional and advancement opportunities. Moreover, give them a choice as to what level of communication they are comfortable with while on leave. A clear communication plan is a great tool to have in place to support a caregiver who is on leave.
If your team works in-office or in a hybrid model, be aware of proximity bias as team members compete for promotions, as caregivers are more likely to choose remote work. Create a robust set of guidelines for hiring and promoting team members that work to avoid proximity bias and reproducing harmful stereotypes around caregiving and caregivers.
Caregiving requires enormous mental, emotional, physical, and financial resources, making participating in traditional professional development opportunities challenging. Therefore, a thoughtful return to work policy, for example, might be future-oriented and consider how caregivers can continue to develop professionally at work. This means:
When at work, caregivers may need any number of material supports, from private phone lines, private rooms for pumping, chest/breastfeeding, and/or taking personal calls, daycare/care for people of advanced age, and access to nourishing meals. This can also include connecting caregiving team members with organizations or financial support outside the office that might offer them support. Another option for this is on-site childcare or a pet-friendly office.
It is often considered “normal” to celebrate family formations by collecting gifts and offering congratulations to new parents. In addition, organizations often share organization-wide birth and/or adoption announcements by leadership and/or collect gifts from fellow team members. While there is nothing wrong with celebrating these significant changes in your team members’ lives, adding a content warning (CW) to organization-wide birth announcements and calls for gifts is considerate of people who have suffered a miscarriage, infant loss, and infertility.
It is also wise to include organization-wide announcements that acknowledge life milestones outside the normative frame, which often follows the marriage-pregnancy path. For example, gender-affirming surgery, adoption, sobriety, getting a new pet, or graduate school completion, to name a few, are rarely shared in workplaces as life milestones. Moreover, it’s important to consider that not everyone wants their life milestones shared at work. Keeping track of what milestones are important to everyone and who wants recognition is vital to ensuring we treat all team members respectfully. You may accomplish this through an organization-wide portal where team members can share what life events they want to be made public with their corresponding dates and details.
Finally, ensure that you treat all life events equally. This means the announcements (in frequency and type), gifts, events, and communications are the same regardless of the milestone.
This resource has provided suggestions and tips to help organizations draft a care-centred culture. Moving beyond traditional understandings of caregiving means moving towards care-centred approaches to work and teams. Once we adopt a more expansive understanding of caregiving, we can better ensure that our policies, processes, and programs value, support, and normalize all types of caregivers. The labour of caregiving is vital, taxing, and different for everyone. It is labour worthy of our consideration and support, and we urge everyone to reflect this in their workplaces.
This resource reflects a particular moment in time, North America in 2022, and like most things in life, will eventually need updates. Everything changes - from technologies and innovations to social norms, cultures, and languages. As such, this resource is not meant to be a static guide, but rather a compilation and reflection of our learnings to date.
Please feel free to reach out to us at firstname.lastname@example.org if you have any thoughts, questions, or comments.
Consultant & Facilitator
If you wish to reference this work, please use the following citation:
Feminuity. James, Y. and Marino, E. (2022). "How to Make Caring For Caregivers The New Normal: A Guide for Organizations" | <urn:uuid:d862877f-dfda-4202-927d-ce1d42bce0b4> | CC-MAIN-2022-33 | https://www.feminuity.org/resources/caregivers-guide-caregiving-workplace-childcare-benefits-flexible-parents | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00496.warc.gz | en | 0.949336 | 4,210 | 2.546875 | 3 |
Introduction: The 2001 Recommendations for clinical care guidelines on the management of otitis media in Aboriginal and Torres Strait Islander populations were revised in 2010. This 2020 update by the Centre of Research Excellence in Ear and Hearing Health of Aboriginal and Torres Strait Islander Children used the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach for the first time.
Main recommendations: We performed systematic reviews of evidence across prevention, diagnosis, prognosis and management. We report ten algorithms to guide diagnosis and clinical management of all forms of otitis media. The guidelines include 14 prevention and 37 treatment strategies addressing 191 questions.
Changes in management as a result of the guidelines:
- A GRADE approach is used.
- Targeted recommendations for both high and low risk children.
- New tympanostomy tube otorrhoea section.
- New Priority 5 for health services: annual and catch‐up ear health checks for at‐risk children.
- Antibiotics are strongly recommended for persistent otitis media with effusion in high risk children.
- Azithromycin is strongly recommended for acute otitis media where adherence is difficult or there is no access to refrigeration.
- Concurrent audiology and surgical referrals are recommended where delays are likely.
- Surgical referral is recommended for chronic suppurative otitis media at the time of diagnosis.
- The use of autoinflation devices is recommended for some children with persistent otitis media with effusion.
- Definitions for mild (21–30 dB) and moderate (> 30 dB) hearing impairment have been updated.
- New “OMapp” enables free, fast access to the guidelines, plus images, animations, and multiple Aboriginal and Torres Strait Islander language audio translations to aid communication with families.
In remote communities across the Northern Territory of Australia, only one in ten Aboriginal children younger than 3 years has healthy ears; five have otitis media (OM) with effusion (OME), or “glue ear”; and four have suppurative OM — acute OM (AOM) with or without perforation or chronic suppurative OM (CSOM).1,2,3 Remote communities rely on fly in‐fly out specialist services and a high turnover of resident primary health care professionals.4 Aboriginal and Torres Strait Islander children in rural and urban areas across Australia are also at increased risk of chronic OM, although the true prevalence of OM and associated conductive hearing loss is not known. All forms of OM cause conductive hearing loss, which is associated with language delay, speech problems, high vulnerability on entering school, social isolation, poor school attendance, and low education and employment opportunities.5,6,7,8
This 2020 update of the 2010 Recommendations for clinical care guidelines on the management of OM in Aboriginal and Torres Strait Islander populations9 followed the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach. The 2020 OM guidelines provide recommendations for prevention, diagnosis and management of all forms of OM, including episodic OME, persistent OME, AOM without perforation, AOM with perforation, dry perforation, and CSOM, plus tympanostomy tube otorrhoea (TTO).
The overall objective of the 2020 OM guidelines is to prevent OM and improve detection and management of OM and associated hearing loss in Aboriginal and Torres Strait Islander children across Australia.
The Technical Advisory Group included representatives and members of the Royal Australasian College of Surgeons, the Royal Australian College of General Practitioners, and the Royal Australasian College of Physicians (adult and paediatric divisions), as well as trainees of these colleges, Aboriginal health practitioners, audiologists, scientists, researchers and consumers.
The methods for search strategies, grading recommendations, determination of quality and confidence, and generation of summary of findings tables10 used Cochrane Reviews, RevMan 5 (Cochrane Collaboration) and GRADEpro software as described in detail elsewhere (https://gradepro.org/).
The primary objectives were:
- to review all the available evidence to March 2017, using the GRADE approach;11
- to include pathways for children at low or high risk (Box 1) of treatment failure; and
- to transform the 2020 guidelines into a user‐friendly multiplatform application (app), “OMapp”, with multiple audiovisual features targeting multiple stakeholders (scientists, doctors, specialists, health workers, nurses, researchers, policy makers, children and parents).
The Commonwealth of Australia granted the Menzies School of Health Research licence to update the 2010 Recommendations for clinical care guidelines on the management of otitis media in Aboriginal and Torres Islander populations.
We generated 51 summary of findings tables10 (14 prevention and 37 treatment strategies) based on 191 PICOT (population, intervention, comparison, outcome, time) questions. We made 102 recommendations: 27 strong, 20 weak, and 55 by consensus. Recommendations were graded according to the quality and confidence of the evidence. Overall, the quality of evidence was usually low and the confidence or strength of the recommendations was usually weak. Randomised controlled trials were often ranked down, and few observational studies were ranked up. Effect sizes were rarely above 2 or below 0.5. Many studies had non‐significant outcomes for which an effect size could not be determined. Few studies reported adverse events, but when reported, they were rare. Adverse events, confidence, effect size and intervention complexity determined the overall benefit statements (large, moderate or small benefit).
The full OM guidelines are available at https://otitismediaguidelines.com, where the OMapp can also be downloaded.
Primary prevention strategies
Early, accurate detection and appropriate treatment of OM can prevent associated hearing loss, language delay, developmental problems and educational disadvantage. There are several OM preventive strategies, including breastfeeding for at least 6 months,12 frequent handwashing for children attending day care centres,13 and avoiding smoke exposure.14 All are strongly recommended for their broad health benefits; however, most of the supporting studies were observational, with very low quality of evidence. Vitamin D15 and probiotic supplementation (Lactobacillus rhamnosus GG),16,17,18,19,20 xylitol,21 and limited pacifier use are weakly recommended (quality of evidence: low to moderate). Pneumococcal conjugate22 and influenza23 vaccination are strongly recommended for the prevention of invasive pneumococcal disease and influenza, as the available evidence is very strong. However, the effect size for preventing OM is very small. Keeping the child away from sick children and those with a runny nose, especially at day care centres, is a consensus recommendation.13 There is no role for zinc in OM prevention.24
Diagnosis and management of otitis media
Ten algorithms are provided, including one diagnostic and seven management algorithms. There is one algorithm for each type of OM, an additional new algorithm for the management of TTO, and an algorithm for the management of hearing impairment.
Algorithm 1: diagnosis of otitis media. We recommend otoscopy with tympanometry or pneumatic otoscopy to diagnose middle ear disease. Algorithm 1 presents a clear guide to diagnosis based on the answer to simple stepwise questions related to visualisation and mobility of the child’s eardrum (Box 1).
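Algorithm 1 itself appears as a flow chart in the full guidelines (Box 1). As a purely illustrative aid, the sketch below shows how a stepwise decision of this kind could be encoded. The input fields, branch order and the 6-week otorrhoea threshold are assumptions for the sketch and are not taken from the published algorithm (only the 3-month criterion for persistent OME appears later in this article); the algorithm in the guidelines remains the clinical reference.

```python
from dataclasses import dataclass

@dataclass
class EarFindings:
    """Simplified otoscopy/tympanometry findings for one ear (illustrative only)."""
    eardrum_visible: bool    # canal clear enough to see the tympanic membrane
    perforation: bool        # perforation (or tympanostomy tube) present
    discharge: bool          # otorrhoea in the canal or through the perforation
    discharge_weeks: float   # duration of any discharge
    bulging: bool            # bulging drum suggesting acute infection
    immobile: bool           # no movement on pneumatic otoscopy / type B tympanogram
    effusion_months: float   # documented duration of any effusion

# Placeholder only: the duration separating AOM with perforation from CSOM is an
# assumption for this sketch, not a value taken from Algorithm 1.
CSOM_DISCHARGE_WEEKS = 6

def classify_middle_ear(f: EarFindings) -> str:
    """Map simplified findings to the OM categories named in the guidelines."""
    if not f.eardrum_visible:
        return "cannot assess: clear the canal, review or refer"
    if f.perforation:
        if f.discharge:
            return ("chronic suppurative OM (CSOM)"
                    if f.discharge_weeks >= CSOM_DISCHARGE_WEEKS
                    else "acute OM with perforation")
        return "dry perforation"
    if f.bulging:
        return "acute OM without perforation"
    if f.immobile:
        return ("persistent OM with effusion (OME)"
                if f.effusion_months >= 3
                else "episodic OM with effusion (OME)")
    return "aerated middle ear (no OM)"
```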
Algorithms 2–8: management of all forms of otitis media. The management of OM ranges from watchful waiting to long term and high dose antibiotic therapy (Box 1).
- Where antibiotics are indicated, amoxycillin (50 mg/kg/day for 7 days) is recommended.25 A worked example of the weight-based dose arithmetic is sketched after this list.
- For persistent OME or for OME in high risk children, amoxycillin (25 mg/kg/dose two times per day for 2–4 weeks) is recommended. Autoinflation devices may assist some children.
- When AOM persists for 7 days, we recommend increasing the dose to 90 mg/kg/day for a further 7 days, with treatment possibly continued at 50 mg/kg/day for a total of 4 weeks.25 For unresolved AOM we recommend amoxycillin–clavulanate at 90 mg/kg/day for 7 days.26 The same total daily dose can also be given in three divided doses if higher antibiotic levels are required.
- Where compliance is poor and refrigeration is not available, we recommend single‐dose azithromycin.27,28,29,30
- For children with known penicillin allergy, we recommend co‐trimoxazole.
- For high risk children with recurrent AOM or children at risk of developing AOM with perforation or CSOM, we recommend long term (3–6 months) prophylactic antibiotics (amoxycillin 25–50 mg/kg/day).
- For children with CSOM, topical quinolone antibiotics (ciprofloxacin, five drops twice a day) after cleaning are strongly recommended, but oral antibiotics alone are not routinely recommended. However, if topical antibiotics fail, then adjunct oral trimethoprim–sulfamethoxazole (8 mg/kg/day, calculated using the trimethoprim component, in two divided doses) can be recommended.
- Regular check‐up by a health professional is recommended for all children, at least once per year (Priority 5).
- Oral analgesics (eg, paracetamol, 15 mg/kg/dose every 4–6 hours if needed) reduce ear pain.31
- Under direct medical supervision, topical analgesia (lignocaine aqueous 2%) may provide short term pain relief.
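The weight-based regimens above reduce to simple arithmetic at the point of care. The sketch below is a minimal illustration only: the 14 kg weight is hypothetical, the split of the 50 and 90 mg/kg/day amoxycillin regimens into two daily doses follows the twice-daily schedule quoted for OME, and the helper function names are invented. It deliberately ignores rounding to available formulations, maximum doses and liquid concentrations, which prescribers must still apply.

```python
def daily_dose_mg(weight_kg: float, mg_per_kg_per_day: float) -> float:
    """Total daily dose for a weight-based regimen (illustration only)."""
    return weight_kg * mg_per_kg_per_day

def per_dose_mg(weight_kg: float, mg_per_kg_per_day: float, doses_per_day: int) -> float:
    """Size of each divided dose."""
    return daily_dose_mg(weight_kg, mg_per_kg_per_day) / doses_per_day

weight = 14.0  # hypothetical child, kg

# Standard-dose amoxycillin, 50 mg/kg/day in two divided doses
print(per_dose_mg(weight, 50, 2))  # 350.0 mg per dose (700 mg/day)

# Escalation when AOM persists at 7 days: 90 mg/kg/day
print(per_dose_mg(weight, 90, 2))  # 630.0 mg per dose (1260 mg/day)

# Adjunct co-trimoxazole for CSOM: 8 mg/kg/day of the trimethoprim
# component, in two divided doses
print(per_dose_mg(weight, 8, 2))   # 56.0 mg trimethoprim per dose
```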
Algorithm 9: management of tympanostomy tube otorrhoea. Refer to the treating ear, nose and throat (ENT) specialist if the child has continuous TTO for 4 weeks despite treatment, intermittent or recurrent TTO for 3 months, or any complication.
Regular cleaning and use of topical ciprofloxacin drops are strongly recommended for the management of children with uncomplicated TTO. Topical steroid formulations are not recommended (Box 1).32 Weekly follow‐up reviews for 4 consecutive weeks are recommended.
Fever (> 37.5°C), external ear cellulitis or bleeding indicate complicated TTO; systemic antibiotics that provide gram‐negative cover (seek advice of an infectious diseases specialist) are recommended for fever and outer ear cellulitis, and topical ciprofloxacin and hydrocortisone are recommended for bleeding associated with TTO.33
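The TTO referral and escalation criteria above are essentially a checklist. The sketch below encodes them as a minimal triage helper using only the thresholds stated in this section (4 weeks of continuous otorrhoea despite treatment, 3 months of intermittent or recurrent otorrhoea, fever above 37.5°C, external ear cellulitis, bleeding); the field and function names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TTOEpisode:
    continuous_weeks_despite_treatment: float  # continuous otorrhoea on treatment
    intermittent_months: float                 # intermittent or recurrent otorrhoea
    temperature_c: float
    external_ear_cellulitis: bool
    bleeding: bool

def is_complicated(e: TTOEpisode) -> bool:
    """Fever (> 37.5 C), external ear cellulitis or bleeding indicates complicated TTO."""
    return e.temperature_c > 37.5 or e.external_ear_cellulitis or e.bleeding

def refer_to_ent(e: TTOEpisode) -> bool:
    """Refer for continuous TTO for 4 weeks despite treatment, intermittent or
    recurrent TTO for 3 months, or any complication."""
    return (e.continuous_weeks_despite_treatment >= 4
            or e.intermittent_months >= 3
            or is_complicated(e))
```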
Algorithm 10: criteria for prioritisation of hearing and ENT assessments. According to the current classification of the World Health Organization, individuals with an average hearing level greater than 25 dB are considered to have some degree of hearing impairment. The Technical Advisory Group made a consensus recommendation for a level greater than 20 dB based on the burden of OM among Aboriginal and Torres Strait Islander children and the risk that lower levels of hearing loss may be associated with impacts on listening and communication skills development in this population.
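As a minimal sketch, and assuming the classification is applied to the average hearing level in the better ear, the function below combines the Technical Advisory Group's greater-than-20 dB consensus threshold with the mild (21–30 dB) and moderate (> 30 dB) bands listed earlier among the guideline changes; the function name and the handling of boundary values are assumptions, and the WHO classification would instead use a 25 dB cut-off.

```python
def classify_hearing_level(average_db_hl: float) -> str:
    """Classify an average hearing level (dB HL) in the better ear.

    Thresholds follow the 2020 OM guidelines as summarised in this article;
    the WHO classification would instead treat > 25 dB as impairment.
    """
    if average_db_hl <= 20:
        return "no hearing impairment"
    if average_db_hl <= 30:
        return "mild hearing impairment (21-30 dB)"
    return "moderate or greater hearing impairment (> 30 dB)"

# Example: a 28 dB average counts as mild impairment under these thresholds,
# and would also exceed the WHO 25 dB cut-off.
print(classify_hearing_level(28))
```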
Hearing assessment is recommended for OME (unilateral or bilateral) that persists for more than 3 months, recurrent AOM with or without perforation, CSOM, dry perforation for more than 3 months, and any speech, language, developmental delay or behavioural problems and any family concerns.
Children with episodic OME or AOM without perforation do not routinely require hearing assessment. Any child referred to an ENT specialist should be concurrently referred for a hearing assessment, to minimise consecutive waiting times.
Key messages for primary health care providers are summarised in Box 2.
Aboriginal and Torres Strait Islander children are the primary target population for the OM guideline recommendations although much of the data come from studies in other populations. In some areas (generally rural and remote communities, and in some urban settings), the clinical course of OM is characterised by an early age of onset, asymptomatic presentation, high prevalence and long duration of severe disease.34,35 Bacterial aetiology is common in high risk children, although viruses also play a role. Therefore, antimicrobial treatment is strongly recommended for high risk children. Recommendations may be different for other children and in other international guidelines, particularly regarding antimicrobial treatment.34,35
For children at low risk of CSOM, tympanostomy tube (grommet) insertion is strongly recommended if the child has persistent OME or OME and hearing loss greater than 30 dB and/or speech and language delay.36,37,38 Any child at high risk of CSOM should be referred for tympanostomy tube insertion if the child has bilateral persistent OME and/or speech and language delay, and if surgery is consistent with parental preferences.
Adenoidectomy alone is usually not recommended, although it is weakly recommended in children aged over 4 years with persistent bilateral OME despite previous tympanostomy tube insertions or if the child is at high risk of CSOM.39 Adenoidectomy as an adjunct to tympanostomy tube insertion is strongly recommended in children with persistent OME.
Tympanostomy tube insertion is a weak recommendation for children with recurrent AOM who are at high risk of CSOM, have hearing loss and/or speech and language difficulties and have failed to improve with long term prophylactic antibiotics.40,41,42,43,44,45
A consultation to determine a child’s hearing, communication and amplification needs is recommended for children with CSOM, persistent OME or dry perforation with persistent bilateral hearing loss averaging more than 30 dB in the better ear, or if ENT consultation is delayed more than 6 months or not available, or specialist medical treatment has been unsuccessful.
Audiological assessment and management
This section of the guidelines addresses five areas: preventing hearing loss and its impacts on listening and communication skills, identifying hearing loss early, referral and specialist input, supporting listening and communication skill development in young children, and communicating with patients and co‐workers who have hearing loss. This section provides information for health professionals and families regarding screening and surveillance, hearing assessment and rehabilitation options, and strategies for enhancing language acquisition at home, in early education centres and schools and for communicating with people with hearing loss.
The OM guidelines app, OMapp, has been designed to be used in the clinic, is compatible with multiple digital platforms and is accessible free via the App Store and Google Play. The app has four sections:
- Clinical (diagnosis and management): algorithms for all types of OM.
- Communicate: audio recordings in multiple Aboriginal languages to assist the caregivers’ understanding of messages and instructions regarding their child’s ear health and hearing needs.
- Education: includes pneumatic otoscopy videos and a quiz and cartoons to explain hearing loss simulation, how the ear works, how ear infections can be prevented, and understanding referral pathways.
- Guidelines: evidence summaries for all prevention and treatment strategies, recommendations and their strength, quality and confidence, effect size, overall benefit, “what happens” PICOT statements, and links to GRADEpro summary of findings tables. 10
Implications for policy and practice
Evidence‐based guideline implementation and uptake is strongly influenced by features that facilitate their use. Our 2020 OM guidelines have a strong focus on decision making by health care providers. It is anticipated that the OMapp will be used by all stakeholders who are involved with the prevention, diagnosis and management of OM in Aboriginal and Torres Strait Islander children.
- Recommendations for low or high risk criteria can be applied to all Australian children.
- Otoscopy and tympanometry, or pneumatic otoscopy, are recommended for the diagnosis of OM.
- The diagnostic algorithms use a stepwise decision process supported by images, with links to medical and audiological management strategies for low and high risk children.
- The Communicate section provides audio recordings in Aboriginal and Torres Strait Islander languages to assist health care providers to communicate their recommendations.
- An Education section includes video images and a quiz for health care providers, and animation cartoons for families.
- The Guideline section is also a rich source of global evidence for research prioritisation and translation into policy and practice.
For health care providers to implement the OM guideline recommendations, awareness and training programs will be required, as current medical education does not equip professionals with the knowledge and skills to follow these recommendations. Current high turnover of health professional staff in remote areas demands effective and efficient orientation and training to meet the health care needs of the community. For example, the guidelines recommend either otoscopy with tympanometry, or pneumatic otoscopy as the gold standard techniques for OM diagnosis, but this will require capacity building, and/or new equipment in some services. Advances in technology such as optical eardrum scanning and smartphone devices should be evaluated, as they have the potential to increase confidence and, therefore, uptake and investment in state‐of‐the‐art technologies and training.
Families can benefit from the culturally appropriate resources within the OMapp, such as the Communicate and Education sections. Awareness of these resources will need to be promoted.
The 2010 guidelines referred to children as high risk if they lived in populations with a CSOM prevalence greater than 4%. Prevalence data are scarce or out of date for jurisdictions outside the Northern Territory. The 2020 guidelines now refer to the low or high risk child or episode (risk of treatment failure), increasing the relevance of the OM guidelines to all Aboriginal and Torres Strait Islander children, including the majority living in metropolitan settings.
Implications for research
The GRADE approach provided a framework for assessing the quality and confidence of evidence available for best practice. The very low to moderate quality of most OM research and the relatively small effect size of most interventions have led to many weak or consensus recommendations. Ongoing updates will assist identification of areas for high priority research. Much of the evidence for best practice is not followed in under‐resourced primary health care. Many thousands of children in the Northern Territory and across Australia are on waiting lists for hearing tests and ENT consultations or surgeries. More research around models of best practice in primary health care, referral pathways and waiting list management is needed. Clinical trial networks should be encouraged to undertake trials to address these gaps and inefficiencies. However, these problems are not just research questions. Gaps in best practice are consequent on workforce and resourcing, which require government action.
Our multidisciplinary team included librarians, statisticians, epidemiologists, scientists, GPs, paediatricians, ENT surgeons, Aboriginal Health Practitioners, specialist medical trainees, audiologists and consumers. Technical expertise in multiple scripts for the OMapp, animations, interpreter services and artistic work have all contributed to the 2020 OM guidelines and OMapp.
Limitations to health care professional confidence in the 2020 OM guidelines include ease of use. Ongoing evidence updates, added features and improvements in OMapp functionality will be needed. Expertise is required in efficient and effective ways of gathering evidence, knowledge of the GRADE system for rating quality of studies, statistical expertise for meta‐analyses and generating summary of findings tables,10 and content expertise to prioritise research questions and to grade recommendations.
The 2020 OM guidelines and the new OMapp46 improve access to the most up‐to‐date critically appraised evidence on best practice in OM and hearing loss prevention and management for Aboriginal and Torres Strait Islander children across all Australian settings.
Box 1 – Algorithms for diagnosis and management of different types of otitis media for low and high* risk children
Diagnosis. Could this child have a middle ear infection (otitis media)?
Management. Episodic bilateral otitis media with effusion
Management. Persistent bilateral otitis media with effusion
Management. Acute otitis media without perforation
Management. Recurrent acute otitis media
Management. Acute otitis media with perforation
Management. Chronic suppurative otitis media
Management. Dry perforation
Management. Tympanostomy tube otorrhoea
Management. Could this child have an important hearing loss due to otitis media?
* High risk of treatment failure includes one or more of the following risk factors: living in a remote community, younger than 2 years of age, first episode of otitis media before 6 months of age, a family history of chronic suppurative otitis media, a current or previous tympanic membrane perforation, craniofacial abnormalities, cleft palate, Down syndrome, immunodeficiency, cochlear implants, developmental delay, hearing loss, or severe visual impairment.
Box 2 – Key messages for primary health care providers
- Let families know that hearing is important for learning culture and language, for learning English and for getting a job. Aboriginal and Torres Strait Islander children are at greatly increased risk of persistent and severe otitis media (OM) and poor hearing that can affect their whole lives.
- Let families know that severe OM can be prevented with improved and less crowded living conditions, more hand and face washing, breastfeeding, avoiding smoke exposure, and getting all vaccinations on time.
- Let families know the importance of attending the local health clinic as soon as possible whenever a baby or child develops ear pain or ear discharge.
- Let families know that they can ask for their child’s ears to be checked, even when the child is well. Health care providers should use either pneumatic video‐otoscopy or both video‐otoscopy and tympanometry whenever possible.
- Antibiotics (amoxycillin) are recommended for all children with acute otitis media with perforation and for children with acute otitis media without perforation if they are at high risk of chronic suppurative otitis media (CSOM). Antibiotics and regular review should be continued until the bulging and/or discharge have resolved. If discharge persists and the perforation size is bigger than a pinhole, topical antibiotic drops need to be added.
- CSOM should be diagnosed in children who have persistent ear discharge for at least 2 weeks. Effective treatment of CSOM requires a long term approach with regular dry mopping or syringing of ear discharge followed by the application of topical antibiotics.
- All children with persistent bilateral OM (all types) for > 3 months should have their hearing assessed, so that appropriate management and referrals can be planned.
- Let families of children with disabling hearing loss (> 30 dB) know the benefits of improved communication strategies, surgical procedures, and hearing aids.
- Let families know that all babies and young children learn to talk by hearing people. Babies and children with OM may have problems with hearing and learning. Families can help by encouraging a lot of talking, storytelling, reading books and following their child’s conversational focus.
- Aim to provide patients or families with the knowledge to manage their own health needs. Use communication techniques, language translation and resources that facilitate true understanding.
Provenance: Not commissioned; externally peer reviewed.
- 1. Leach AJ, Wigger C, Beissbarth J, et al. General health, otitis media, nasopharyngeal carriage and middle ear microbiology in Northern Territory Aboriginal children vaccinated during consecutive periods of 10‐valent or 13‐valent pneumococcal conjugate vaccines. Int J Pediatr Otorhinolaryngol 2016; 86: 224–232.
- 2. Morris PS, Leach AJ, Silberberg P, et al. Otitis media in young Aboriginal children from remote communities in Northern and Central Australia: a cross‐sectional survey. BMC Pediatr 2005; 5: 27–37.
- 3. Leach AJ, Wigger C, Andrews R, et al. Otitis media in children vaccinated during consecutive 7‐valent or 10‐valent pneumococcal conjugate vaccination schedules. BMC Pediatr 2014; 14: 200–211.
- 4. Russell DJ, Zhao Y, Guthridge S, et al. Patterns of resident health workforce turnover and retention in remote communities of the Northern Territory of Australia, 2013–2015. Hum Resour Health 2017; 15: 52–64.
- 5. Da Costa C, Eikelboom RH, Jacques A, et al. Does otitis media in early childhood affect later behavioural development? Results from the Western Australian Pregnancy Cohort (Raine) study. Clin Otolaryngol 2018; 43: 1036–1042.
- 6. Timms L, Williams C, Stokes SF, Kane R. Literacy skills of Australian Indigenous school children with and without otitis media and hearing loss. Int J Speech Lang Pathol 2014; 16: 327–334.
- 7. Williams CJ, Jacobs AM. The impact of otitis media on cognitive and educational outcomes. Med J Aust 2009; 191: S69–S72. https://www.mja.com.au/journal/2009/191/9/impact-otitis-media-cognitive-and-educational-outcomes
- 8. Su JY, He VY, Guthridge S, et al. The impact of hearing impairment on Aboriginal children’s school attendance in remote Northern Territory: a data linkage study. Aust N Z J Public Health 2019; 43: 544–550.
- 9. Morris P, Leach A, Shah P, et al: Recommendations for clinical care guidelines on the management of otitis media in Aboriginal and Torres Strait Islander Populations (April 2010) [Publications Approval No. D0419]. Canberra: Commonwealth of Australia. 2011. https://healthinfonet.ecu.edu.au/healthinfonet/getContent.php?linkid=591736&title=Recommendations+for+clinical+care+guidelines+on+the+management+of+otitis+media+in+Aboriginal+and+Torres+Strait+Islander+populations&contentid=22141_1 (viewed Jan 2021).
- 10. Menzies School of Health Research. Otitis media guidelines. Summary of findings. https://otitismediaguidelines.com/resources/SoF/SoF_Tables_1-51.pdf (viewed Jan 2021).
- 11. Guyatt G, Oxman AD, Akl EA, et al. GRADE guidelines: 1. Introduction‐GRADE evidence profiles and summary of findings tables. J Clin Epidemiol 2011; 64: 383–394.
- 12. Bowatte G, Tham R, Allen KJ, et al. Breastfeeding and childhood acute otitis media: a systematic review and meta‐analysis. Acta Paediatr 2015; 104: 85–95.
- 13. Uhari M, Mottonen M. An open randomized controlled trial of infection prevention in child day‐care centers. Pediatr Infect Dis J 1999; 18: 672–677.
- 14. Jones LL, Hassanien A, Cook DG, et al. Parental smoking and the risk of middle ear disease in children: a systematic review and meta‐analysis. Arch Pediatr Adolesc Med 2012; 166: 18–27.
- 15. Marchisio P, Consonni D, Baggi E, et al. Vitamin D supplementation reduces the risk of acute otitis media in otitis‐prone children. Pediatr Infect Dis J 2013; 32: 1055–1060.
- 16. Cohen R, Martin E, de La Rocque F, et al. Probiotics and prebiotics in preventing episodes of acute otitis media in high‐risk children: a randomized, double‐blind, placebo‐controlled study. Pediatr Infect Dis J 2013; 32: 810–814.
- 17. Hojsak I, Snovak N, Abdovic S, et al. Lactobacillus GG in the prevention of gastrointestinal and respiratory tract infections in children who attend day care centers: a randomized, double‐blind, placebo‐controlled trial. Clin Nutr 2010; 29: 312–316.
- 18. Liu S, Hu P, Du X, et al. Lactobacillus rhamnosus GG supplementation for preventing respiratory infections in children: a meta‐analysis of randomized, placebo‐controlled trials. Indian Pediatr 2013; 50: 377–381.
- 19. Rautava S, Salminen S, Isolauri E. Specific probiotics in reducing the risk of acute infections in infancy–a randomised, double‐blind, placebo‐controlled study. Br J Nutr 2009; 101: 1722–1726.
- 20. Taipale T, Pienihakkinen K, Isolauri E, et al. Bifidobacterium animalis subsp. lactis BB‐12 in reducing the risk of infections in infancy. Br J Nutr 2011; 105: 409–416.
- 21. Azarpazhooh A, Limeback H, Lawrence HP, Shah PS. Xylitol for preventing acute otitis media in children up to 12 years of age. Cochrane Database Syst Rev 2011; (8): CD007095.
- 22. Ewald H, Briel M, Vuichard D, et al. The clinical effectiveness of pneumococcal conjugate vaccines: a systematic review and meta‐analysis of randomized controlled trials. Dtsch Arztebl Int 2016; 113: 139–146.
- 23. Norhayati MN, Ho JJ, Azman MY. Influenza vaccines for preventing acute otitis media in infants and children. Cochrane Database Syst Rev 2015; (3): CD010089.
- 24. Gulani A, Sachdev HS. Zinc supplements for preventing otitis media. Cochrane Database Syst Rev 2014; (6): CD006639.
- 25. Venekamp RP, Burton MJ, van Dongen TM, et al. Antibiotics for otitis media with effusion in children. Cochrane Database Syst Rev 2016; (6): CD009163.
- 26. Venekamp RP, Sanders SL, Glasziou PP, et al. Antibiotics for acute otitis media in children. Cochrane Database Syst Rev 2015; (6): CD000219.
- 27. Arguedas A, Soley C, Kamicker BJ, Jorgensen DM. Single‐dose extended‐release azithromycin versus a 10‐day regimen of amoxicillin/clavulanate for the treatment of children with acute otitis media. Int J Infect Dis 2011; 15: e240–e248.
- 28. Courter JD, Baker WL, Nowak KS, et al. Increased clinical failures when treating acute otitis media with macrolides: a meta‐analysis. Ann Pharmacother 2010; 44: 471–478.
- 29. Kozyrskyj A, Klassen TP, Moffatt M, Harvey K. Short‐course antibiotics for acute otitis media. Cochrane Database Syst Rev 2010; (9): CD001095.
- 30. Morris PS, Gadil G, McCallum GB, et al. Single‐dose azithromycin versus seven days of amoxycillin in the treatment of acute otitis media in Aboriginal children (AATAAC): a double blind, randomised controlled trial. Med J Aust 2010; 192: 24–29. https://www.mja.com.au/journal/2010/192/1/single-dose-azithromycin-versus-seven-days-amoxycillin-treatment-acute-otitis
- 31. Sjoukes A, Venekamp RP, van de Pol AC, et al. Paracetamol (acetaminophen) or non‐steroidal anti‐inflammatory drugs, alone or combined, for pain relief in acute otitis media in children. Cochrane Database Syst Rev 2016; (12): CD011534.
- 32. Syed MI, Suller S, Browning GG, Akeroyd MA. Interventions for the prevention of postoperative ear discharge after insertion of ventilation tubes (grommets) in children. Cochrane Database Syst Rev 2013; (4): CD008512.
- 33. Venekamp RP, Javed F, van Dongen TM, Waddell A, Schilder AG. Interventions for children with ear discharge occurring at least two weeks following grommet (ventilation tube) insertion. Cochrane Database Syst Rev 2016; (11): CD011684.
- 34. Morris PS, Leach AJ. Acute and chronic otitis media. Pediatr Clin North Am 2009; 56: 1383–1399.
- 35. Venekamp RP, Damoiseaux RAMJ, Schilder AGM. Acute otitis media in children. Am Fam Physician 2017; 95: 109–110.
- 36. Browning GG, Rovers MM, Williamson I, et al. Grommets (ventilation tubes) for hearing loss associated with otitis media with effusion in children. Cochrane Database Syst Rev 2010; (1): CD001801.
- 37. Jassar P, Sibtain A, Marco D, et al. Infection rates after tympanostomy tube insertion, comparing Aboriginal and non‐Aboriginal children in the Northern Territory, Australia: a retrospective, comparative study. J Laryngol Otol 2009; 123: 497–501.
- 38. Medical Research Council Multicentre Otitis Media Study Group. Surgery for persistent otitis media with effusion: generalizability of results from the UK trial (TARGET). Trial of Alternative Regimens in Glue Ear Treatment. Clin Otolaryngol Allied Sci 2001; 26: 417–424.
- 39. Medical Research Council Multicentre Otitis Media Study Group. Adjuvant adenoidectomy in persistent bilateral otitis media with effusion: hearing and revision surgery outcomes through 2 years in the TARGET randomised trial. Clin Otolaryngol 2012; 37: 107–116.
- 40. Casselbrant ML, Kaleida PH, Rockette HE, et al. Efficacy of antimicrobial prophylaxis and of tympanostomy tube insertion for prevention of recurrent acute otitis media: results of a randomized clinical trial. Pediatr Infect Dis J 1992; 11: 278–286.
- 41. Gonzalez C, Arnold JE, Woody EA, et al. Prevention of recurrent acute otitis media: chemoprophylaxis versus tympanostomy tubes. Laryngoscope 1986; 96: 1330–1334.
- 42. Kujala T, Alho OP, Kristo A, et al. Quality of life after surgery for recurrent otitis media in a randomized controlled trial. Pediatr Infect Dis J 2014; 33: 715–719.
- 43. Kujala T, Alho OP, Luotonen J, et al. Tympanostomy with and without adenoidectomy for the prevention of recurrences of acute otitis media: a randomized controlled trial. Pediatr Infect Dis J 2012; 31: 565–569.
- 44. Le CT, Freeman DW, Fireman BH. Evaluation of ventilating tubes and myringotomy in the treatment of recurrent or persistent otitis media. Pediatr Infect Dis J 1991; 10: 2–11.
- 45. McDonald S, Langton Hewer CD, Nunez DA. Grommets (ventilation tubes) for recurrent acute otitis media in children. Cochrane Database Syst Rev 2008; (4): CD004741.
- 46. Menzies School of Health Research. Otitis media guidelines for Aboriginal and Torres Strait Islander children. 2020. https://otitismediaguidelines.com (viewed Jan 2021).
Publication of your online response is subject to the Medical Journal of Australia's editorial discretion. You will be notified by email within five working days should your response be accepted. | <urn:uuid:6193f198-3f58-4145-acae-2e53b3e56f5a> | CC-MAIN-2022-33 | https://www.mja.com.au/journal/2021/214/5/otitis-media-guidelines-australian-aboriginal-and-torres-strait-islander | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571692.3/warc/CC-MAIN-20220812105810-20220812135810-00097.warc.gz | en | 0.858711 | 7,736 | 2.8125 | 3 |
C2006/F2402 '10 OUTLINE OF LECTURE #12
(c) 2010 Dr. Deborah Mowshowitz, Columbia University, New York, NY. Last update 03/01/2010 01:05 PM
Handouts: 12A. Modes of communication between cells. 12B -- Signal Hypothesis -- Co-translational Import & 12C = How proteins insert into ER membrane.
I. How do cells signal to other cells?
A. Why & when is cell-cell communication needed?
1. For development to work properly
2. To co-ordinate functions in adult -- between cells, tissues and organs (w/o using nerves)
3. For nerve-nerve and nerve-target communication
B. Major types of secreted Signals -- classified here by type of cell that makes them and/or target location. See Handout 12A for pictures -- numbers of pictures match numbers below.
a. Signal molecule secreted by specialized cells in ductless (endocrine) gland
b. Gland secretes signal molecule (hormone) into blood.
c. Target cell is often far away. Acts long range. For an example see Becker fig. 14-23 (14-22).
d. Examples: Insulin, testosterone, TH (thyroid hormone)
e. Signal molecule is not always a protein -- can be a protein (insulin & many others) or not (testosterone or other steroid or TH)
2. Paracrine: See Becker fig. 14-1 & table 14-4 (6th ed) for paracrine (or autocrine) vs. endocrine.
a. Usually secreted by ordinary cells
b. Target cell is near by -- Receptor is on adjacent cells. Act locally.
c. Example -- many growth factors (such as EGF); Wnt proteins (products of WNT genes), AMH (AntiMullerian Hormone -- not an endocrine, in spite of name)
3. Autocrine: Like paracrine, except receptor is on same cell. ex. = some growth factors
a. Neuron secretes signal molecule.
b. Signal molecule acts as a neurotransmitter (NT)
c. NT acts on receptors on neighbor (gland, another neuron or muscle). Acts locally, like a paracrine.
d. Examples: norephinephrine, acetyl choline. Details when we get to nerves. (Note these are small molecules, not proteins.)
a. Neuron secretes signal molecule, as in previous case.
b. Signal molecule acts like a hormone (travels through blood to target).
c. Example: Epinephrine (adrenaline).
a. Exocrine gland secretions are released by ducts to outside of body (or to inside of body cavity)*.
b. Examples: sweat, tears.
c. Secretions on outside can carry signals → target in different individual = pheromones (detected by olfactory receptors in mammals).
*Exocrine secretions (not involved in signaling) can be released to spaces inside of the body that connect with the outside, such as the GI lumen. Example: enzymes of pancreas.
C. Other types of Signaling
1. Gap Junctions -- allow ions & currents to flow directly from cell to cell -- used in smooth muscle → synchronized contractions. Sadava fig. 15.19 (15.16).
2. Juxtacrine. Cell surface proteins from two different cells contact -- used in immune system. Similar to basic system, but signal molecule is not secreted -- remains on cell surface.
II. How do cells make secreted proteins?Sorting of Proteins to their Proper Place: Overview & Review (See handout 7C & Becker fig. 22-14 (20-14) or Sadava 12.15 (12.14) -- terminology in Sadava is slightly different.)
A. Fate of proteins made on free ribosomes
This is the default location for a soluble protein. If there are no "tags" at all, proteins stay in the cytoplasm.
1. Soluble Cytoplasm
2. Organelles that are not part of the endomembrane system (EMS).Proteins with the appropriate "tags" or localization signals can be imported post-translationally into organelles outside the EMS -- nuclei, mito, chloro or peroxisomes.
B. Fate of proteins made on attached ribosomes -- these become part of the endomembrane system and/or leave the cell.
primarily by co-translational import. (There is some post-translational import, esp. in unicellular eukaryotes.) Protein can
1. They enter the ER
2. Most proteins travel from ER to Golgi
3. Most proteins are sorted and processed in the Golgi
4. Where do the proteins and/or vesicles go next?
(vesicles involved in regulated secretion) → area near plasma membrane
a. Secretory vesicles
(1). Vesicles fuse with plasma membrane only in response to signal (such as hormone, change in local Na+ concentration, etc.)
(2). The 'signal' usually causes an increase in intracellular Ca++, which directly triggers the fusion, causing exocytosis.
(3). Fusion results in: Release of contents outside celland/or addition of material to cell membrane. Click here for animation #1 -- annotated & animation #2 -- larger but not annotated.
b. Default vesicles(vesicles involved in constitutive secretion) → plasma membrane → fuse automatically (constitutively) and release contents. Same as in (a) -- leads to addition of material to membrane or outside it. HOWEVER no signal is required for fusion. This is probably the "default" for proteins that are directed to the ER but have no additional directional information.
c. Vesicles containing hydrolases→ Lysosomes (details to be discussed in future lectures).
d. Vesicles containing other enzymes→ other parts of EMS (Some enzymes may stay in trans Golgi, but others bud off and go back to other parts of Golgi, ER, etc.)
C. Labeling -- How do you follow newly made molecules moving through the cell and/or on their way out? How do we know newly made proteins go from RER to Golgi etc.?
In examples of detection discussed previously, emphasis was on following molecules going in to the cell. This example is about following newly made molecules on their way out.
-- Add labeled precursors (small molecules) and measure incorporation into macromolecules.
1. General idea
a. Add labeled precursors, and take cell samples after increasing time intervals.
b. For each sample, wash out unused ('unincorporated') small molecules -- removes labeled molecules not used for synthesis so not incorporated into macromolecules. Radioactivity remaining in dif. parts of the cell is in macromolecules.
c. Use autoradiography for measurement of radioactivity in each cell part, or measure amounts in each isolated fraction.
2. A specific example-- following secreted proteins out. See graphs on handout 6C. Re-label the curves, left to right, with Rough ER, Golgi, Vesicles, Outside the cell.
a. Continuous label vs pulse-chase results (See graphs on handout 6C & fig. 12-10 of Becker. 6th ed. has curves; 7th ed. has autoradiographs.)
b. Implications: newly made proteins to be secreted go → RER → Golgi → secretory vesicles → outside (See Becker fig. 12-8.) Click here for animation.
To review labeling of newly made material, try 3-4D.
D. Another Type of Labeling -- Cell makes its own labeled (fluorescent) protein containing GFP
GFP has been mentioned before. Here are the details.
1. What is it? GFP = green fluorescent protein = small fluorescent protein made by jelly fish. (Click here for page with pictures of GFP and related fluorescent proteins.)
2. What is it good for? GFP is used as tag to follow proteins inside the cell.
3. How is GFP added to proteins? GFP is not added from outside. Instead, genetic engineering is used to splice the gene for GFP to the gene for the protein of interest. The recombinant gene makes a fusion protein = normal sequence of amino acid + sequence of amino acids in GFP. Fusion protein (including GFP) is made internally by the cell; the functioning protein part and the GFP part fold up separately (each forms a separate domain).
4. How does fusion protein work?
a. GFP part fluoresces. In other words, cell makes its own fluorescently tagged version of the protein.
b. Functioning protein part usually works normally, but location of protein can be easily followed in cell, because protein has GFP attached. GFP labeled protein is used for many purposes, including following newly made protein through the cell.
5. Examples. For examples of use of GFP, see Becker fig. A-14, or Purves p. 885 (7th ed). Not in 8th ed. (Sadava).
GFP is often used to identify cells that express (turn on) a particular gene. For an example see this picture. The cells that "light up" are the only ones that express (turn on) the fusion gene. Only these cells produce a fusion protein containing GFP. (This example also illustrates why people use small, transparent organisms as "model organisms.") For a really startling picture, try this picture. For the accompanying article, click here.
See also: http://nobelprize.org/nobel_prizes/chemistry/laureates/2008/presentation-speech.html.
See problem 2R-4 for an example of the use of GFP labeling.
To review the material so far, you can try problems 3-1, A & B, 3-16, A & B, & 3-17. Alternatively, you may find it easier to wait until after section III.
A. Signal hypothesis -- How ribosomes get to the ER & Protein enters ER -- See handout 12B. Steps listed below refer to handout. See Becker fig. 22-16 or Sadava fig. 12.16 (12.15).
1. What is the Signal Hypothesis? Ribosome unattached to ER starts making protein. (Step 1.) If nascent (growing, incomplete) peptide has a "signal peptide," then ribosome plus growing chain will attach to ER membrane, and growing chain will enter ER as it grows.
2. How does ribosome get to the ER?
a. Signal peptide (SP) = section of growing peptide (usually on amino end) does not bind directly to the ER. Binds to 'middleman' called SRP. (Step 2.)
b. SRP = signal recognition particle = example of an RNP (ribonucleoprotein
3. How does Growing Chain enter ER? Takes two steps (3 & 4)
a. ER has SRP receptor. Receptor is also called docking protein. SRP binds to SRP receptor/docking protein, not directly to pore. (Step 3.)
b. ER has Translocon (gated pore or channel through membrane) which allows growing chain to pass through membrane as chain is made. Pore is closed until ribosome with growing chain gets into position.
Note on terminology: The term "translocon" is used in (at least) 2 different ways. Sometimes it is used to mean only the channel itself, and sometimes it is used to mean the whole complex of proteins required for translocation of proteins across the membrane -- the channel, SRP receptor, etc. It should be clear from context which usage is meant at any one time.
c. Ribosome/translocon complex formation occurs. (step 4 = complex step involving several events)
- SRP is released, recycles -- GTP split
- Ribosome binds to pore/translocon
- Translocon opens & peptide enters (as loop).
- Ribosome resumes translation.
d. Role of Middleman. Middleman needed for entry into ER through translocon or entry into nucleus through nuclear pores. Processes probably similar. In each case there is a protein system with three components -- Protein with LS (NLS or SP) binds to "middle man" or "ferry proteins" (importins or SRP) which bind to pore. Additional proteins (that we will ignore) are required as well, and GTP is used to drive transport in both cases. See Becker fig. 18-30 for a model.
What You Need -- Category
to Get into Nucleus**
to Get into ER
Middle Man or "ferry protein/particle"
Transporter proteins (importin)*
Surface receptor protein(s)
Nuclear Pore Complex
Docking Protein (SRP receptor)
*A middle man protein is required to enter or exit the nucleus, but different ones are used in for entry vs exit. Exportin is needed to get out of the nucleus, while importin is needed to get in.
** This is how soluble nuclear proteins are imported into the nucleus Integral (transmembrane) proteins of the nuclear membrane (nuclear envelope) are probably made on the ER, and slide laterally into the outer & inner nuclear membranes (continuous with the ER). Once in place, TM proteins are anchored by binding to lamins or other internal nuclear proteins.
4. How does a new protein end up in the lumen?
. Translation (and movement through translocon) continues (step 5)
b. Translation (and translocation) are completed, translocon closes.(step 6)
c. Signal Peptidase cuts off signal peptide at arrow.(step 7)
Now try problem 3-13, especially part D. (If some of the parts are not obvious, wait until later.)
B. How do proteins cross or enter the ER membrane? (See handout 12C and/or fig. 22-17 of Becker)
1. How proteins enter/pass through the membrane -- important points
a. SP probably forms loop not arrow. Loop enters channel (translocon) in membrane. SP loop is probably what opens (gates) the channel on the cytoplasmic side.
b. Protein enters as it is made. In humans, growing protein chains usually enter the ER as the chains are synthesized (co-translational import).
Note: In unicellular organisms, soluble proteins destined for the ER lumen often enter the ER after they are finished (post-translational import). Post translational import into the ER will be ignored here, but is covered at length in cell biology.
c. How do transmembrane proteins get anchored in the membrane? A hydrophobic sequence may trigger opening of the pore sideways, so protein slides out of pore, laterally, into lipid bilayer. These hydrophobic sequences are called 'stop-transfer' sequences and/or 'anchor' sequences.
d. Where will protein end up? Protein can go all the way through the membrane and end up as a soluble protein in the lumen (as in example above, on 12B) or protein can go part way through and end up as a transmembrane protein. Depends on sequence of protein.
2. Types of Proteins that can result (see handout 12C)
a. Soluble protein in lumen. Happens if protein passes all the way through the membrane and SP (on amino end) is removed, as above.
b. Integral membrane protein anchored in membrane by SP with no cytoplasmic domain. This happens if SP is on the amino end and is not removed.
c. Single Pass transmembrane protein -- get one of 2 possibilities:
(1). Type 1: Amino end is on lumen side of membrane (on E side); Carboxyl end is in cytoplasm (on P side of membrane)
One way this could happen: If SP is on amino end, and SP removed, and there is a hydrophobic sequence (acting as a stop-transfer or anchor sequence) in the middle of the peptide.
(2). Type 2: Carboxyl end is on lumen side of membrane (on E side); Amino end is in cytoplasm (on P side)
One way this could happen: If SP is in the middle, not on amino end. SP in this case is not removed -- it becomes the transmembrane domain of the protein. (SP doubles as stop-transfer or anchor sequence.)
d. Multipass transmembrane protein. (Requires one SP and several hydrophobic (start/stop) sequences.
(1). Hydrophobic sequences can stop the process (of moving through pore) and anchor protein in membrane, as explained above.
(2). Hydrophobic sequences in the middle of the peptide can restart looping → multipass protein. These are usually called "start-transfer" sequences (see 4).
(3). 'Start-transfer' and 'stop-transfer' sequences are probably equivalent. Role depends on where in protein they occur. (Both start- and stop-transfer sequences are also called 'topogenic sequences' as they determine the topology of the finished peptide.)
(4) A sequence that starts or restarts passage of a protein through the translocon is usually called a 'start transfer sequence' even if it also doubles as a stop or anchor in the membrane.
e. Lipid Anchored Proteins (FYI): Proteins to be anchored to lipids on the outside of the plasma membrane are generally made as follows: Protein is made on RER and inserted into the ER membrane. After the protein reaches the plasma membrane, the extracellular domain is detached from the rest of the protein and attached to lipid. (Proteins to be anchored to the plasma membrane on the inside are made on cytoplasmic ribosomes.) See Becker if you are curious about the details.
By now you should be
able to do problems 3-1 to 3-3 & 3-4, A-B.
IV. What Else Happens in/on the ER?
A. What happens inside ER
1. Terminal SP usually removed. Signal peptidase (enzyme) inside ER recognizes a particular sequence of amino acids next to the SP. If this sequence is present in the protein, signal peptidase cuts the peptide chain at that point. (If this sequence is absent, SP is not removed.)
2. Folding of protein -- requires chaperones.
a. Chaperones (also called chaperonins) -- proteins needed to assist in protein folding. Chaperones are used every time a protein remains unfolded or becomes unfolded to cross a membrane (or refolds on the other side). Different chaperones are found in different parts of the cell.
b. Chaperones are of two major types (families) -- HSP 60 (forms barrel) or HSP 70 (binds to hydrophobic regions). Differ in molecular weight (60 K vs 70 K) and mode of action. (See an advanced text if you are curious about the mechanisms.)
c. The major chaperone inside the ER is a member of the HSP 70 family, also called "BiP"
d. Why are chaperones named HSP 60, HSP 70? HSP = heat shock protein. Chaperones, aka HSP's, are made in large amounts after exposure to high temperatures. (That's how they were first discovered.)
e. Final shape. Amino acid sequence of protein determines final, folded shape, but chaperone is needed to help reach final state.
3. Enzymatic Modifications. The appropriate enzymes inside the ER catalyze the following:. In eukaryotes, all S-S bonds are formed in proteins inside the ER. Proteins made in the cytoplasm do not have S-S bonds. Cytoplasmic proteins do contain cysteines and have free SH groups.
a. Making of S-S bonds
b. Start of N-glycosylation. Oligosaccacharides are added to the N of the amide of asparagine side chains (this is called N glycosylation.) See Becker fig. 12-7 if you are curious about the biochemical details. Additional steps of glycosylation occur in the Golgi; details below.
c. Removal of SP as above.
4. Some proteins stay in ER (in lumen or membrane); most move on to Golgi.
5. What happens to proteins in ER that do not fold properly? See Becker p. 750-752 (755-757) .
a. Transport to cytosol -- Unfolded proteins are transported back to the cytosol (through the translocon -- mechanism unknown).
b. Ubiquitin addition -- in cytosol, proteins (from ER or cytosol) to be degraded are marked for destruction by addition of a multiple molecules of a small protein called ubiquitin to side chains of lysine. (See Becker or advanced texts if you are interested in the enzymatic details.)
c. Role of Proteasome = a large protein complex in cytosol that degrades ubiquitinylated proteins to fragments, at expense of ATP. Major site of degradation of intracellular proteins. (Proteins from outside are generally degraded in lysosomes.)
d. What goes to the proteasome? Proteins that are misfolded, damaged, or have served their function.
A major proportion of all proteins made in cell do not fold properly and are degraded.
Destruction of many proteins is regulated -- level of protein activity can be controlled by protein degradation as well as by rate of synthesis, feed back of activity, modification, etc. More details and/or examples to follow.
e. 2004 Nobel Prize for Chemistry was awarded to Aaron Ciechanover, Avram Hershko and Irwin Rose for the discovery of the ubiquitin/proteasome system.
B. What happens on outside of ER (besides protein synthesis)
1. Lipid synthesis --(exchange) proteins.
a. Insertion: Lipids made and inserted on cytoplasmic side (cytoplasmic leaflet) of membrane by enzymes attached to/in membrane.
b. Flipping: Enzymes ('flippases' = transporters) are required to move amphipathic lipids from one leaflet (P side) of membrane to other leaflet (E side). If lipids are moved preferentially from one side of membrane to the other, transport is active and requires ATP.
c. Transport: Lipids can reach parts of cell not connected to ER through vesicles and/or transport
2. Some detoxifications and other reactions are catalyzed by proteins on the cyto side of ER. See text for details if interested.
To review the structure and function of the ER, try problem 3-4.
1. Two sides of stack
a. cis/forming face (side closest to nucleus & ER)
b. trans/maturing face (away from nucleus)
2. Three basic parts or compartments in a stack
a. CGN (cis-Golgi network) or cis Golgi -- may include fusing vesicles
b. medial cisternae (sacs) -- part in between 'cis' and 'trans' Golgi
c. TGN (trans-Golgi network) or trans Golgi -- may include budding vesicles
3. Different marker enzymes/functions found in different parts. (See Becker figs 12-5 & 12-6) Enzymes unique to any one cell organelle or compartment are called 'marker enzymes' = their presence is a 'marker' for the presence of that compartment or organelle.
4. Sacs in stack connected by vesicle traffic -- not completely clear which way transport vesicles go or what they carry. (See next time.) It is clear that newly made protein and lipid passes through the Golgi from the cis face to the trans face, as shown on this animation.
C. Function -- what reactions take place inside Golgi?
-- oligosaccharide that was added to glycoproteins in ER is modified. These oligosaccharides are attached to "N" of amide side chains of asparagines (asn's).
1. Finish N glycosylation
2. Do O glycosylation of glycoproteins. Sugars are added to "O" of the hydroxyl of the side chain of ser & thr.
3. Assemble sugars of proteoglycans (linear chains of repeating sequence = GAGs)
4. Concentrate, sort proteins. This occurs at trans face (TGN). Different areas of Golgi have receptors that trap proteins going to different destinations.
To review how proteins are directed to the right place and modified in the ER and Golgi, try problem 3-2.
Next time: Wrap up of Golgi structure; then -- How are materials transported through the Golgi stacks? | <urn:uuid:31263488-0f3f-47de-aedc-ee2832881c2d> | CC-MAIN-2022-33 | http://www.columbia.edu/cu/biology/courses/c2006/lectures10/lect12.10.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573163.7/warc/CC-MAIN-20220818033705-20220818063705-00097.warc.gz | en | 0.903424 | 5,545 | 3.171875 | 3 |
The earthquake and tsunami that struck the East Coast of Japan in 2011 killed nearly 20,000 people, displaced 500,000, caused $360 billion in economic damage and destroyed 138,000 buildings. It also created a large, coastal uninhabitable zone and left many shoreline residents unsure about rebuilding their residences and their lives.
Two-and-a-half years later, these issues still resonate. As the Brookings Institution reported, “The reconstruction challenges remain daunting for Japan. Hundreds of thousands of people are still displaced, the quality of the nuclear cleanup continues to raise concerns and the financial cost of rebuilding the Tohoku region is staggering.”
The Japanese government has pledged a massive, long-term reconstruction budget of $262 billion. But the question has to be asked: Given the frequency of devastating natural disasters in earthquake-prone regions of Japan, as well as the likelihood of a sea-level rise as a result of climate change, should population-intensive human settlements be rebuilt just as they were?
Scientists and other experts are questioning the wisdom of such policies. It was a topic at the May 2013 Wharton Global Forum in Tokyo, organized by the Initiative for Global Environmental Leadership (IGEL) at Wharton, in a session titled “Risk, Challenges and Opportunities: Lessons Learned from 3/11.”
The issue is also relevant in the wake of Hurricane Sandy in the U.S., where federal insurance until recently was heavily subsidized, making it easier for some residents to repeatedly rebuild coastal property. As is often the case, financial and political considerations, including the high valuation of shoreline homes and businesses, continue to influence policy decisions.
National Geographic, in a 2013 article titled “Rising Seas,” predicts that coastal storm damage is set to rise dramatically. Shoreline cities, it said, “face a twofold threat: Inexorably rising oceans will gradually inundate low-lying areas, and higher seas will extend the ruinous reach of storm surges. The threat will never go away; it will only worsen. By the end of the century a 100-year storm surge like Sandy’s might occur every decade or less…. By the next century, if not sooner, large numbers of people will have to abandon coastal areas in Florida and other parts of the world.” In 2070, according to the Organisation for Economic Co-operation and Development, the at-risk population in large port cities could reach 150 million, with $35 trillion worth of property under threat.
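The return-period arithmetic behind these projections is easy to underestimate. A "100-year" surge has roughly a 1% chance of arriving in any given year, and those odds compound over the life of a mortgage or a seawall; if climate change turns it into a once-a-decade event, the compounding becomes overwhelming. The short Python sketch below uses illustrative return periods and horizons (not figures from the article) to make the point:

```python
# Chance of at least one surge of a given return period over a planning
# horizon, assuming independent years (an illustrative simplification).

def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    annual_prob = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_prob) ** horizon_years

for return_period in (100, 10):    # today's "100-year" surge vs. a once-a-decade surge
    for horizon in (30, 75):       # a mortgage term and a rough design life for infrastructure
        p = prob_at_least_one(return_period, horizon)
        print(f"{return_period}-year surge, {horizon}-year horizon: "
              f"{p:.0%} chance of at least one")
```

On these assumptions, a 100-year surge is roughly a one-in-four proposition over a 30-year mortgage, while a once-a-decade surge is all but certain.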
Abandoning coastal property, no matter how it may be threatened by future natural disasters, is difficult for people worldwide. In Japan, the post-Fukushima challenge is complicated by both the nature of the destruction and the limited options available in such a small island nation. Japan, says Robert Giegengack, professor of earth and environmental science at Penn, “is almost all coast. It’s coast and Mount Fuji.”
Yet people affected by the nuclear disaster have to relocate. The tragedy in Japan was not just “the thousands of people who were killed, and the people who were made sick by radiation sickness and will die within decades, but also that you have this beautiful region of the country that’s been decimated for many hundreds of years,” said Eric W. Orts, a Wharton professor of legal studies and business ethics who chaired the Wharton Forum panel in Tokyo and who also heads IGEL. Erwann Michel-Kerjan, managing director of the Risk Management and Decision Processes Center at Wharton, adds that people in Japan “want to stay where they are — they don’t want to move — but nuclear contamination means that hundreds of miles of coastline may be lost.”
Unlike Japan, says Michel-Kerjan, “the U.S. is huge. We really could relocate entire cities elsewhere.” But in the absence of an immediate and lethal threat, such as nuclear contamination, it’s much harder to declare property off-limits. “When people’s houses are destroyed, they say, ‘I will rebuild again right here,’” he notes. “And politicians, mayors or governors, how many of them will say, ‘You guys are out.’ They know they wouldn’t be re-elected if they said that. In any case, these aren’t easy questions to answer, because some of the people affected have been living in those locations for generations.”
The Catastrophe Paradox
Given that large-scale earthquakes and tsunamis regularly assault the Japanese coast (though not usually of such magnitude and usually not together), why were reactors like Fukushima Daiichi built along fault lines?
According to J. Mark Ramseyer, a professor of Japanese legal studies at Harvard Law School, it’s because owners such as the Tokyo Electric Power Company (TEPCO) faced limited liability. In a 2011 journal article for Theoretical Inquiries in Law, he argues that TEPCO “would not pay the full cost of a meltdown anyway.… It could externalize the cost of running reactors. In most industries, firms rarely risk tort damages so enormous they cannot pay them. In nuclear power, ‘unpayable’ potential liability is routine. Privately owned companies bear the cost of an accident only up to the fire-sale value of their net assets. Beyond that, they pay nothing — and the damages from a nuclear disaster easily soar past that point.”
Total claims against Tokyo Electric have been estimated by Bank of America/Merrill Lynch to reach $31 billion to $49 billion, well beyond the pre-storm market capitalization of the company. Beyond that amount, Ramseyer says, “any losses fell on its victims — or if the government so chose, on taxpayers.”
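Ramseyer's externalization argument is, at bottom, a matter of arithmetic: limited liability caps a firm's share of disaster costs at the fire-sale value of its net assets, and everything above the cap falls on victims or taxpayers. The sketch below illustrates that cap with assumed figures; the damage and asset numbers are hypothetical and are not TEPCO's actual balance sheet:

```python
# Hypothetical illustration of the limited-liability cap: the firm pays
# damages only up to its net asset value; the rest is externalized.

def split_costs(damages: float, firm_net_assets: float) -> tuple[float, float]:
    borne_by_firm = min(damages, firm_net_assets)
    externalized = damages - borne_by_firm
    return borne_by_firm, externalized

damages = 40e9          # assumed meltdown damages, roughly mid-range of the estimates above
firm_net_assets = 15e9  # assumed fire-sale value of the operator's net assets (hypothetical)

firm_share, external_share = split_costs(damages, firm_net_assets)
print(f"Borne by the firm: ${firm_share / 1e9:.0f} billion")
print(f"Externalized:      ${external_share / 1e9:.0f} billion")
```

With the downside capped this way, the private cost of siting a plant in a hazardous location can be far smaller than the social cost, which is the core of Ramseyer's claim.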
Ramseyer’s point also applies to homeowners living on the coast in both Japan and the U.S., for whom routine rebuilding, often at public expense, has been a given. But the catastrophic nature of the recent Japanese and American coastal disasters has led to some rethinking of those assumptions.
The World Bank estimated the cost of the Japanese catastrophe at $235 billion, plus $125 billion related to shutdowns and delays in business recovery. The Japanese government has pledged huge long-term aid, but so far has offered a fraction of this amount for rebuilding efforts. It faces its own budget issues — Japan has the highest level of public debt in the world.
Questions are arising about spending many billions on rebuilding, only to face another devastating event, but that is indeed what has been proposed in the Tohoku area. Satoshi Kitahama, representative director of the Kizuna Foundation in Tokyo (a non-profit created to aid the survivors of the March 11, 2011 twin disasters), asks if that effort — though it may be emotionally satisfying — is economically viable. Given a small and aging population of just 20,000 people, with a limited number of those in the workforce, he says that paying residents to relocate might be a more viable option.
“I have suggested to many of the mayors — just pay them,” he says. “You can’t hold [residents] hostage for the nostalgia of what this used to be, because it is never coming back to that.” He suggests that efforts to raise low-lying areas or replant ancient forests are poor public policy. “Instead of dispersing those funds and letting the individual decide what to do with them, they put it into projects like this, spending billions of dollars for a population of 20,000,” he said. “Instead, give people a couple of hundred thousand dollars per resident and let them make the decision. Let people move to higher ground, to other parts of Japan.”
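Kitahama's alternative lends itself to back-of-the-envelope arithmetic. The figures below are assumptions for illustration (only the 20,000-resident population and the rough size of the per-person payment come from the discussion above), but they show how a direct buyout stacks up against a multi-billion-dollar reconstruction program:

```python
# Back-of-the-envelope comparison of a per-resident payment with a
# reconstruction program; all figures are assumed round numbers.

population = 20_000               # residents in the affected area, as cited above
payment_per_resident = 200_000    # "a couple of hundred thousand dollars" each
reconstruction_budget = 5e9       # assumed multi-billion-dollar rebuilding program

buyout_cost = population * payment_per_resident
print(f"Direct buyout:       ${buyout_cost / 1e9:.1f} billion")
print(f"Reconstruction plan: ${reconstruction_budget / 1e9:.1f} billion")
print(f"Buyout equals {buyout_cost / reconstruction_budget:.0%} of the reconstruction budget")
```

Even where the totals come out comparable, the buyout leaves the decision to stay, move to higher ground or leave the region with residents rather than committing the money to fixed projects.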
Major Commitments, But Talk of Retreat
As in Japan, the U.S. has made a major federal commitment to rebuilding the Northeast after Hurricane Sandy, pledging $50 billion, much of which has not yet been spent. New York Mayor Michael Bloomberg, warning of more storms ahead and a predicted sea-level rise of as much as 31 inches by 2050, has asked for $20 billion to erect flood barriers, including dunes and bulkheads, to protect low-lying areas.
It’s easy to see how, in a litigious society, rebuilding, rather than relocating, became the priority. A small coastal town in New Jersey, Harvey Cedars, had the prescience to work with the federal government’s Army Corps of Engineers on a $26 million plan to protect itself against the storms that have repeatedly caused major damage and wiped out both beach and beachfront property.
Some homeowners held out against signing on to the project, which required the building of sand dunes on their property — and in some cases destroyed their view. Harvey and Phyllis Karan took opposition to their dune further than most — to court — and as reported in a 2013 article for The New Yorker, won a $375,000 judgment in March of 2012.
Seven months later, in October, Hurricane Sandy hit the Northeast, taking 159 lives, causing $69 billion in damages, and carrying away 37 million cubic yards of sand. But most of dune-sheltered Harvey Cedars was spared, including the Karans’ house. Despite that, their lawsuit continued, though their financial verdict was overturned last July. “All we wanted was our view,” said Phyllis Karan.
But rebuilding the Northeastern shore and moving on won’t be simple, especially in the wake of expensive new building requirements (some homes will have to go up on pilings) and escalating federal flood insurance premiums that can reach $30,000 annually. The National Flood Insurance Program (NFIP), managed by the Federal Emergency Management Agency (FEMA), has long provided subsidized coverage to property owners, but since early 2013 it has been phasing out subsidies for second homes and vacation residences, with premiums rising 25% annually until they reach actual market rates.
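Because those 25% increases compound, subsidized premiums close the gap to full risk-based rates faster than a linear reading suggests. The sketch below is a simplified model with assumed premium levels, not actual NFIP rate tables:

```python
# Years for a subsidized flood premium to reach the full risk-based rate
# when it rises 25% per year (a simplified model of the phase-out).

def years_to_full_rate(premium: float, full_rate: float, annual_increase: float = 0.25) -> int:
    years = 0
    while premium < full_rate:
        premium *= 1.0 + annual_increase
        years += 1
    return years

full_rate = 10_000                           # assumed actuarial premium for a high-risk home
for subsidized in (2_000, 4_000, 6_000):     # assumed current subsidized premiums
    n = years_to_full_rate(subsidized, full_rate)
    print(f"${subsidized:,} reaches ${full_rate:,} in about {n} years at 25% per year")
```

On these assumptions, a premium starting at one-fifth of the actuarial rate reaches it in about eight years, and one starting at 60% gets there in about three.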
Until recently, the default position was that property would routinely be rebuilt at federal expense. As Justin Gillis and Felicity Barringer wrote in The New York Times in late 2012, “Across the nation, tens of billions of tax dollars have been spent on subsidizing coastal reconstruction in the aftermath of storms, usually with little consideration of whether it actually makes sense to keep rebuilding in disaster-prone areas.”
Marshall W. Meyer, a Wharton professor of management with a specialty in Asia, said of the FEMA insurance program that many people think it over-insures: “the government shouldn’t be putting public funds at risk to insure homes on the New Jersey shore.”
The Times cites the example of 1,300-resident Dauphin Island on the Gulf Coast, which has been repeatedly battered by a dozen hurricanes and storms — and rebuilt each time.
According to Kitahama, speaking at Wharton’s Tokyo forum, “When thinking about how to rebuild, it’s very difficult. There will be another quake on the coast of Japan, and communities exist there in areas that have been inundated in the past. Also, some areas that were devastated this time, like Tohoku, had never had a quake or a tsunami.” One city that saw widespread damage “was in a safe zone.”
Reconstruction — in Japan and New York
Kitahama noted that there has been “a lot of focus on reconstruction, because that is the easy way to demonstrate action by the government — but it’s not really what’s needed.” He pointed to action by New York City Mayor Michael Bloomberg to buy heavily damaged coastal property and put it into no-build zones.
“This is not happening in Tohoku,” Kitahama said, “so some are sitting on properties deemed to be in non-build areas, but they haven’t been given any kind of offer for their land.” Kitahama said that one of the best uses for the land in no-build zones would be as locations for renewable energy farms (see the separate report on alternative energy prospects for Japan).
In reality, New York’s efforts have been far from decisive, reflecting the high stakes involved in any decision about valuable coastal property. Governor Andrew Cuomo launched a $400 million homeowner buyout, and the Bloomberg administration followed up last spring with a $1.8 billion effort using federal Community Development Block Grant funding.
Cuomo’s plan, which also leverages federal funds, is unambiguous about what should be done with the abandoned property. “There are some parcels that Mother Nature owns,” Cuomo stated.
But Brad Gair, director of New York City’s housing recovery office, said its own funding is not oriented toward turning stricken property into open space — instead, it will be offered for redevelopment by new buyers. “If there is one element that we have not yet come to full alignment on,” he noted, “it’s whether properties acquired should be made permanently open space or whether some of those would be suitable for redevelopment — preferably for the home owners in the area. These are valuable properties. There is a limited amount of coastline properties.”
In announcing the $1.8 billion in grants, deputy mayor for operations Caswell Holloway said in early 2013 that the money would go to “restore neighborhoods, re-open businesses, and better protect our coast and coastal communities from the dangers of climate change.”
In the New York area alone, more than 300,000 housing units were damaged or destroyed by Hurricane Sandy (with repair costs estimated at $9.6 billion), but city officials predict that only 10% to 15% will agree to city or state buyout offers. The storm has totally transformed the real estate market in some Northeastern shoreline communities. Although most homeowners are rebuilding, new buyers are asking questions about flood map zones, federal insurance and building elevations — and if they don’t like the answers, they’re looking for property elsewhere.
President Obama’s Hurricane Sandy Rebuilding Task Force issued a report in August 2013 that documents $110 billion in damages from 11 U.S. climate-related natural disasters in just the last year ($69 billion of that from Hurricane Sandy). The report, which embraces resilience as the new planning paradigm for disaster relief, makes sobering reading. It recognizes the elevated risk to shorelines from climate change, and suggests that such recognition be incorporated into all future relief planning. And it says that there may be limits to rebuilding efforts — despite new, stronger building codes that require elevating buildings above the high-water mark.
“Over time,” the report noted, “the ability to incrementally increase the height of flood control structures may be limited. Some communities are already facing limits to their ability to adapt to risk, presenting challenging questions for policy makers about managing consequences…. Understanding the limits of tolerable risk is an active area of research and public debate.”
Taxpayers, opined the Times in an editorial on the federal report, “should not be paying to rebuild and then re-rebuild as the sea level rises. Even those politicians who say they still don’t believe in climate change must see that the system needs fixing.”
Insurance and Catastrophe Planning
The shock of responding to such a severe and fast-moving event as the Japanese earthquake and tsunami has heightened emotions and complicated rebuilding plans. Howard Kunreuther, a Wharton professor and co-director of the Wharton Risk Management and Decision Processes Center, argues that people “aren’t prepared for low-probability, high-consequence events — the likelihood is very small, so it’s below a threshold level of concern. The general feeling that an earthquake of that magnitude coupled with a tsunami was not going to happen.”
For companies, including those in Japan, Kunreuther adds, “The event is seen as so catastrophic, there’s no reason to prepare for it. Small companies may not take protective measures because they can’t afford it — if a major event occurs, they’ll just go under.”
Kunreuther argues that short-term insurance is part of the problem. “The industry has traditionally looked at annual policies,” he says. “But there is very little concern over climate change or other long-term effects in setting rates with one-year policies. We have been arguing for five-year policies so the costs can be spread over multiple years — but there’s not a lot of movement on that.”
Robert Meyer, a Wharton marketing professor who also is co-director of the school’s Risk Management and Decision Processes Center, says that simply having flood insurance available, even at federally subsidized rates, is no guarantee that people will buy it — only 13% of American homeowners have such policies, for instance. New Jersey Manufacturer’s Insurance, which has 280,000 homeowner policies (and paid out $241 million in Sandy-related claims), said only 11,000 (or 4%) of them include flood coverage. That percentage didn’t change after Hurricane Sandy, said spokesman Pat Breslin.
Chile Sets an Example
According to Meyer, “If you give people discretion on whether to buy flood insurance, they won’t make the right decision. Even people who have been through hurricanes forget pretty quickly if they weren’t badly affected. You need strong leadership at the very top, and you need very strong building codes. If new nuclear plants are built in Japan, it will have to be to very high standards.”
“If you give people discretion on whether to buy flood insurance, they won’t make the right decision. Even people who have been through hurricanes forget pretty quickly if they weren’t badly affected.” — Robert Meyer
Meyer cites the positive example of Chile, most recently hit with an 8.8-magnitude quake in 2010. According to Bloomberg.com, “Since 1960, when the country suffered a 9.5 magnitude quake, the largest ever recorded, Chile has steadily improved building codes to protect lives and property. In 2010’s temblor, only five commercial buildings designed with the help of structural engineers were destroyed, according to a report by the U.S. Geological Survey.” One building, the $200 million Titanium Tower, incorporated the latest earthquake technology (including shock-absorbing steel dampers) and survived with no structural damage.
Nonetheless, Meyer says it’s impossible to build infrastructure to survive severe, 1,000-year natural disasters, even if the political will existed. “New York City is a great example. It’s sitting right on the water, one hurricane away from a $100 billion disaster. But with the probability of such a storm at 1.0, it’s very difficult to get people to take action. After Sandy, an unused airport was used to store 15,000 storm-damaged cars, and yet people with vehicles or fleets of them took no action to prevent them from getting flooded.”
According to Meyer, the risk management center has responded to that problem by building online simulations that “can realistically give a sense it what it would be like to experience a serious hurricane. It helps people develop options for protective action.” That’s in line with the federal Sandy report, which found that many residents of storm-prone regions are unaware of the risks they face, or how severe the consequences might be.
Looking forward, the case against reoccupying some hard-hit coastal regions of both the U.S. and Japan — despite their high value on many levels — can be compelling. As adopted policy, however, it is fraught with political consequences and strong emotions. Rebuilding efforts will go forward, in both countries, but with greater awareness of risk, and with limits on insurance coverage and the location and design of rebuilt buildings. In some cases, federal safety nets will be gone. Given that, the marketplace is likely to play a major role in determining the future of shoreline communities. | <urn:uuid:16ffef8b-632f-4a3e-b23e-9246a69a9c1d> | CC-MAIN-2017-51 | http://knowledge.wharton.upenn.edu/article/tale-two-storms-rebuilding-u-s-japanese-disasters/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948568283.66/warc/CC-MAIN-20171215095015-20171215115015-00528.warc.gz | en | 0.960825 | 4,419 | 3.140625 | 3 |
If the electron pairs in covalent bonds were donated and shared absolutely evenly there would be no fixed neighborhood charges in ~ a molecule. Return this is true because that diatomic facets such together H2, N2 and also O2, many covalent compounds display some level of regional charge separation, resulting in bond and also / or molecular dipoles. A dipole exists when the centers of optimistic and an adverse charge circulation do not coincide.
You are watching: Are double bonds more polar than single bonds
1. Official Charges
A big local fee separation normally results as soon as a common electron pair is donated unilaterally. The three Kekulé formulas presented here illustrate this condition.
In the formula because that ozone the central oxygen atom has three bonds and also a full positive fee while the ideal hand oxygen has a solitary bond and also is negative charged. The as whole charge the the ozone molecule is because of this zero. Similarly, nitromethane has a positive-charged nitrogen and a negative-charged oxygen, the complete molecular charge again being zero. Finally, azide anion has actually two negative-charged nitrogens and also one positive-charged nitrogen, the complete charge being minus one. In general, because that covalently external inspection atoms having valence covering electron octets, if the variety of covalent bonds come an atom is higher than its regular valence it will carry a confident charge. If the variety of covalent bonds to an atom is less than its regular valence that will carry a negative charge. The formal charge on one atom may additionally be calculate by the adhering to formula:
2. Polar Covalent Bonds
|H2.20Electronegativity Valuesfor part ElementsLi0.98Be1.57B2.04C2.55N3.04O3.44F3.98Na0.90Mg1.31Al1.61Si1.90P2.19S2.58Cl3.16K0.82Ca1.00Ga1.81Ge2.01As2.18Se2.55Br2.96|
Because of your differing nuclear charges, and as a result of shielding by inside electron shells, the different atoms that the periodic table have different affinities for adjacent electrons. The capability of an aspect to entice or host onto electrons is dubbed electronegativity. A rough quantitative scale of electronegativity values was developed by Linus Pauling, and also some of this are offered in the table to the right. A bigger number ~ above this scale signifies a better affinity for electrons. Fluorine has actually the best electronegativity of every the elements, and also the heavier alkali steels such together potassium, rubidium and cesium have the shortest electronegativities. It must be noted that carbon is around in the middle of the electronegativity range, and also is slightly more electronegative than hydrogen.When two different atoms space bonded covalently, the common electrons are attracted come the more electronegative atom the the bond, resulting in a transition of electron thickness toward the an ext electronegative atom. Together a covalent shortcut is polar, and also will have actually a dipole (one finish is positive and the other finish negative). The level of polarity and also the size of the link dipole will certainly be proportional come the distinction in electronegativity that the bonded atoms. Therefore a O–H link is an ext polar than a C–H bond, with the hydrogen atom that the former being much more positive than the hydrogen external inspection to carbon. Likewise, C–Cl and also C–Li bonds are both polar, but the carbon end is positive in the previous and an unfavorable in the latter. The dipolar nature of this bonds is frequently indicated by a partial fee notation (δ+/–) or through an arrowhead pointing to the negative end that the bond.
Although there is a tiny electronegativity difference between carbon and hydrogen, the C–H link is related to as weakly polar in ~ best, and hydrocarbons normally have little molecular dipoles and are considered to it is in non-polar compounds.
The shift of electron thickness in a covalent bond towards the much more electronegative atom or group can be it was observed in number of ways. Because that bonds to hydrogen, mountain is one criterion. If the bonding electron pair moves far from the hydrogen cell nucleus the proton will be more easily transfered come a base (it will be an ext acidic). A to compare of the acidities of methane, water and also hydrofluoric acid is instructive. Methane is essentially non-acidic, due to the fact that the C–H link is almost non-polar. As noted above, the O–H bond of water is polar, and also it is at the very least 25 powers of ten much more acidic 보다 methane. H–F is over 12 powers of ten an ext acidic 보다 water together a an effect of the higher electronegativity distinction in that atoms.Electronegativity differences may be transmitted with connecting covalent bond by one inductive effect. Replacing among the hydrogens that water by a more electronegative atom increases the acidity of the remaining O–H bond. For this reason hydrogen peroxide, HO–O–H, is ten thousand times more acidic than water, and hypochlorous acid, Cl–O–H is one hundred million times more acidic. This inductive carry of polarity tapers off together the number of transmitting bonds increases, and also the existence of more than one highly electronegative atom has actually a accumulation effect. Because that example, trifluoro ethanol, CF3CH2–O–H is about ten thousand times an ext acidic than ethanol, CH3CH2–O–H.
One method in i beg your pardon the shapes of molecules manifest themselves experimentally is through molecular dipole moments. A molecule which has actually one or much more polar covalent binding may have a dipole minute as a result of the collected bond dipoles. In the instance of water, we know that the O-H covalent bond is polar, due to the different electronegativities that hydrogen and oxygen. Since there room two O-H binding in water, their bond dipoles will interact and may result in a molecule dipole which have the right to be measured. The adhering to diagram shows four feasible orientations that the O-H bonds.The link dipoles are colored magenta and the resulting molecular dipole is fancy blue. In the linear configuration (bond edge 180º) the bond dipoles cancel, and the molecular dipole is zero. For various other bond angles (120 come 90º) the molecular dipole would differ in size, being biggest for the 90º configuration. In a comparable manner the configurations of methane (CH4) and also carbon dioxide (CO2) might be deduced from their zero molecular dipole moments. Due to the fact that the bond dipoles have canceled, the configurations of these molecules have to be tetrahedral (or square-planar) and also linear respectively.The case of methane gives insight come other disagreements that have actually been supplied to confirm its tetrahedral configuration. For purposes of discussion we shall consider three other configurations because that CH4, square-planar, square-pyramidal & triangular-pyramidal. Models of these possibilities might be check by clicking Here.Substitution the one hydrogen by a chlorine atom gives a CH3Cl compound. Due to the fact that the tetrahedral, square-planar and square-pyramidal configurations have actually structurally equivalent hydrogen atoms, they would certainly each offer a single substitution product. However, in the trigonal-pyramidal configuration one hydrogen (the apex) is structurally different from the various other three (the pyramid base). Substitution in this situation should provide two different CH3Cl compounds if every the hydrogens react. In the situation of disubstitution, the tetrahedral configuration of methane would result in a solitary CH2Cl2 product, but the various other configurations would offer two different CH2Cl2 compounds. This substitution possibilities are shown in the models.ResonanceKekulé structural formulas are crucial tools for understanding organic aufdercouch.net. However, the frameworks of some compounds and ions cannot be stood for by a single formula. Because that example, sulfur dioxide (SO2) and also nitric mountain (HNO3) may each be explained by two tantamount formulas (equations 1 & 2). Because that clarity the 2 ambiguous bonds come oxygen are given different colors in this formulas.1) sulfur dioxide2) nitric acidIf just one formula for sulfur dioxide to be correct and accurate, climate the twin bond to oxygen would be much shorter and stronger than the single bond. Because experimental evidence indicates the this molecule is bent (bond angle 120º) and has equal length sulfur : oxygen binding (1.432 Å), a single formula is inadequate, and the actual framework resembles an mean of the 2 formulas. This averaging that electron circulation over two or an ext hypothetical contributing structures (canonical forms) to produce a hybrid digital structure is called resonance. 
Likewise, the structure of nitric acid is ideal described together a resonance hybrid of two structures, the dual headed arrow being the unique symbol because that resonance. The above examples represent one too much in the application of resonance. Here, 2 structurally and energetically equivalent digital structures for a secure compound can be written, however no single structure provides precise or also an adequate representation of the true molecule. In instances such as these, the electron delocalization explained by resonance boosts the stability of the molecules, and also compounds or ion incorporating such equipment often present exceptional stability. 3) formaldehydeThe electronic structures of many covalent compounds perform not endure the inadequacy listed above. Thus, fully satisfactory Kekulé formulas may be drawn for water (H2O), methane (CH4) and acetylene C2H2). Nevertheless, the values of resonance are an extremely useful in rationalizing the chemical actions of numerous such compounds. For example, the carbonyl team of formaldehyde (the carbon-oxygen twin bond) reacts readily to give enhancement products. The food of these reactions deserve to be defined by a small contribution the a dipolar resonance contributor, as presented in equation 3. Here, the an initial contributor (on the left) is plainly the finest representation of this molecule unit, due to the fact that there is no fee separation and both the carbon and also oxygen atom have completed valence shell neon-like construction by covalent electron sharing. If the double bond is broken heterolytically, formal charge pairs result, as displayed in the other two structures. The desired charge distribution will have actually the positive charge on the less electronegative atom (carbon) and also the an unfavorable charge ~ above the much more electronegative atom (oxygen). As such the center formula to represent a an ext reasonable and stable framework than the one ~ above the right. The application of resonance come this instance requires a weighted averaging of these canonical structures. The twin bonded structure is related to as the major contributor, the middle structure a minor contributor and the best hand framework a non-contributor. Due to the fact that the middle, charge-separated contributor has actually an electron deficient carbon atom, this defines the propensity of electron donors (nucleophiles) come bond at this site.
The basic principles that the resonance technique may now be summarized. because that a offered compound, a collection of Lewis / Kekulé structures are written, keeping the loved one positions of every the component atoms the same. These are the canonical forms to be considered, and all must have actually the same number of paired and unpaired electrons.The following components are crucial in examining the contribution each of these canonical structures makes to the actual molecule. The number of covalent bond in a structure. (The higher the bonding, the more important and also stable the contributing structure.) Formal fee separation. (Other determinants aside, fee separation decreases the stability and also importance the the contributing structure.) Electronegativity of charge bearing atoms and also charge density. (High charge thickness is destabilizing. Optimistic charge is ideal accommodated on atoms of short electronegativity, and an unfavorable charge ~ above high electronegative atoms.) The security of a resonance hybrid is always greater 보다 the security of any canonical contributor. Consequently, if one canonical form has a much higher stability 보다 all others, the hybrid will carefully resemble that electronically and energetically. This is the case for the carbonyl team (eq.3). The left hand C=O structure has actually much greater full bonding 보다 either charge-separated structure, therefore it explains this functional group rather well. On the various other hand, if 2 or more canonical develops have identical low energy structures, the resonance hybrid will have exceptional stabilization and also unique properties. This is the situation for sulfur dioxide (eq.1) and nitric mountain (eq.2).4) carbon monoxide5) azide anionTo illustrate these principles we shall consider carbon monoxide (eq.4) and azide anion (eq.5). In each situation the many stable canonical type is ~ above the left. Because that carbon monoxide, the extr bonding is a much more important stabilizing variable than the destabilizing fee separation. Furthermore, the double bonded structure has actually an electron deficient carbon atom (valence covering sextet). A similar destabilizing factor is current in the two azide canonical develops on the height row the the bracket (three binding vs. Four bonds in the structure on the much left). The bottom row pair of tantamount structures likewise have 4 bonds, however are destabilized by the high charge thickness on a single nitrogen atom. Consequently, azide anion is finest written as shown on the left.
Another sort of resonance summary is regularly used once referring come the p-d double-bonding in link of third duration elements, particularly phosphorous and also sulfur. In enhancement to sulfuric acid and phosphoric acid, the helpful reagent compounds displayed in the complying with diagram display this dualism. The officially charged framework on the left the each example obeys the octet rule, vice versa, the neutral double-bonded structure on the right calls for overlap with 3d orbitals.
The approximate shapes of these three compounds are established under each. The Cl–S–Cl bond edge in thionyl chloride argues a virtually pure p-orbital bonding, presumably because of the boosted s-orbital personality of the non-bonding electron pair. This agrees v the roughly tetrahedral edge of this grouping in sulfuryl chloride, which go not have such a feature. The S=O and also P=O shortcut lengths in this compounds likewise indicate substantial double bond character.
|all the examples on this web page demonstrate critical restriction that have to be remembered when using resonance: No atoms change their positions within the common structural framework. Just electrons are moved.|
Hydrocarbons having actually a molecular formula CnH2n+2, whereby n is an integer, constitute a relatively unreactive class of compounds called alkanes. The C–C and also C–H bond that consist of alkanes are fairly non-polar and inert to most (but not all) of the reagents supplied by essential chemists. Functional groups are atom or tiny groups of atoms (two come four) that exhibit an intensified characteristic reactivity as soon as treated with specific reagents. A particular functional team will virtually always display its characteristic chemical actions when that is current in a compound. Since of their prestige in knowledge organic aufdercouch.net, functional groups have characteristic names the often carry over in the specify name of individual compounds incorporating specific groups. In the following table the atoms of every functional group are colored red and the characteristics IUPAC nomenclature suffix the denotes part (but not all) functional groups is also colored.Functional team TablesExclusively Carbon Functional groups
Group FormulaClass NameSpecific ExampleIUPAC NameCommon NameAlkeneH2C=CH2EtheneEthyleneAlkyneHC≡CHEthyneAcetyleneAreneC6H6BenzeneBenzene Functional teams with single Bonds to Heteroatoms group FormulaClass NameSpecific ExampleIUPAC NameCommon NameHalideH3C-IIodomethaneMethyl iodideAlcoholCH3CH2OHEthanolEthyl alcoholEtherCH3CH2OCH2CH3Diethyl etherEtherAmineH3C-NH2AminomethaneMethylamineNitro CompoundH3C-NO2Nitromethane ThiolH3C-SHMethanethiolMethyl mercaptanSulfideH3C-S-CH3Dimethyl sulfide Functional teams with multiple Bonds come Heteroatoms group FormulaClass NameSpecific ExampleIUPAC NameCommon NameNitrileH3C-CNEthanenitrileAcetonitrileAldehydeH3CCHOEthanalAcetaldehydeKetoneH3CCOCH3PropanoneAcetoneCarboxylic AcidH3CCO2HEthanoic AcidAcetic acidEsterH3CCO2CH2CH3Ethyl ethanoateEthyl acetateAcid HalideH3CCOClEthanoyl chlorideAcetyl chlorideAmideH3CCON(CH3)2N,N-DimethylethanamideN,N-DimethylacetamideAcid Anhydride(H3CCO)2OEthanoic anhydrideAcetic anhydride
The chemical habits of each of these functional groups constitutes a significant part of the research of organic aufdercouch.net. Plenty of of the functional teams are polar, and also their habits with polar or ionic reagents have the right to be summary by the principle: Opposites Attract.This is a fine known variable in electrostatics and also electromagnetism, and it uses equally well to polar covalent interactions. Since couple of functional groups are ionic in nature, necessary chemists use the state nucleophile and also electrophile more commonly than anionic (negative) and also cationic (positive). The following definitions should be remembered.
|Electrophile: an electron deficient atom, ion or molecule that has an affinity for electrons, and will bond come a nucleophile.Nucleophile: an atom, ion or molecule that has an electron pair that may be donated in bonding to an electrophile.|
|Very nice screens of orbitals might be discovered at the adhering to sites:|
|J. Gutow, Univ. Wisconsin Oshkosh||R. Spinney, Ohio State||M. Winter, Sheffield college|
1. Hybrid OrbitalsIn order to explain the framework of methane (CH4), the 2s and also three 2p orbitals have to be convert to 4 equivalent hybrid atomic orbitals, each having 25% s and 75% ns character, and also designated sp3. These hybrid orbitals have actually a certain orientation, and the four are naturally oriented in a tetrahedral fashion.The hypervalent compounds explained earlier and drawn listed below require 3d-orbital contributions to the bonding hybridization. PCl5 is a trigonal bipyramid produced by sp3d hybridization. The octahedral construction are formed by sp3d2 hybridization. Click the table to watch these shapes.
2. Molecule OrbitalsJust as the valence electron of atom occupy atom orbitals (AO), the mutual electron pairs of covalently bonded atoms might be assumed of as occupying molecular orbitals (MO). It is practically to approximate molecule orbitals by combine or mixing 2 or more atomic orbitals. In general, this mixing of n atom orbitals constantly generates n molecule orbitals. The hydrogen molecule gives a straightforward example the MO formation. In the adhering to diagram, two 1s atom orbitals combine to give a sigma (σ) bonding (low energy) molecular orbital and a second higher energy MO referred to as an antibonding orbital. The bonding MO is populated by 2 electrons of the contrary spin, the result being a covalent bond. The notation used for molecular orbitals parallels that supplied for atom orbitals. Thus, s-orbitals have a spherical symmetry neighboring a single nucleus, conversely, σ-orbitals have actually a cylindrical symmetry and also encompass two (or more) nuclei. In the situation of bonds between second period elements, p-orbitals or hybrid atomic orbitals having actually p-orbital character are used to form molecular orbitals. For example, the sigma molecular orbital that serves come bond 2 fluorine atoms with each other is created by the overlap of p-orbitals (part A below), and also two sp3 hybrid orbitals the carbon may incorporate to provide a comparable sigma orbital. When these bonding orbitals are populated by a pair of electrons, a covalent bond, the sigma link results. Although we have ignored the staying p-orbitals, their inclusion in a molecule orbital treatment does not lead to any added bonding, as might be shown by activating the fluorine correlation chart below. Another form of MO (the π orbital) might be formed from 2 p-orbitals by a lateral overlap, as shown in part A the the following diagram. Because bonds consisting of occupied π-orbitals (pi-bonds) are weaker 보다 sigma bonds, pi-bonding between two atoms occurs only as soon as a sigma bond has already been established. Thus, pi-bonding is generally found only together a component of double and triple covalent bonds. Since carbon atoms affiliated in dual bonds have only three bonding partners, castle require only three hybrid orbitals to add to 3 sigma bonds. A mixing of the 2s-orbital with two of the 2p orbitals offers three sp2 hybrid orbitals, leaving among the p-orbitals unused. 2 sp2 hybridized carbon atoms space then joined with each other by sigma and pi-bonds (a twin bond), as displayed in component B.
The way in which atom orbitals overlap to kind molecular orbitals is generally illustrated by a correlation diagram. Two instances of together diagrams because that the basic diatomic aspects F2 and also N2 will certainly be drawn over when the suitable button is clicked. The 1s and also 2s atomic orbitals perform not provide any all at once bonding, since orbital overlap is minimal, and also the result sigma bonding and antibonding materials would cancel. In both these situations three 2p atomic orbitals integrate to form a sigma and also two pi-molecular orbitals, each together a bonding and antibonding pair. The in its entirety bonding order counts on the variety of antibonding orbitals that space occupied.
See more: How Many Yard In An Acre - How Many Yards Are In An Acre
The subtle readjust in the energy of the σ2p bonding orbital, family member to the 2 degenerate π-bonding orbitals, is as result of s-p hybridization that is unimportant to the present discussion. An impressive instance of the advantages offered by the molecule orbital strategy to bonding is uncovered in the oxygen molecule. A molecular orbit diagram because that oxygen may be seen by clicking Here. | <urn:uuid:58db0a35-3e9d-4a8d-83c8-7ed916566141> | CC-MAIN-2022-33 | https://aufdercouch.net/are-double-bonds-more-polar-than-single-bonds/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00096.warc.gz | en | 0.907629 | 4,939 | 3.34375 | 3 |
In sociology, taste or palate is an individual or a demographic group's subjective preferences of dietary, design, cultural and/or aesthetic patterns. Taste manifests socially via distinctions in consumer choices such as delicacies/beverages, fashions, music, etiquettes, goods, styles of artwork, and other related cultural activities. The social inquiry of taste is about the arbitrary human ability to judge what is considered beautiful, good, proper and valuable.
Social and cultural phenomena concerning taste are closely associated to social relations and dynamics between people. The concept of social taste is therefore rarely separated from its accompanying sociological concepts. An understanding of taste as something that is expressed in actions between people helps to perceive many social phenomena that would otherwise be inconceivable.
Aesthetic preferences and attendance to various cultural events are associated with education and social origin. Different socioeconomic groups are likely to have different tastes. Social class is one of the prominent factors structuring taste.
The concept of aesthetics has been the interest of philosophers such as Plato, Hume and Kant, who understood aesthetics as something pure and searched the essence of beauty, or, the ontology of aesthetics. But it was not before the beginning of the cultural sociology of early 19th century that the question was problematized in its social context, which took the differences and changes in historical view as an important process of aesthetical thought. Although Immanuel Kant's Critique of Judgement (1790) did formulate a non-relativistic idea of aesthetical universality, where both personal pleasure and pure beauty coexisted, it was concepts such as class taste that began the attempt to find essentially sociological answers to the problem of taste and aesthetics. Metaphysical or spiritual interpretations of common aesthetical values have shifted towards locating social groups that form the contemporary artistic taste or fashion.
In his aesthetic philosophy, Kant denies any standard of a good taste, which would be the taste of the majority or any social group. For Kant, as discussed in his book titled the Critique of Judgment, beauty is not a property of any object, but an aesthetic judgement based on a subjective feeling. He claims that a genuine good taste does exist, though it could not be empirically identified. Good taste cannot be found in any standards or generalizations, and the validity of a judgement is not the general view of the majority or some specific social group. Taste is both personal and beyond reasoning, and therefore disputing over matters of taste never reaches any universality. Kant stresses that our preferences, even on generally liked things, do not justify our judgements.
Every judgement of taste, according to Kant, presumes the existence of a sensus communis, a consensus of taste. This non-existent consensus is an idea that both enables judgements of taste and is constituted by a somewhat conceptual cultivation of taste. A judgement does not take for granted that everyone agrees with it, but it proposes the community to share the experience. If the statement would not be addressed to this community, it is not a genuine subjective judgement. Kant's idea of good taste excludes fashion, which can be understood only in its empirical form, and has no connection with the harmony of ideal consensus. There is a proposition of a universal communal voice in judgements of taste, which calls for a shared feeling among the others.
Bourdieu argued against the Kantian view of pure aesthetics, stating that the legitimate taste of the society is the taste of the ruling class. This position also rejects the idea of genuine good taste, as the legitimate taste is merely a class taste. This idea was also proposed by Simmel, who noted that the upper classes abandon fashions as they are adopted by lower ones.
Fashion in a Kantian sense is an aesthetic phenomenon and source of pleasure. For Kant, the function of fashion was merely a means of social distinction, and he excluded fashion from pure aesthetics because of its content's arbitrary nature. Simmel, following Kantian thought, recognises the usefulness of fashionable objects in its social context. For him, the function lies in the whole fashion pattern, and cannot be attributed to any single object. Fashion, for Simmel, is a tool of individuation, social distinction, and even class distinction, which are neither utilitarian or aesthetical criteria. Still, both Kant and Simmel agreed that staying out of fashion would be pointless.
Taste and consumption are closely linked together; taste as a preference of certain types of clothing, food and other commodities directly affects the consumer choices at the market. The causal link between taste and consumption is however more complicated than a direct chain of events in which taste creates demand that, in turn, creates supply. There are many scientific approaches to taste, specifically within the fields of economics, psychology and sociology.
Definition of consumption in its classical economical context can be summed up in the saying "supply creates its own demand". In other words, consumption is created by and equates itself to production of market goods. This definition, however, is not adequate to accommodate any theory that tries to describe the link between taste and consumption.
A more complex economic model for taste and consumption was proposed by economist Thorstein Veblen. He challenged the simple conception of man as plain consumer of his utmost necessities, and suggested that the study of the formation of tastes and consumption patterns was essential for economics. Veblen did not disregard the importance of the demand for an economic system, but rather insisted on rejection of the principle of utility-maximization. The classical economics conception of supply and demand must be therefore extended to accommodate a type of social interaction that is not immanent in the economics paradigm.
Veblen understood man as a creature with a strong instinct to emulate others to survive. As social status is in many cases at least partially based on or represented by one's property, men tend to try and match their acquisitions with those who are higher in a social hierarchy. In terms of taste and modern consumption this means that taste forms in a process of emulation: people emulate each other, which creates certain habits and preferences, which in turn contributes to consumption of certain preferred goods.
Veblen's main argument concerned what he called leisure class, and it explicates the mechanism between taste, acquisition and consumption. He took his thesis of taste as an economic factor and merged it with the neoclassical hypothesis of nonsatiety, which states that no man can ever be satisfied with his fortune. Hence, those who can afford luxuries are bound to be in a better social situation than others, because acquisition of luxuries by definition grants a good social status. This creates a demand for certain leisure goods, that are not necessities, but that, because of the current taste of the most well off, become wanted commodities.
In different periods of time, consumption and its societal functions have varied. In 14th century England consumption had significant political element. By creating an expensive luxurious aristocratic taste the Monarchy could legitimize itself in high status, and, according to the mechanism of taste and consumption, by mimicking the taste of the Royal the nobility competed for high social position. The aristocratic scheme of consumption came to an end, when industrialization made the rotation of commodities faster and prices lower, and the luxuries of the previous times became less and less indicator of social status. As production and consumption of commodities became a scale bigger, people could afford to choose from different commodities. This provided for fashion to be created in market.
The era of mass consumption marks yet another new kind of consumption and taste pattern. Beginning from the 18th century, this period can be characterized by increase in consumption and birth of fashion, that cannot be accurately explained only by social status. More than establishing their class, people acquired goods just to consume hedonistically. This means, that the consumer is never satisfied, but constantly seeks out novelties and tries to satisfy insatiable urge to consume.
In above taste has been seen as something that presupposes consumption, as something that exists before consumer choices. In other words, taste is seen as an attribute or property of a consumer or a social group. Alternative view critical to the attributative taste suggests that taste doesn't exist in itself as an attribute or a property, but instead is an activity in itself. This kind of pragmatic conception of taste derives its critical momentum from the fact that individual tastes can not be observed in themselves, but rather that only physical acts can. Building on Hennion, Arsel and Bean suggest a practice-theory approach to understanding taste.
Consumption, especially mass consumerism has been criticized from various philosophical, cultural and political directions. Consumption has been described as overly conspicuous or environmentally untenable, and also a sign of bad taste.
Many critics have voiced their opinion against the growing influence of mass culture, fearing the decline in global divergence of culture. For example, it is claimed that the convenience of getting the same hamburger at fast food places like McDonald's can reduce consumer interest in traditional culinary experiences.
The Western culture of consumerism has been criticized[according to whom?] for its uniformity. The critics argue, that while the culture industry promises consumers new experiences and adventures, people in fact are fed the same pattern of swift but temporary fulfillment. Here taste, it is suggested, is used as a means of repression; as something that is given from above, or from the industry of the mass culture, to people who are devoid of contentual and extensive ideologies and of will. This critique insists that the popular Western culture does not fill people with aesthetic and cultural satisfaction.
Arguably, the question of taste is in many ways related to the underlying social divisions of community. There is likely to be variation between groups of different socioeconomic status in preferences for cultural practices and goods, to the extent that it is often possible to identify particular types of class taste. Also, within many theories concerning taste, class dynamics is understood as one of the principal mechanisms structuring taste and the ideas of sophistication and vulgarity.
Imitation and distinction
Sociologists suggest that people disclose much about their positions in social hierarchies by how their everyday choices reveal their tastes. That is preference for certain consumer goods, appearances, manners etc. may signal status because it is perceived as part of the lifestyle of high-status groups. It is further argued that patterns of taste are determined by class structure because people may also strategically employ distinctions of taste as resources in maintaining and redefining their social status.
When taste is explained on account of its functions for status competition, interpretations are often built on the model of social emulation. It is assumed, firstly, that people desire to distinguish themselves from those with lower status in the social hierarchy and, secondly, that people will imitate those in higher positions.
The German sociologist Georg Simmel (1858–1918) examined the phenomenon of fashion - as manifested in rapidly changing patterns of taste. According to Simmel, fashion is a vehicle for strengthening the unity of the social classes and for making them distinct. Members of the upper classes tend to signal their superiority, and they act as the initiators of new trends. But upper-class taste is soon imitated by the middle classes. As goods, appearances, manners etc. conceived as high-class status markers become popular enough, they lose their function to differentiate. So the upper classes have to originate yet more stylistic innovations.
The particular taste of the upper classes has been further analyzed by an economist Thorsten Veblen (1857–1929). He argues that distancing oneself from hardships of productive labour has always been the conclusive sign of high social status. Hence, upper-class taste is not defined by things regarded as necessary or useful but by those that are the opposite. To demonstrate non-productivity, members of the so-called leisure class waste conspicuously both time and goods. The lower social stratum try their best to imitate the non-productive lifestyle of the upper classes, even though they do not really have means for catching up.
One of the most widely referenced theories of class-based tastes was coined by the French sociologist Pierre Bourdieu (1930–2002), who asserted that tastes of social classes are structured on basis of assessments concerning possibilities and constraints of social action. Some choices are not equally possible for everyone. The constraints are not simply because members of different classes have varying amounts of economic resources at their disposal. Bourdieu argued that there are also significant non-economic resources and their distribution effects social stratification and inequality. One such resource is cultural capital, which is acquired mainly through education and social origin. It consists of accumulated knowledge and competence for making cultural distinctions. To possess cultural capital is a potential advantage for social action, providing access to education credentials, occupations and social affiliation.
By assessing relationships between consumption patterns and the distribution of economic and cultural capital, Bourdieu identified distinct class tastes within French society of the 1960s. Upper-class taste is characterized by refined and subtle distinctions, and it places intrinsic value on aesthetic experience. This particular kind of taste was appreciated as the legitimate basis for "good taste" in French society, acknowledged by the other classes as well. Consequently, members of the middle classes appeared to practice "cultural goodwill" in emulating the high-class manners and lifestyles. The taste of the middle classes is not defined as much by authentic appreciation for aesthetics as by a desire to compete in social status. In contrast, the popular taste of the working classes is defined by an imperative for "choosing the necessary". Not much importance is placed on aesthetics. This may be because of actual material deprivation excluding anything but the necessary but, also, because of a habit, formed by collective class experiences. Class related tastes become manifest in different cultural domains such as food, clothing, arts, humor, and even religion.
Criticism of class-based theories
Theories of taste which build on the ideas of status competition and social emulation have been criticized from various standpoints. Firstly, it has been suggested that it is not reasonable to trace all social action back to status competition; while marking and claiming status are strong incentives, people also have other motivations as well. Secondly, it has been argued that it is not plausible to assume that tastes and lifestyles are always diffusing downwards from the upper classes, and that in some situations the diffusion of tastes may move in the opposite direction.
It has also been argued that the association between social class and taste is no longer quite as strong as it used to be. For instance, theorists of the Frankfurt School have claimed that the diffusion of mass cultural products has obscured class differences in capitalist societies. Products consumed passively by members of different social classes are virtually all the same, with only superficial differences regarding brand and genre. Other criticism has concentrated on the declassifying effects of postmodern culture; that consumer tastes are now less influenced by traditional social structures, and they engage in play with free-floating signifiers to perpetually redefine themselves with whatever they find pleasurable.
Bad taste (also poor taste or even vulgar) is generally a title given to any object or idea that does not fall within the moralizing person's idea of the normal social standards of the time or area. Varying from society to society, and from time to time, bad taste is generally thought of as a negative thing, but that also changes with each individual.
A contemporary view—a retrospective review of literature—is that "a good deal of dramatic verse written during the Elizabethan and Jacobean periods is in poor taste because it is bombast [high-sounding language with little meaning]".
- Outwaite & Bottonmore 1996, p. 662
- Gronow 1997, pp. 11, 87
- Gronow 1997, pp. 88-90
- Gronow 1997, p. 83
- Ekelund & Hébert 1990, pp. 154-157
- Ekelund & Hébert 1990, p. 462
- Ekelund & Hébert 1990, p. 463
- McCracken 1990
- Bragg & 25 October 2007, Taste harvnb error: no target: CITEREFBragg25_October_2007 (help)
- Gronow 1997, pp. 78–79
- Campbell 1989
- cf. Hennion 2007
- Arsel & Bean 2013
- Ritzer 1997
- Adorno & Horkheimer 1982, pp. 120–167.
- Bourdieu 1984
- Slater 1997, pp. 153, 156
- Slater 1997, p. 156
- Simmel 1957
- Slater 1997, pp. 154–155
- Bourdieu 1986
- Slater 1997, pp. 159–163
- Friedman and Kuipers 2013
- Koehrsen 2018
- Slater 1997, pp. 157–158
- Holt 1998, p. 21
- Theodore A. Gracyk, "Having Bad Taste", The British Journal of Aesthetics, Volume 30, Issue 2, 1 April 1990, pp. 117–131, https://doi.org/10.1093/bjaesthetics/30.2.117 Published: 1 April 1990.
- M. H. Abrams, "Vulgarity. Dictionary of Literary Terms< and Literary Theory (1977),Penguin, 1998, p.976.
- Arsel, Zeynep; Jonathan Bean (2013). "Taste Regimes and Market-Mediated Practice". Journal of Consumer Research. 39 (5): 899–917. doi:10.1086/666595.
- Bourdieu, Pierre (1984). Distinction: A Social Critique of the Judgement of Taste. London: Routledge. ISBN 0-415-04546-0.
- Bourdieu, Pierre (1986). "The Forms of Capital". In Richardson, John G (ed.). Handbook of Theory and Research for the Sociology of Education. New York: Greenwood Press. ISBN 0-313-23529-5.
- Bragg, Melvyn (25 October 2007), Taste, In Our Time, BBC Radio 4, retrieved 18 September 2010
- Ekelund, Jr., Robert B.; Hébert, Robert F. (1990). A History of Economic Theory and Method. 3rd ed. New York: McGraw-Hill Publishing Company. ISBN 0-07-019416-5.
- Gronow, Jukka (1997). Sociology of Taste. London: Routledge. ISBN 0-415-13294-0.
- Friedman, Sam; Giselinde Kuipers (2013). "The divisive power of humour: Comedy, taste and symbolic boundaries" (PDF). Cultural Sociology. 7 (2): 179–195. doi:10.1177/1749975513477405. S2CID 53362319.
- Hennion, Antoine (2007). "Those Things That Hold Us Together: Taste and Sociology." Cultural Sociology, Vol. 1, No. 1, 97-114. London: Sage.
- Holt, Douglas B. (1998). "Does Cultural Capital Structure American Consumption?" The Journal of Consumer Research, Vol. 25, No. 1 (Jun., 1998), pp. 1-25.
- Horkheimer, Max; Adorno, Theodor W (1982). Dialectic of the Enlightenment. New York: The Continuum publishing Corporation. ISBN 0-8264-0093-0.
- Koehrsen, Jens (2018). "Religious Tastes and Styles as Markers of Class Belonging" (PDF). Sociology. doi:10.1177/0038038517722288. S2CID 149369482.
- Outwaite, William; Bottonmore, Tom (1996). The Blackwell Dictionary of Twentieth-Century Social Thought. Oxford: Blackwell Publishers.
- Simmel, Georg (1957). "Fashion". The American Journal of Sociology, Vol. 62, No. 6 (May, 1957), pp. 541-558.
- Slater, Don (1997). Consumer Culture and Modernity. Cambridge: Polity Press. ISBN 978-0-7456-0304-9.
- Stern, Jane; Michael Stern (1990). The Encyclopedia of Bad Taste. New York: Harper Collins. ISBN 0-06-016470-0.
- Vercelloni, Luca (2016). The Invention of Taste. A Cultural Account of Desire, Delight and Disgust in Fashion, Food and Art. London: Bloomsbury. ISBN 978-1-4742-7360-2.
- Aesthetic Taste, Internet Encyclopedia of Philosophy | <urn:uuid:3869bdb2-a83b-4a76-b128-0f4e50cdbd09> | CC-MAIN-2022-33 | https://en.wikipedia.org/wiki/Taste_(sociology) | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572221.38/warc/CC-MAIN-20220816060335-20220816090335-00095.warc.gz | en | 0.930378 | 4,398 | 3.5625 | 4 |
Corn Amylase: Improving the Efficiency and Environmental Footprint of Corn to Ethanol through Plant Biotechnology
John M. Urbanchuk and Daniel J. Kowalski
Bruce Dale and Seungdo Kim
Michigan State University
Treatment of starch-based grain with an alpha-amylase enzyme is essential to convert available starch to fermentable sugars in the production of ethanol. In an effort to improve the efficiencies of corn-based ethanol production, Syngenta has developed a new variety of corn that expresses alpha-amylase directly in the seed endosperm. This technology represents a novel approach to improving ethanol production in a way that can be integrated smoothly into the existing infrastructure. Between October 2007 and December 2008, Syngenta, in collaboration with Western Plains Energy, LLC, of Oakley, Kansas, conducted a commercial-scale trial of Corn Amylase. The results of this trial confirmed many of the potential benefits identified in laboratory trials, which include significant reductions in the amount of natural gas, electricity, water, and microbial alpha-amylase required to produce a gallon of ethanol. These savings are realized through the unique characteristics of Corn Amylase that enable ethanol producers to increase throughput at the plant without the typical tradeoff of losing conversion yield. Corn Amylase, therefore, will reduce the demand for natural resources, the consumption of fossil fuels, and the emission of greenhouse gases. Corn Amylase will also reduce utility costs at the plant and improve the energy balance (compared to ethanol produced from conventional corn).
Key words: alpha-amylase, corn, energy balance, ethanol, greenhouse gas, throughput, trial.

Introduction
Over the past 18 months, biofuels made from food crops such as corn have received considerable attention as to whether they provide environmental benefits greater than the petroleum-based fuels they are intended to replace. The net energy balance and carbon footprint are metrics that have been widely used to debate the viability of corn-based ethanol production. The energy balance for corn-based ethanol is calculated by dividing the energy value (BTU) in a gallon of ethanol by the fossil fuel energy used to produce that gallon of ethanol. Fossil fuel consumption includes all farming, transportation, and manufacturing activities. Depending on the methodology used, the net energy balance for corn to ethanol has been reported to be in the range from 0.54 to 2.10 (Wang, 2005).1
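To make the arithmetic behind this metric concrete, the short sketch below computes the net energy balance as the ratio just described. It is illustrative only: the ethanol heating value is an approximate published figure, and the fossil-energy inputs are hypothetical values chosen simply to reproduce the endpoints of the 0.54 to 2.10 range cited above; none of the numbers come from the trial discussed in this article.

```python
# Illustrative sketch of the net energy balance (NEB) calculation described
# above: the energy content of a gallon of ethanol divided by the fossil
# energy used to produce that gallon. All fossil-energy inputs are hypothetical.

ETHANOL_LHV_BTU_PER_GAL = 76_330  # approximate lower heating value of ethanol


def net_energy_balance(fossil_btu_per_gal: float) -> float:
    """Return the ratio of ethanol energy output to fossil energy input."""
    return ETHANOL_LHV_BTU_PER_GAL / fossil_btu_per_gal


# Hypothetical fossil inputs (farming + transport + processing), in BTU per gallon:
for fossil_input in (141_000, 76_330, 36_300):
    neb = net_energy_balance(fossil_input)
    # NEB < 1.0 means more fossil energy goes in than ethanol energy comes out.
    print(f"fossil input {fossil_input:>7,} BTU/gal  ->  NEB = {neb:.2f}")
```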
Researchers have also found varying results from analyzing the carbon footprint of corn-based ethanol. When compared with gasoline, ethanol from corn has been found to reduce greenhouse gas (GHG) emissions by as much as 60%, while others have reported that it may increase carbon emissions over gasoline by as much as 20% (Wang, 2005). Most recently, two articles published in Science (Fargione, Hill, Tilman, Polasky, & Hawthorne, 2008; Searchinger et al., 2008) raised the importance of considering land-use changes when evaluating greenhouse gas improvements.
While the ranges are broad, the life cycle analyses (LCA) published in Science—and referred to above—principally reflect the agricultural (feedstock) phase of biofuel production and do not account for the continuous improvement in efficiencies at the ethanol plant. The biofuel industry has made a number of advancements in energy and water efficiency and ethanol yields over the past several years (Wu, 2008). Corn Amylase (CA) represents another approach to improving upon the efficiency, cost, and environmental footprint of biofuels. Consequently, this article reviews the potential economic and environmental benefits of CA on the production of ethanol from corn and sorghum.
Syngenta has developed a corn variety that expresses a thermostable alpha-amylase enzyme within the grain. The corn-expressed enzyme has characteristics suitable for the starch processing step of dry-grind ethanol production. Ideally amylases for this industry should work at high temperatures and have low calcium requirements. Syngenta’s amylase enzyme expressed in Event 3272 (i.e., Corn Amylase) matches these criteria and is expected to replace the microbially produced alpha-amylase as an external input for ethanol production. Syngenta has concluded its consultation on the food and feed safety of amylase corn and is awaiting regulatory approval from the USDA. Syngenta has also applied for import clearances from a number of countries that purchase grain and distiller’s grains from the United States, such as Japan, Mexico, and Canada.
From October 2007 through December 2008, Syngenta collaborated with Western Plains Energy, LLC (WPE) of Oakley, Kansas, to evaluate the processing characteristics of CA. WPE has a plant capacity of roughly 40 million gallons per year and uses a combined feedstock of corn and/or sorghum for ethanol production. Numerous experiments were performed during the 14 months of evaluation. The data presented here were collected during a 31-day trial conducted in August 2008.
WPE mixed CA corn with conventional corn and sorghum at various ratios to assess conversion rates and throughput. The trial results were then used to simulate the effects of CA on a typical dry-grind ethanol plant by using the USDA-Eastern Regional Research Center (ERRC) ethanol process model (Kwiatkowski, McAloon, Taylor, & Johnston, 2005) and an ethanol production cost model developed at LECG. The USDA-ERRC model was utilized to determine the impact of increasing the rate of throughput, which is accomplished primarily by increasing the solids content in fermentation. Cost implications of utility savings experienced during the trials were calculated through the LECG model.
The energy balance and greenhouse gas emission results described below are based upon the principal findings from the 31-day trial and the accompanying simulations.
The impact of CA corn on GHG emissions was estimated using Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) Model (GREET, 2008). While several scenarios were modeled to gain an understanding of the product’s potential, three scenarios were identified as the best indicators of CA corn’s impact, based on the trials. These three scenarios include: (1) a conventional corn baseline, (2) feedstock containing 25% CA with increased throughput,2 and (3) feedstock containing 25% CA with increased throughput, in conjunction with Syngenta’s recommended agronomic practices for no-till production (Syngenta unpublished data). The three scenarios help define (based on the timeframe in which the study was conducted) the potential benefits of CA corn and Syngenta’s grower recommendations.
The Renewable Fuel Standard (RFS) established in the 2007 Energy Independence and Security Act specifies that the use of first-generation ethanol (mostly from corn) will reach its peak in 2015. For this reason, 2015 was selected as the base year for all modeling scenarios. A second notable parameter of the analysis is that the supply chain was shortened from the default GREET ‘Well (farm) to Pump’ analysis to a more appropriate ‘Farm to Ethanol Plant’ analysis. Rather than include the transportation, distribution, and storage of finished ethanol, this analysis encompasses all activities from the farm to the ethanol plant. This change reflects the realization that all differences between conventional No. 2 yellow corn and CA corn are captured through the production phase of ethanol. Extending the analysis to the fuel pump dilutes the overall impact of CA corn in reducing GHGs by including distribution impacts that are not unique to or affected by CA corn.
The baseline scenario takes into account all activities and processes that contribute to the production of ethanol from corn at a dry-grind manufacturing facility. This includes all upstream energy use for agriculture as well as for the discovery, extraction, processing, and transportation of the necessary fossil fuels. The simulation was run for the year 2015, assuming that dry-mill plants make up 88% of production (wet mill 12%), and 80% of dry-mill thermal energy is produced from natural gas (20% from coal). Several changes were made to the GREET version 1.8 model to reflect the most recent plant efficiency data as reported by May Wu of Argonne National Laboratory in March 2008. In addition, because alpha-amylase is not factored into GREET as an ethanol production input (M. Wu, personal communication, 2008), the LCA impact from the production and transportation of microbial alpha-amylase was added to the baseline. The alpha-amylase LCA reflects how alpha-amylase is sourced by WPE at its Oakley plant when CA corn is not used. Factors from GREET were used to calculate the impact of transporting the amylase from the supplier, and previous work by Nielsen, Oxenbøll, and Wenzel (2007) was used to capture the impact of producing microbial alpha-amylase.
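The bookkeeping described in this paragraph can be sketched as a simple adjustment to the baseline GHG intensity. The function below is illustrative only; it is not the GREET model interface, and every numeric value in the example call is a placeholder rather than a figure from the study or from Nielsen et al. (2007).

```python
# Illustrative accounting for an input GREET omits: the production and
# transport of microbial alpha-amylase is added on top of the GREET baseline.


def baseline_with_enzyme(
    greet_intensity_g_per_gal: float,        # farm-to-plant GHG intensity from GREET
    enzyme_dose_g_per_gal: float,            # enzyme used per gallon of ethanol
    production_g_co2e_per_g_enzyme: float,   # cradle-to-gate enzyme footprint
    transport_g_co2e_per_g_enzyme: float,    # supplier-to-plant transport
) -> float:
    """Baseline GHG intensity (g CO2e/gal) including the enzyme contribution."""
    enzyme_term = enzyme_dose_g_per_gal * (
        production_g_co2e_per_g_enzyme + transport_g_co2e_per_g_enzyme
    )
    return greet_intensity_g_per_gal + enzyme_term


# Placeholder inputs, for illustration only:
adjusted = baseline_with_enzyme(1_000.0, 0.5, 5.0, 0.2)
print(f"Adjusted baseline intensity: {adjusted:.1f} g CO2e per gallon")
```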
Scenario 1 simulates the replacement of 25% of a plant’s conventional corn feedstock with CA corn and the increase of plant throughput by 6%. An increased solids/liquid ratio provided by CA decreases water use and will increase throughput. Throughput is a measure of the amount of corn that can be processed into ethanol during a production run. The ability to increase throughput provides the ethanol plant manager with a greater degree of flexibility. When using CA corn, the manager will have the ability to increase output by processing more corn than would be possible with conventional No. 2 yellow corn. Alternatively, the manager could reduce the plant’s operating rate and still maintain the same quantity of ethanol as with conventional No. 2 yellow corn. A reduced operating rate would lower stress on machinery and equipment and potentially reduce repair and maintenance costs. A 6% increase in throughput is consistent with the anticipated reduction in water use and increase in solids during fermentation. The results highlight the operational impacts due to these changes, and the associated impact to energy balance and GHG emissions. Changes implemented in this scenario include:
Scenario 2 includes all the changes from Scenario 1 as well as the impact of shifting from GREET’s baseline agronomic practices—based on the US average of all tillage practices (M. Wu & M. Wang, personal communication)—to the Best Management Practice (BMP) recommendations of Syngenta for use on CA corn in the Midwest, which include no-till. Corn Amylase will be marketed—and is expected to be grown—in the Midwest Region where 10 states account for 83% of corn production and more than 90% of ethanol production. Therefore, projected corn yield and chemical input rates for the Midwest were selected for use in Scenario 2, in contrast to the national averages used in the baseline. No-till efficiencies were calculated from 2006 University of Illinois data, which compares typical tillage systems for corn operations with no-till systems (University of Illinois, 2006).
CA will be commercialized as a specialty grain and, thus, will be grown under contract by farmers within a closed-loop system. Participating farmers will be encouraged to implement sustainable agronomic practices, such as no-till and the use of cover crops, that will help increase soil carbon sequestration and nitrogen fixation and reduce fuel and fertilizer use. For example, the amylase trait will be stacked with those for herbicide tolerance, an enabler of no-till, as well as insect resistance to increase yield. Furthermore, Syngenta maintains an agricultural product portfolio of herbicides, pesticides, and fungicides that can enable CA growers to realize the highest yields with the lowest environmental impact possible. The grower contract will also ensure the ethanol plant receives a consistent and steady supply of quality CA grain.
It is important to note that the agronomic practices are not related directly to the properties of CA but reflect differences in input use (fertilizer and chemical application) and field practice recommendations of Syngenta relative to GREET defaults. When compared with GREET, Syngenta recommends lower application rates of nitrogen and insecticides and higher rates of P205, K20, and herbicides (Cirrus Partners, 2008). When this is considered, model changes due to agronomic recommendations include:
The increases in herbicide and fertilizer use are more than offset by reductions in other inputs and savings gained from adjusted field practices. Therefore, Syngenta’s agronomic recommendations have a significant beneficial impact.
Scenario 1 reflects the energy savings realized from replacing 25% of a plant’s conventional corn feedstock with CA corn, and Scenario 2 adds the net impacts of reduced fertilizer, pesticide, and diesel fuel use proscribed by Syngenta’s agronomic BMP. The replacement of 25% of a plant’s corn requirements with CA corn results in a 6.6% improvement in energy balance on a per-gallon basis when compared to the use of conventional corn. The improvement in energy balance under this scenario is provided by a combination of reduced electricity and natural gas use and the elimination of the life cycle impacts associated with the production and transportation of microbial alpha-amylase. When the Syngenta BMP agronomic recommendations (Scenario 2) are included, the net energy balance improves by 10.7% per gallon compared to the baseline.
The use of CA corn also results in substantial reductions in greenhouse gas emissions. The replacement of 25% of a plant’s corn requirements with CA corn (Scenario 1) is shown to result in a reduction of nearly ½ pound of CO2-equivalent per gallon of ethanol, or 4.9%. Of the three gases that are classified as having global warming potential, carbon dioxide has far and away the greatest impact on GHG emissions. The GREET model results indicate that the use of CA corn will reduce CO2 emissions by nearly 6%. The incorporation of Syngenta’s BMP agronomics with the replacement of 25% of a plant’s corn requirements with CA corn reduces total GHG emissions by 1.06 pounds of CO2-equivalent per gallon of ethanol, or 11%, relative to baseline levels. The results of this analysis are summarized in Table 1.
Table 1. GREET results of Corn Amylase simulations.
Reduced Water Use
The use of CA can reduce the amount of water required to produce a gallon of ethanol by 7.7%. This reduction is directly related to the 5% increase in solids content during fermentation.3 If CA achieves a 30% market share of US dry-grind plants by 2015, the 7.7% reduction would save 870 million gallons of water annually, which would provide every man, woman, and child in the United States with an additional 45, 8-ounce glasses of water per year or two glasses of water for each of the world’s 6.8 billion people.
Reduced Electricity Requirements
Trial results reveal that on average, an ethanol plant with the capacity to produce 100 million gallons per year (MGY) would reduce electricity use by 1.3 million kilowatt-hours (kWh) and save $84,000 by using CA corn. Using US Department of Energy’s Energy Information Administration (US DOE EIA) data on household electricity use as a base, if Event 3272 corn achieves a 30% market share by 2015, this reduction is equivalent to 51.2 million kWh of electricity, or enough power to light more than 54,000 homes for a full year (US DOE, EIA, n.d.).
Reduced Natural Gas Use
Typically, natural gas is the second largest component of production cost for an ethanol producer. Syngenta trial results show that increased throughput capabilities gained from using CA corn can reduce natural gas use on a per-gallon basis by 8.9% for the period of time tested. Based on these results, a 100 MGY plant using CA corn could reduce its annual natural gas consumption by approximately 244 billion BTUs, at a savings of about $1.6 million per year. If CA corn achieves a 30% market share by 2015, this translates into a savings of approximately 9.6 quadrillion BTUs and nearly $61 million. This quantity of natural gas would be sufficient to heat more than 175,000 homes for an entire year (US DOE EIA, 2008, Table 1).
Potential for Increased Ethanol Yields
While CA trials have shown improvements in energy balance and GHG emissions due to throughput efficiencies, preliminary evaluations conducted by Syngenta in a pilot-scale laboratory and at WPE indicate that Event 3272 corn also may provide an improvement in starch conversion that could result in higher ethanol yields. The opportunity for an ethanol plant manager to manage production for improved ethanol yield could reduce the amount of corn required to produce a given amount of ethanol. Table 2 illustrates the potential energy and GHG impact of an assumed modest 2% increase in ethanol yields when combined with the throughput benefits defined above.
Table 2. Impact of a potential 2% yield increase from CA corn.
The replacement of 25% of a plant’s corn requirements with CA corn would result in a 6.9% improvement in energy balance when compared to the use of conventional corn if CA corn provided a 2% ethanol yield increase. When the Syngenta BMP agronomic recommendations (Scenario 2) are included, the net energy balance would improve by 11%, compared to the baseline.
Improved ethanol yields also would result in substantial reductions in greenhouse gas emissions. The replacement of 25% of a plant’s corn requirements with CA corn that provided a 2% yield improvement would reduce GHG emissions on a CO2-equivalent per-gallon basis of 5.4%. The incorporation of Syngenta’s BMP agronomics with the replacement of 25% of a plant’s corn requirements with CA corn would reduce total GHG emissions by 1.1 pounds of CO2-equivalent per gallon of ethanol for a 2% yield improvement. An improvement in ethanol yields means that an ethanol plant would require fewer bushels of corn to produce the same amount of ethanol. If CA corn provided a 2% ethanol yield improvement at a 30% market share, the corn requirement could be reduced by 27.6 million bushels at the 2007 average yield of 151 bu/acre or the equivalent of 166,500 acres (US Department of Agriculture, National Agricultural Statistics Service, 2009).
It is important to note that the potential for ethanol yield increases from CA corn are preliminary and must be validated by additional plant trials.
Over the past 18 months, corn ethanol has received a lot of attention and has been blamed for increases in the cost of oil (Johnson, 2008), food, and grain (Mitchell, 2008); food shortages, riots, and trade restrictions (Martin, 2008); land-use changes in developing nations (Searchinger et al., 2008); the loss of biodiversity in the Amazon (Keeney & Nanninga, 2008); and increases in global warming (Fargione et al., 2008). It is well recognized that a number of other issues have contributed to these events—not least of which is the rise in oil and fuel prices to record levels—increased meat consumption, drought, investor speculation, etc. Nonetheless, the results seen with CA indicate the potential for technology to help ameliorate many concerns with the use of renewable fuels.
For example, critics of corn ethanol state that the corn-ethanol production process delivers little improvement in energy balance and a 10-13% improvement in GHG emissions when compared with gasoline. The Energy Independence and Security Act of 2007 stipulates that renewable fuels produced at new facilities must lead to at least a 20% reduction in lifecycle GHG emissions compared to GHG emissions generated by petroleum products replaced by ethanol (Energy Independence and Security Act, 2007). The use of CA in conjunction with recommended agronomic practices brings corn-based ethanol over that 20% hurdle, which is critical for the renewable fuel industry.
Further opportunities for improvements in energy balance and GHG emissions will likely be identified as more experience is gained with CA. Improvements such as these would prove valuable to financially challenged ethanol plants, farmers, and technology providers. Regardless, it’s clear that technology will play a critical role in making current biofuels more efficient and second generation biofuels a commercial reality.
1 A value less than 1.0 indicates a process that consumes more energy than it produces; a value greater than 1.0 indicates a process which produces more energy than it consumes.
2 The final use rate of corn amylase has not been determined. The 25% ratio of CA to conventional feedstock was used for the purposes of this study only.
3 A 7.7% reduction in water use is the result of modeling a 5% increase in solids content in the fermenter, using the USDA-ERRC ethanol process model.
Johnson, K. (2008, July 16). High oil prices? Blame ethanol, OPEC says. Message posted to http://blogs.wsj.com/environmentalcapital/2008/07/16/
Kwiatkowski, J.R., McAloon, A.J., Taylor, F., & Johnston, D.B. (2005). Modeling the process and costs of fuel ethanol production by the corn dry-grind process. Industrial Crops and Products, 23, 288-296.
Nielsen, P.H., Oxenbøll, K.M., & Wenzel, H. (2007). Cradle-to-gate environmental assessment of enzyme products produced industrially in Denmark by Novozymes A/S. The International Journal of Life Cycle Assessment, 12(6), 432-438.
Searchinger, T., Heimlich, R., Houghton, R.A., Dong, F., Elobeid, A., Fabiosa, J., et al. (2008). Use of U.S. croplands for biofuels increases greenhouse gases through emissions from land-use change. Science, 319, 1238-1240.
US Department of Agriculture (USDA), National Agricultural Statistics Service (NASS). (2009, January 12). Corn: Yield by year, US [data chart]. Washington, DC: Author. Available on the World Wide Web: http://www.nass.usda.gov/Charts_and_Maps/Field_Crops/cornyld.asp.
US Department of Energy (DOE), Energy Information Administration (EIA). (n.d.). End-use consumption of electricity, 2001 [data table]. Washington, DC: Author. Available on the World Wide Web: http://www.eia.doe.gov/emeu/recs/recs2001/enduse2001/
US DOE, EIA. (2008). Residential natural gas prices: What consumers should know (DOE Brochure # DOE/EIA-X046). Available on the World Wide Web: http://www.eia.doe.gov/oil_gas/natural_gas/analysis_publications/
The authors would like to thank USDA-ERRC engineers Andrew McAloon and Winnie Yee for their efforts to parameterize the USDA-ERRC process model to reflect the plant conditions experienced in processing Corn Amylase.
Suggested citation: Urbanchuk, J.M., Kowalski, D.J., Dale, B., & Kim, S. (2009). Corn amylase: Improving the efficiency and environmental footprint of corn to ethanol through plant biotechnology. AgBioForum, 12(2), 149-154. Available on the World Wide Web: http://www.agbioforum.org.
|© 2009 AgBioForum | Design and support provided by Express Academic Services | Contact ABF: firstname.lastname@example.org| | <urn:uuid:c8ef2db4-1890-4c6c-802d-04b54b079d1c> | CC-MAIN-2017-51 | http://www.agbioforum.org/v12n2/v12n2a01-stone.htm | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948545526.42/warc/CC-MAIN-20171214163759-20171214183759-00728.warc.gz | en | 0.915072 | 4,953 | 3.1875 | 3 |
by Matt Owens December 30, 2012
Things are coming into very clear focus now, and this is quite an exciting time. Sadly, most people are willfully blind to the impending climate changes, with the consequence that major losses will radiate to everyone – even including those who are climate-aware today. However, with the rapidly improving projections, we who are aware are now coming into a position to act and safeguard ourselves against many of the direct losses others will suffer.
Specifically, sea level rise (SLR) is the issue of focus here.
In light of updated and improved records of ice loss from Shepherd et al. (2012), NASA researcher James Hansen and Makiko Sato have revised their projections for future ice loss rates.
The pair's previous estimates were actually released earlier this year, and used data that was about 3 years old as the basis of projections. The speed at which developments are now happening is a testament to how serious and rapid climate change has become.
The main conclusions from Hansen and Sato's updated estimate:
Sea level rise will likely follow an exponential path.
Other attempts to establish upper limits on ice sheet disintegration rates are based on fundamentally flawed logic.
A reasonable projection that fits with observed rates of ice loss so far has 1 meter of SLR by 2045.
Negative feedbacks are modeled to start once SLR reaches 1 meter, and rates of SLR thereafter would be expected to, and could slow therefore - but not stop altogether. [SLR might also come much faster - i.e. 5 to 20 meters in a handful of years - if larger areas of ice sheets disintegrate in a structurally catastrophic manner.]
The negative feedbacks would result from massive volumes of ice floating away from the polar ice caps and interfering with ocean temperature and density patterns.
As a consequences of the floating ice, initial model results suggest that global warming in such a context would slow, but not stop; and, that a strong difference in regional temperature between polar and temperate regions would result in extremely intense storm activity (which does not equate to alleviation of the very serious global drought conditions already forecast for temperate farming regions by mid-century).
This chart represents the 5-year doubling time scenario; negative feedbacks kick in after 1 meter of sea level rise is reached in 2045, and by 2067 rates of sea level rise slow significantly - but still continue at a devastating rate. I have added an approximately inverse rate of slowing SLR (starting before the 5 meter point) to demonstrate one potential rate of negative feedback. [UPDATE Jan. 11, 2013: more in-depth analysis of possible negative feedback.]
Including projections from another recent study on housing unit property losses at various levels of sea level rise (Strauss et al. 2012) reveals a dramatic toll on society (see chart below).
The initial stages of the loss curve is less certain, as resolution below 1 meter of sea level rise is hazy (due to poor resolution of topographical data, see study for detailed explanation). Here, I have filled the first portion (from 0 to 1 meter) of the curve using an approximation where half the housing unit losses by 1 meter of SLR occur between 0.75 and 1.0 meters SLR, and where one quarter occur between 0.5 and 0.75 meters SLR, with the remaining quarter occurring between 0 and 0.5 meters SLR.
Regardless of uncertainties arising from resolution, the data is very clear at 1 meter intervals, and shows that by 1 meter of SLR, nearly 2 million housing units will be lost to the seas. This happens by 2045 in the Hansen and Sato 5-year doubling time projection. By 2050 the total losses rise another million, to 3 million lost. The trend continues through 2055, by when another million are lost and the total reaches 4 million.
Then, the trend accelerates and by 2060, after just five years more, 2 million more units have gone under, bringing the total to about 6 million. By 2070, in just another 10 years, the total rises over 10 million housing units lost.
With 2.6 people per household (according to US Census), these figures mean that more than 26 million people will lose their homes to the rising ocean by 2070. And these figures are for the population today, so numbers could be higher if population grows - and especially if populations don't move away from the coastlines earlier rather than later.
The housing unit loss estimates by Strauss et al. (2012) was based on data from the 2010 US Census. Using that same US Census data source, and assuming that the ratio of housing units to location above sea level remains constant in the next few decades, I have converted the absolute loss to a percent of total housing (see below).
From 2040 onward, losses mount quickly. The US as a whole has a housing unit vacancy rate just above 10 percent - about the same percent of losses by 2070. So the housing loss alone could put a major strain on housing capacity. That is, unless new building could outpace loss.
Simply based on past rates of housing unit expansion, the US should have no problem adding enough housing inventory to accommodate the newly homeless as seas rise. The loss rate would reach about 2% per decade by 2050 and reach about 3% per decade by 2070. From 2000 to 2010, the US added about 10% to its number of housing units.
However, who could pay for building new housing? Insurance does not cover sea level rise. Government is the obvious answer. A massive program could pay for new construction. In today's dollars, an average housing unit costs around $200,000, or just a little less depending on how you calculate it.
Using $200k, a loss of 3 million units per decade at the beginning (2040-50) and then reaching about 4.5 million (2060-70) would mean $600 billion in just home losses per decade at the start, and $900 billion per decade by the 2060's. Infrastructure, planning, and other costs should be factored in too, as well as replacement costs of non-residential structures and assets. According to Burchell (1998), associated costs for new development are about $50,000 per housing unit (in today's dollars).
So, somewhat surprisingly, the cost of replacing residential property, structures, and associated infrastructure may not be especially high under the 5-year doubling time scenario. Taking $250k as the total associated costs for building a replacement housing unit, and then tripling that to account for industrial and commercial and other unexpected costs that would be needed too gives a figure of $225 billion per year from 2040-50 and $337 billion per year from 2060-70. That starts at slightly less than 4% of today's federal budget and 1.5% of the US 2011 GDP, and rises to about 6% and 2% of US federal spending and US GDP respectively by 2060-70. Both are painful, but tolerable costs - at least taken out of context where there are no other impacts from climate change.
This added expense on the backs of all Americans would under normal circumstances tend to inhibit economic expansion. But on the other hand, all the government spending would actually tend to stimulate the economy. Also worth considering, the losses of property value are not losses in the standard economic sense we're used to where the market value drops and thus destroys value. This loss is absolute, permanent, and total. The property can no longer be of any use.
In fact, submerged properties will likely become serious liabilities to us all as toxic and non-toxic chemicals and substances leach out into coastal waters. Common household products, considered non-toxic on dry land, become pollutants in water. For example, copper is toxic to most aquatic organisms, from fish to invertebrates. What's more, the vast quantity and array of chemical substances found in buildings of all types could lead to a sick cocktail in the new coastal waters.
The reality could be very complicated. More on complex water pollution here:
Another issue is saltwater intrusion ruining well-water, municipal aquifers, and farmland. Plus, increased storm activity out at sea along with iceberg hazards could cause significant inefficiencies for international commerce (via shipping and air transport) and thus cause US and international GDP to decline year after year.
Drought conditions by 2040 could also become so severe that agricultural output could drop by 30% to 40%, or possibly even more. To add insult to injury, increasingly severe and damaging storms could easily cause persistent headaches for everyday commercial activity. The average number of days taken off from work due to bad weather could rise steeply.
Much like how the pernicious build-up of carbon dioxide has been slowly pushing our climate into a warming trend, the constant aggravation from severe weather, poor climate, and sea level rise could push the US and other countries into a declining economic trend.
Keep in mind this is just the lower 48 contiguous states of the US, and this is just residential housing units. Commercial, industrial, civic, and infrastructure property of all types and importance would also be effected along with residential. And the same general effect would be felt around the world from Northern Europe to the Mediterranean, to Asia, Australia, South America, and all the way to Hawaii and Alaska.
The actual cost of these losses is therefore hard to figure. Basic infrastructure like ports and highways are not often given monetary values.
It's no wonder either, if you consider the complexity of the economic web that moves through those systems. If a port is shut down for just a few months, it can mean collapse of regional industry and commerce in the areas that are dependent on that port for exports. And with SLR, the possibility exists that ports could be shut for years.
Imagine maintaining a port's infrastructure while sea levels rise by 4 meters (13.1 feet) between 2045 and 2067, a 22 year period. That's 18 centimeters of rise per year (or 7 inches per year) on average.
With variability from year to year, there could be several years where the rise is almost nothing, followed by a year where the rise is 2 or 3 times the average - or perhaps even more. If the ice losses are concentrated over summer, and in just one or two months, that could mean a bad year sees a rise in sea level by 54 centimeters (21 inches, nearly 2 feet) in just a couple of months.
Will ports be able to build their docks higher as seas rise? Even if they can raise their docks, what about getting the goods off the docks? As the seas rise, connecting roads all around will be lost. Electrical and other connections will also be lost too. Can on-site power generation suffice? Surely the answers to these questions are highly variable depending on the specific port and its specific terrain and local topography.
Intensification of storm activity and new iceberg hazards at mid-latitudes (i.e. where there are heavily-used shipping lanes) will also need to be addressed if a scenario like Hansen and Sato have outlined unfolds.
But will their projections turn out to be correct? Unfortunately there is every reason to believe so. The update by Hansen and Sato really explains it best (see the excerpts below). The greatest area of uncertainty is the remaining ice sheet response to rising sea level and the cooling effect from broken ice rafting out across the world's oceans. The curve of a slowing increase (i.e. negative feedback) in SLR that I have added to the charts above is just one degree of feedback, and just one possible net SLR response. At the other end of the spectrum is the possibility that SLR will sufficiently destabilize much vaster areas of polar ice sheets and possibly lead to massive tens of meters of sea level rise in short time frames. Such diverse outcomes complicate planning for the future, but they need to be considered. SLR of tens of meters in mere years would require much different strategies then the comparably modest rise of the 5-year doubling time scenario examined here.
Hansen, the lead co-author of this new SLR estimate update, has so far been close to spot-on right about climate change - while so many other scientists have been, like the general public - so wrong and so reluctant to accept the truth. What else could be expected however? To arrive at the startling conclusions Hansen has reached required decades of intense ground-breaking research and synthesis of diverse scientific fields. Most scientists study something amazingly specific and are highly sceptical by nature. So how does someone effectively communicate an idea that they themselves only just learned, and by doing the research themselves at that (i.e. Plato's Cave).
Hansen was a chief architect of the GISS General Circulation Model II for global climate which came into use in 1980 and was used by NASA into the 1990's. The model is still used today as a quick first-estimation of climate impacts by some researchers. Climate model results have been warning us for decades now, although their results are typically misunderstood and therefore misinterpreted by the mainstream press.
Starting back in the 1980's and increasingly in recent years, Hansen has been outspoken about the lethal, growing danger climate change poses to his, and everyone's grandchildren. In fact, he wrote a book called "Storms of My Grandchildren" to highlight just that point.
Excerpts from the Hansen and Sato release.
Hansen and Sato's December 26th update lays out the basis for their findings:
“IPCC (2007) [the previous UN international scientific consensus report] suggested a most likely sea level rise of a few tens of centimeters by 2100. Several subsequent papers suggest that sea level rise of ~1 meter is likely by 2100. However, those studies, one way or another, include linearity assumptions, so 1 meter can certainly not be taken as an upper limit on sea level rise (see discussion and references in the appendix below, excerpted from our recent paper). Sea level rise in the past century was nearly linear with global temperature, but that is expected behavior because the main contributions to sea level rise last century were thermal expansion of ocean water and melting mountain glaciers.
“In contrast, the future sea level rise of greatest concern is that from the Greenland and Antarctic ice sheets, which has the potential to reach many meters. Hansen (2005) argues that, if business-as-usual increase of greenhouse gases continue throughout this century, the climate forcing will be so large that non-linear ice sheet disintegration should be expected and multimeter sea level rise not only possible but likely. Hansen (2007) suggests that the position reflected in IPCC documents may be influenced by a "scientific reticence". In such case the consensus movement of sea level rise estimates from a few tens of centimeters to ~1 meter conceivably is analogous to the reticence that the physics community demonstrated in its tentative steps to improve upon estimates of the electron charge made by the famous Millikan.¹”
The footnote for the Millikan reference is a quote itself, from Feynman, 1997, and is as follows:
“Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher. Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of - this history - because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong - and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard.”
Continuing with the Hansen and Sato update:
“Perceived authority in the case of ice sheets stems from ice sheet models used to simulate paleoclimate sea level change. However, paleoclimate ice sheet changes were initiated by weak climate forcings changing slowly over thousands of years, not by a forcing as large or rapid as human-made forcing this century. Moreover, in a paper submitted for publication (Hansen et al., 2013) we present evidence that even paleoclimate data do not support the degree of lethargy and hysteresis that exists in such ice sheet models."
And, Hansen and Sato make a rebuttal to the best counter-claim that SLR cannot be more than 1 or 2 meters at most this century.
“Pfeffer et al. (2008) argue that kinematic constraints make sea level rise of more than 2 m this century physically untenable, and they contend that such a magnitude could occur only if all variables quickly accelerate to extremely high limits.
“They conclude that more plausible but still accelerated conditions could lead to sea level rise of 80 cm by 2100. The kinematic constraint may have relevance to the Greenland ice sheet, although the assumptions of Pfeffer at al. (2008) are questionable even for Greenland. They assume that ice streams this century will disgorge ice no faster than the fastest rate observed in recent decades. That assumption is dubious, given the huge climate change that will occur under BAU scenarios, which have a positive (warming) climate forcing that is increasing at a rate dwarfing any known natural forcing. BAU scenarios lead to CO2 levels higher than any since 32 My ago, when Antarctica glaciated. By mid-century most of Greenland would be experiencing summer melting in a longer melt season. Also some Greenland ice stream outlets are in valleys with bedrock below sea level. As the terminus of an ice stream retreats inland, glacier sidewalls can collapse, creating a wider pathway for disgorging ice.
“The main flaw with the kinematic constraint concept is the geology of Antarctica, where large portions of the ice sheet are buttressed by ice shelves that are unlikely to survive BAU climate scenarios. West Antarctica's Pine Island Glacier (PIG) illustrates nonlinear processes already coming into play. The floating ice shelf at PIG's terminus has been thinning in the past two decades as the ocean around Antarctica warms (Shepherd et al., 2004; Jenkins et al., 2010). Thus the grounding line of the glacier has moved inland by 30 km into deeper water, allowing potentially unstable ice sheet retreat. PIG's rate of mass loss has accelerated almost continuously for the past decade (Wingham et al., 2009) and may account for about half of the mass loss of the West Antarctic ice sheet, which is of the order of 100 km³ per year (Sasgen et al., 2010).
“PIG and neighboring glaciers in the Amundsen Sea sector of West Antarctica, which are also accelerating, contain enough ice to contribute 1-2 m to sea level. Most of the West Antarctic ice sheet, with at least 5 m of sea level, and about a third of the East Antarctic ice sheet, with another 15-20 m of sea level, are grounded below sea level. This more vulnerable ice may have been the source of the 25 ± 10 m sea level rise of the Pliocene (Dowsett et al., 1990, 1994). If human-made global warming reaches Pliocene levels this century, as expected under BAU scenarios, these greater volumes of ice will surely begin to contribute to sea level change. Indeed, satellite gravity and radar interferometry data reveal that the Totten Glacier of East Antarctica, which fronts a large ice mass grounded below sea level, is already beginning to lose mass (Rignot et al., 2008).
“The eventual sea level rise due to expected global warming under BAU GHG scenarios is several tens of meters, as discussed at the beginning of this section. From the present discussion it seems that there is sufficient readily available ice to cause multi-meter sea level rise this century, if dynamic discharge of ice increases exponentially. Thus current observations of ice sheet mass loss are of special interest.” | <urn:uuid:24ce87a2-1fc6-42e5-9fd9-deb92c3ee5df> | CC-MAIN-2022-33 | https://climatewatch.typepad.com/blog/2012/12/estimated-future-ice-loss-rates-updated-dec-2012.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00096.warc.gz | en | 0.951491 | 4,231 | 3 | 3 |
Provincial and Territorial Energy Profiles – Alberta
Please send comments, questions, or suggestions to
Figure 1: Hydrocarbon Production
Source and Description:
This graph shows hydrocarbon production in Alberta from 2010 to 2020. Over this period, crude oil production has grown from 2.2 MMb/d to 3.8 MMb/d, with almost all growth coming from the oil sands. Natural gas production has deceased from 10.9 Bcf/d to 9.7 Bcf/d.
Figure 2: Electricity Production (2019)
Source and Description:
This pie chart shows electricity generation by source in Alberta. A total of 76.1 TW.h of electricity was generated in 2019.
Figure 3: Crude Oil Infrastructure Map
Source and Description:
This map shows all major crude oil pipelines, rail lines, and refineries in Alberta.
PDF version [909 KB]
Figure 4: Natural Gas Infrastructure Map
Source and Description:
This map shows all major natural gas pipelines in Alberta.
PDF version [1 341 KB]
Figure 5: End-Use Demand by Sector
Source and Description:
This pie chart shows end-use energy demand in Alberta by sector. Total end-use energy demand was 4 091 PJ in 2018. The largest sector was industrial at 75% of total demand, followed by transportation (at 11%), commercial (at 8%), and lastly, commercial (at 6%).
Figure 6: End-Use Demand by Fuel (2019)
Source and Description:
This figure shows end-use demand by fuel type in Alberta in 2018. Natural gas accounted for 2 340 PJ (57%) of demand, followed by refined petroleum products at 1 370 PJ (34%), electricity at 274 PJ (7%), biofuels at 105 PJ (3%), and other at 2 PJ (less than 1%).
Note: "Other" includes coal, coke, and coke oven gas.
Figure 7: GHG Emissions by Sector (2019)
Source and Description:
This stacked column graph shows GHG emissions in Alberta by sector every five years from 1990 to 2020 in MT of CO2e. Total GHG emissions have increased in Alberta from 166 MT of CO2e in 1990 to 237 MT of CO2e in 2020.
Figure 8: Emissions Intensity of Electricity Generation
Source and Description:
This column graph shows the emissions intensity of electricity generation in Alberta from 1990 to 2020. In 1990, electricity generated in Alberta emitted 950 g of CO2e per kWh. By 2020, emissions intensity decreased to 590 g of CO2e per kWh.
- In 2020, Alberta produced 3.79 million barrels per day (MMb/d) of crude oil (including condensate and pentanes plus) (Figure 1). Alberta is the largest producer of crude oil in Canada, accounting for 80% of total Canadian production as of 2020.
- Over three-quarters of Alberta’s crude oil production comes from the oil sands in northern Alberta. In 2020, Alberta had 8 operating oil sands mines, and 29 thermal in situ oil sands operations. In 2020, Alberta produced 2.99 MMb/d of oil sands raw bitumen. From that amount, 1.09 MMb/d of synthetic crude oil (SCO) was produced. SCO can be transformed into refined petroleum products or in some cases used to dilute raw bitumen for transport.
- Four upgraders are currently operational in Alberta: Syncrude, Suncor, and CNRL Horizon (all near Fort McMurray), and Shell Scotford in Edmonton. Combined, these upgraders have the capacity to process 1.52 MMb/d of bitumen.
- In 2020, Alberta also produced 334.8 thousand barrels per day (Mb/d) of conventional light oil and 88.2 Mb/d of conventional heavy oil. Alberta’s condensate and pentanes plus production was 379.1 Mb/d.
- Between January 2019 and December 2020, the Alberta government’s mandated production curtailment were put in place because oil production exceeded pipeline capacity thereby affecting oil prices in Alberta. By the end of 2020, monthly production limits were put on hold and as of December 31, 2021 the oil production curtailment policy expired.
- At year-end 2020, Alberta’s remaining resource of crude oil, including the oil sands, is estimated to be 310 billion barrels.
Refined Petroleum Products (RPPs)
- Alberta has five refineries: Strathcona (Imperial Oil), Edmonton (Suncor), and Scotford (Shell) in the Edmonton area; Sturgeon (NWR) in Redwater; and Lloydminster (Cenovus) in Lloydminster. Combined, these refineries have a total oil processing capacity of 542.4 Mb/d. This amounts to 28.5% of Canada’s total refining capacity, the largest share of any province in Canada.
- As of 1 June 2020, the Sturgeon Refinery began processing bitumen through a fee-for-service tolling mechanism. Prior to this, it was only processing SCO. The Alberta government’s Alberta Petroleum Marketing Commission has a 30-year tolling arrangement to provide 75% of the required bitumen blend feedstock to the Sturgeon Refinery (under Alberta’s Bitumen Royalty in Kind policy).
- Alberta’s refineries process only western Canadian crude oil, including a large proportion of blended bitumen and SCO. In 2020, 68% of the oil processed in Alberta refineries was upgraded bitumen including pentanes plus, with the remaining 32% being crude oil and non-upgraded bitumen.
- Alberta’s refinery utilization was 94% in 2020.
Natural Gas/Natural Gas Liquids (NGLs)
- In 2020, Alberta’s natural gas production averaged 9.72 billion cubic feet per day (Bcf/d) (Figure 1). Alberta’s gas production represented 63% of total Canadian natural gas production in 2020.
- At year-end 2020, Alberta’s total potential for recoverable, sales-quality natural gas is estimated to be 563 trillion cubic feet (Tcf), with 380 Tcf remaining after production is subtracted.
- Alberta’s NGL production in 2020 was about 416.8 Mb/d, not including condensate and pentanes plus, which are included with crude oil.
- Some NGLs are fractionated into individual components (for example, ethane, propane, butane, and condensate) at field plants or fractionators in Alberta.
- Alberta has nearly 500 active gas processing field plants, 13 fractionators, and 8 straddle plants.
- In 2019, Alberta generated 76.1 terawatt hours (TW.h) of electricity (Figure 2), which is approximately 12% of total Canadian generation. Alberta is the third largest producer of electricity in Canada and has an estimated generating capacity of 16 330 megawatts (MW).
- About 89% of electricity in Alberta is produced from fossil fuels– approximately 36% from coal and 54% from natural gas. The remaining 10% is produced from renewables, such as wind, hydro, and biomass.
- Alberta, along with Ontario, are the only jurisdictions in Canada that have competitive generation and retail markets for electricity.
- Some of Alberta’s largest electricity generators include TransAlta, Heartland Generation, Suncor, ENMAX, and Capital Power.
- In 2019, Alberta’s coal fleet was the largest in Canada with a total capacity of 5 555 MW.
- Under Alberta’s climate change legislation, emissions from coal-fired generation will be phased out in the province by 2030. However, power generators in Alberta (including Capital Power, Heartland Generation, and TransAlta) have decided to advance plans for coal-to-gas conversions with most coal-fired facilities expected to switch by 2022, and all by 2024.
- The Shepard Energy Centre is Alberta’s largest natural gas-fired power station. It is located east of Calgary and has a capacity of 860 MW.
- In 2019, Alberta’s wind fleet had a capacity of roughly 1 467 MW, ranking it 3rd highest in the country after Ontario and Quebec. Most of Alberta’s wind turbines are located in southern and central-east Alberta.
- Significant growth is expected for wind and solar generation in Alberta, with over 2 000 MW of new projects expected between 2019 and 2023. The 465 MW Travers Solar project, the largest solar installation in Canada, is under construction and expected to be operational in late 2022.
- Alberta’s Micro-Generation Regulation allows Alberta residents to generate electricity from renewable or alternative energy sources and sell the surplus to the Alberta grid in exchange for energy credits, with a limit of 5 MW of installed capacity. As of November 2020, microgeneration capacity totaled 95 MW across more than 6 000 sites, with solar accounting for approximately 93% of total capacity.
Energy Transportation and Trade
Crude Oil and Liquids
- Alberta has a vast network of crude oil and condensate pipelines that gather and deliver crude oil from production regions to pipeline and storage hubs in Edmonton and Hardisty (Figure 3).
- The Enbridge Mainline system is Canada’s largest transporter of crude oil. The Mainline starts in Edmonton and delivers light and heavy crude oil, RPPs, and NGLs to markets in the Prairies, U.S. Midwest, and Ontario.
- The Trans Mountain Pipeline also starts in Edmonton and transports crude oil and RPPs to refineries and terminals in British Columbia (B.C.) and Washington. Crude oil delivered by Trans Mountain is also exported to Asia via the Westridge Marine Terminal in Burnaby, B.C. TC Energy’s Keystone Pipeline and Enbridge’s Express Pipeline both originate in Hardisty and export crude oil to refining markets in the U.S. Midwest and the Gulf Coast. The Enbridge Mainline also connects to Hardisty.
- Plains Midstream’s Milk River and Aurora pipelines are two smaller CER-regulated pipelines that also transport crude across the border from Alberta to Montana. Milk River connects to the much longer provincially-regulated Bow River Pipeline, owned by Inter Pipeline Ltd. The Bow River system gathers and transports crude oil from oil fields in southeastern Alberta and transports it to Hardisty and Milk River. Aurora connects to the provincially regulated Rangeland Pipeline, which starts in Edmonton and is also owned by Plains Midstream Canada.
- Alberta also receives crude oil from Norman Wells, Northwest Territories (NWT), via the Enbridge Norman Wells pipeline.
- Alberta’s two main import pipelines for condensate are Enbridge’s Southern Lights and Pembina’s Cochin. These pipelines transport condensate from the U.S. to distribution centres in Edmonton and Fort Saskatchewan, where it is then used as diluent in oil sands projects.
- Enbridge Line 3 Replacement Project, which delivers crude oil from Edmonton to Superior, Wisconsin, became fully operational in October 2021. The project roughly doubled the pipeline’s capacity to 760 Mb/d. Line 3 forms a part of the Enbridge Mainline.
- The Trans Mountain Expansion project will transport crude oil from Edmonton to the Westridge Marine Terminal and Parkland refinery in Burnaby, B.C. The expansion will twin the existing Trans Mountain pipeline and increase the pipeline’s capacity to 890 Mb/d from 300 Mb/d. Construction of the new pipeline began in November 2019 and is expected to be complete December 2022.
- Alberta is a large supplier of RPPs, such as gasoline and diesel, to markets in neighbouring provinces. Products are transported to B.C. largely via Trans Mountain, and to Saskatchewan and Manitoba primarily via the Enbridge Mainline.
- RPPs are moved within Alberta by truck and rail, and by the Alberta Products Pipeline. This line transports an average of 48.4 Mb/d of RPPs and connects Edmonton refineries to markets in southern Alberta. The Alberta Products Pipeline is regulated by the Alberta Energy Regulator (AER).
- Alberta has 16 crude oil rail loading facilities with a total capacity of approximately 802 Mb/d.
- Major pipelines that transport Alberta’s natural gas to other provinces and to the U.S. include: Nova Gas Transmission Ltd. (NGTL), TC Canadian Mainline, Foothills, and Alliance (Figure 4). The first three are owned by TC Energy.
- The NGTL System extends through most of Alberta and transports western Canada-produced natural gas to markets in Canada and the U.S. NGTL has been adding capacity in recent years to accommodate increasing production from the Montney formation in northeastern B.C. and northwest Alberta. Overall, the NGTL System currently has a $9.9 billion infrastructure program underway that will add 3.5 Bcf/d of incremental delivery capacity from 2020 to 2024.
- The Canadian Mainline transports natural gas to eastern Canada and the U.S. The pipeline extends from the Alberta/Saskatchewan border across Saskatchewan, Manitoba and Ontario, and through a portion of Quebec. It connects with the Trans-Québec & Maritimes pipeline near the Ontario/Quebec border.
- The Foothills pipeline system is connected to the southern part of the NGTL System and consists of several segments: Foothills BC, Foothills SK, and Foothills Alberta. Foothills Alberta is operated in conjunction with the NGTL System.
- The Alliance Pipeline originates in northeastern B.C., crosses Alberta, and enters the U.S. at Alameda, Saskatchewan. Alliance transports liquids-rich natural gas from B.C. and Alberta and delivers it to the Aux Sable gas processing and fractionation facility near Chicago, Illinois.
- ATCO Gas is Alberta’s largest natural gas distributor and serves over 1.1 million customers in nearly 300 communities. Apex Utilities Inc. (previously AltaGas Utilities Inc.) distributes natural gas to over 80 000 residential, rural, and commercial customers in over 90 communities across Alberta. ATCO and Apex Utilities Inc. are both regulated by the Alberta Utilities Commission (AUC).
- Provincial natural gas projects and pipelines are regulated by the Alberta Energy Regulator and the AUC.
Natural Gas Liquids
- Alberta has many pipelines that transport natural gas liquids, including ethane, propane, butanes, and NGL mix.
- NGLs are primarily transported out of Alberta on rail cars across North America, or as NGL mix on the Enbridge Mainline to Sarnia, Ontario, and the U.S. Midwest.
- Plains Midstream Canada's Petroleum Transmission Company (PTC) Pipeline delivers propane and butane produced at the Empress straddle plants to rail and truck terminals on the Prairies. PTC has a capacity of 15 Mb/d and runs from Empress, Alberta, through Regina, Saskatchewan, to Fort Whyte, Manitoba.
- Pembina’s 68 Mb/d Vantage Pipeline transports ethane from Tioga, North Dakota, to Empress to connect with the Alberta Ethane Gathering System–the main system supplying the Alberta petrochemical industry.
Liquefied Natural Gas (LNG)
- Ferus operates a small-scale LNG facility in Elmworth, west of Grand Prairie. The Elmworth facility produces 50 000 gallons per day and services the transportation sector, hydrocarbon drilling, mining, and power generation in Whitehorse, Yukon and Inuvik, NWT.
- The Cavalier LNG facility near Strathmore is operated by Alberta LNG. The facility has been in operation since 2013 and supplies up to 6 500 gallons per day of LNG to the transportation sector, including rail.
- In 2019, Alberta’s net interprovincial and international electricity inflows were 2.7 TWh. Alberta trades electricity with B.C., Saskatchewan, and Montana.
- Alberta has approximately 26 000 km of transmission lines and more than 200 000 km of distribution lines.
- Transmission systems are owned and operated by shareholder-owned companies such as AltaLink and ATCO. Distribution systems are owned by municipally-owned companies such as ENMAX, EPCOR; or the cities of Red Deer, Lethbridge, and Medicine Hat; or by shareholder-owned companies such as ATCO and Fortis. The AUC regulates these companies’ transmission and distribution tariffs, while the Alberta Electric System Operator (AESO) works with these companies to operate the Alberta electricity system and the competitive electricity market.
- There are more than 200 Alberta electricity market participants registered with the AESO.
Energy Consumption and Greenhouse Gas (GHG) Emissions
Total Energy Consumption
- Total end-use demand in Alberta was 4 160 petajoules (PJ) in 2019. The largest sector for energy demand was industrial at 74% of total demand, followed by transportation at 11%, commercial at 9%, and residential at 6% (Figure 5). Alberta’s total energy demand was the largest in Canada, and the largest on a per capita basis.
- Natural gas was the largest fuel type consumed in Alberta, accounting for 2 373 PJ, or 57% of consumption in 2019. RPPs and electricity accounted for 1 400 PJ (34%) and 275 PJ (7%), respectively (Figure 6).
Refined Petroleum Products
- Alberta’s motor gasoline demand in 2019 was 1 608 litres per capita, 27% above the national average of 1 268 litres per capita.
- Alberta’s diesel demand in 2019 was 1 815 litres per capita, more than double the national average of 855 litres per capita.
- Alberta has a net surplus of RPPs and nearly all the gasoline consumed in Alberta is produced within the province.
- Alberta consumed an average of 6.4 Bcf/d of natural gas in 2020. Alberta's demand represented 56% of total Canadian demand.
- The largest consuming sector for natural gas was the industrial sector (including heavy oil and oil sands production), which consumed 5.6 Bcf/d in 2020. The residential and commercial sectors consumed 0.44 Bcf/d and 0.37 Bcf/d, respectively.
- In 2019, annual electricity consumption per capita in Alberta was 17.5 megawatt-hours (MWh). Alberta ranked fifth in Canada for per capita electricity consumption and consumed 17% more than the national average.
- Alberta’s largest consuming sector for electricity in 2019 was industrial at 48.2 TWh. The commercial and residential sectors consumed 17.7 TWh and 10.2 TWh, respectively.
- Alberta’s GHG emissions in 2020 were 256.4 megatonnes (MT) of carbon dioxide equivalent (CO2e). Alberta’s emissions have increased 55% since 1990 and 19% since 2005.Footnote 1
- Alberta’s emissions per capita are the second highest in Canada at 58.02 tonnes CO2e– three times the national average of 17.68 tonnes per capita.
- The largest emitting sectors in Alberta are oil and gas production at 52% of emissions, electricity generation at 11%, and transportation at 11% (Figure 7).
- Alberta’s GHG emissions from the oil and gas sector in 2020 were 132.8 MT CO2e. Of this total, 128.0 MT were attributable to production, processing, and transmission and 4.9 MT were attributable to petroleum refining and natural gas distribution.
- Alberta’s electricity sector produces more GHG emissions than any other province because of its size and reliance on coal-fired generation. In 2020, Alberta’s power sector generated 29.3 MT CO2e emissions, or 52% of total Canadian GHG emissions from power generation.
- The greenhouse gas intensity of Alberta’s electricity grid, measured as the GHGs emitted in the generation of the province’s electric power, was 590 grams of CO2e per kilowatt-hour (g CO2e per kWh) electricity generated in 2020. This is a 35% reduction from the province’s 2005 level of 910 g CO2e per kWh. The national average in 2020 was 110 g CO2e per kWh (Figure 8).
- Alberta Energy
- Alberta Energy Regulator
- Alberta Utilities Commission
- Alberta Electric System Operator (AESO)
- CER–Canada's Renewable Power: Alberta
- CER–Market Snapshot: Comparing Canada’s Energy Demand Trends in Alternative Energy Futuress
- CER–Market Snapshot: How the 2021 Summer Heat Dome Affected Electricity Demand in Western Canada
- CER–Market Snapshot: Canada’s historical GHG emissions – 2020 Update
- CER–Market Snapshot: Canada’s oil exports started to recover from COVID-19 in June and July 2020
- CER–Market Snapshot: How hydrogen has the potential to reduce the CO2 emissions of natural gas
- CER–Market Snapshot: Oil sands use of natural gas for production decreases considerably in early 2020
- CER–Market Snapshot: Natural gas production in Alberta and British Columbia is changing
- CER–Market Snapshot: Crude-by-rail exports reach record highs before falling to near record lows in 2020
- CER–Market Snapshot: Canada’s share of total crude oil exported to the U.S. is growing
- CER–Market Snapshot: CCS in Alberta and Saskatchewan – long-term storage capacity and the potential to lower industrial sector emissions intensity
- CER–Market Snapshot: Western Canadian conventional, tight, and shale oil production is expected to steadily grow to 2040
- CER–Market Snapshot: Even though Canada exports a lot of electricity, it imports a lot too
- CER–Market Snapshot: Of the almost 1 million barrels per day of cuts to western Canadian oil supply in mid-May, about 700 thousand barrels per day has come back online
- CER–Market Snapshot: Rail remains important for transporting western Canadian crude oil
- CER–Market Snapshot: Ethane potential from natural gas production is significant and is expected to continue to grow in Canada
- CER–Market Snapshot: Production of condensate and pentanes plus reached an all-time high in western Canada in 2018
- CER–Market Snapshot: Canada’s 1st refinery in over 30 years comes online near Edmonton, Alberta with greener technology
Provincial & Territorial Energy Profiles aligns with the CER’s latest Canada’s Energy Future 2021 datasets. Energy Futures uses a variety of data sources, generally starting with Statistics Canada data as the foundation, and making adjustments to ensure consistency across all provinces and territories.
- Date modified: | <urn:uuid:997060ea-a6e9-43f2-bd1b-139ce02ef248> | CC-MAIN-2022-33 | http://one-neb.gc.ca/en/data-analysis/energy-markets/provincial-territorial-energy-profiles/provincial-territorial-energy-profiles-alberta.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573172.64/warc/CC-MAIN-20220818063910-20220818093910-00295.warc.gz | en | 0.905624 | 4,891 | 2.6875 | 3 |
Source: Carbon Brief
The EU has come to the end of a three-year effort to reform its controversial farming subsidy programme, known as the Common Agricultural Policy (CAP).
Agriculture is responsible for at least a tenth of the EU’s greenhouse gas emissions and has been described as the “main driver of environmental degradation in Europe”.
This means that, as the bloc’s main farming strategy and the single largest part of EU spending, the CAP has significant potential to tackle climate change.
However, campaigners and scientists have warned for years that the policy has pumped funding into high-emitting livestock farms and unsustainable fertiliser use.
A recent review by official EU auditors found that the €100bn of CAP funds set aside for climate action between 2014 and 2020 have so far had “little impact” on emissions.
The new CAP rules, which will take effect on 1 January 2023, include some new environmental measures, but critics say the final text – which was agreed last Friday – is riddled with “loopholes” and is unlikely to bring significant change.
In this Q&A, Carbon Brief explains the potential role of the CAP in cutting emissions and what the newly agreed reforms entail.
- What is the CAP and why is it being updated?
- Why is the CAP important for cutting emissions in the EU?
- How successful has the CAP been in tackling emissions so far?
- What were the key climate-related battlegrounds in the CAP discussions?
- What are the reforms that have been agreed upon?
- How have climate experts and NGOs responded to the new CAP?
- What is the UK planning as a replacement for the CAP?
What is the CAP and why is it being updated?
The Common Agricultural Policy (CAP) was one of the first major policies enacted by the European Economic Community, the precursor to the EU. Created by the Treaty of Rome in 1958 and put in place in 1962, its aim was to increase the self-sufficiency of Europe’s food system and reduce shortages in the wake of the second world war.
The share of the EU budget that goes to the CAP has been steadily declining since the mid-1980s, as shown in the chart below. This decline is set to continue, but funding for the CAP will still make up about one-third of the EU budget during 2021-2027.
For the years 2021–2027, a total of €386.6bn has already been allocated to the CAP. This figure is inclusive of around an extra €8bn dedicated to rural development under the NextGeneration EU plan designed to aid recovery from the Covid-19 pandemic. These supplemental funds are part of the budget for the years 2021 and 2022. The just-agreed reforms will take effect on 1 January 2023.
The CAP is organised into two pools of funding, commonly known as the two “pillars”. The first pillar is the European Agricultural Guarantee Fund; the second is the European Agricultural Fund for Rural Development.
The first pillar encompasses about three-quarters of the CAP’s funding. The majority of this money – about 65% of the total CAP budget for 2021 – goes to income support for farmers. This income support is mostly in the form of “direct payments”, but also includes extra payments for using environmentally friendly farming practices.
The remaining funds in the first pillar are earmarked for market interventions, such as buying crop surpluses to preserve price stability.
The second pillar, added to the CAP under the “Agenda 2000” reforms, focuses on rural development. Its stated priorities are “fostering agricultural competitiveness”, “ensuring sustainable management of natural resources” and “achieving balanced territorial development of rural economies and communities”.
Compared to the money allocated under the first pillar, member states have much more flexibility in how they spend their second-pillar funding. It makes up 25% of the 2021 CAP budget.
The original iteration of the CAP “worked extremely well” in terms of increasing European food security, Dr Mark Brady, an agricultural and environmental economist at the Swedish University of Agricultural Sciences and Lund University, tells Carbon Brief. He adds:
“It really helped farmers to finance investments and to increase food production. But it had a best-before date…By the 1970s, [the CAP] had worked so well that the EU was even over-producing – producing far more food than could be consumed in Europe.”
In fact, reformers began clamouring for change soon after the policy was enacted. But, as a result of the growing over-production problem, true reforms to the policy began in the 1980s, such as the milk quota introduced in 1984 to reduce production of dairy products.
Reforms in the 1990s reduced the amount of support that was directly linked to production, but these were “relatively ineffective”, Brady says.
(For the most part, these reform attempts are linked to the seven-year budget cycle of the EU. Changes to the policy are proposed by the European Commission, but have to be agreed upon by the commission, the European Council and the European Parliament.)
Reforms that entered into force in 2005 majorly restructured support by “decoupling” payments from production. Under this “Single Payment Scheme” (or “Single Farm Payment”), farmers were no longer required to produce crops on unproductive land; instead, they could receive payments for maintaining the condition and quality of the land. This was the “most significant reform of the CAP to date”, Brady says. But, he adds, it still fell short:
“The 2005 reform had a fantastic effect. It really eliminated these huge production surpluses and the big trade problems. But it didn’t solve a lot of the environmental problems associated with agriculture.”
These subsidies were still paid out on a per-hectare basis, with payments varying based on the historical production levels of a given farm.
The most recent round of reforms, in 2013, aimed to improve on the changes made in 2005. These reforms evened out the per-hectare payments that farmers receive and introduced policies to “green” the CAP. (See: How successful has the CAP been in tackling emissions so far?)
Most of the funds set aside for direct payments go to a small number of large farms and the attempts to “green” the CAP have had little impact on the environmental impact of agriculture in the EU, de Pous says. He tells Carbon Brief:
“Every effort to set the budget for another seven years was part of a reform effort to try and align it with market realities, new challenges, sustainability and, of course, at some point climate…But what I think is important to flag is that every time they try to reform the system, it basically runs into the ground and very little changes in the end.”
For example, the per-hectare direct payment scheme leads to large subsidies to a handful of large farms, while small farms receive less money. (In 2019, 74% of direct payment funds went to just over 15% of EU farms.)
This latest set of reforms was supposed to further “green” the CAP and increase its effectiveness at producing “public goods”, says Dr Tim Benton, the director of the energy, environment and resources programme at thinktank Chatham House.
But this round of reforms began under the previous European Commission, which “didn’t have much of a green agenda at all”, de Pous says.
Calls by climate campaigners for the current commission to withdraw the proposal and replace it with a “greener” one were rejected by European Commission climate chief Frans Timmermans. (See: How could the reforms that have been agreed impact climate action?)
Because of the money and incentives involved, reforming the CAP in any real “green” way is inherently difficult, Benton tells Carbon Brief:
“We have developed a food system predicated on producing ever more calorie-rich food ever more intensively, driving down the price of food and making it economically rational to waste…This system is very entrenched as an important engine of economic growth, even though it is bad for people and the planet…Unlike the energy sector where it is possible to imagine selling more, but more renewable, getting food right implies selling less – and so, politically, it is all rather toxic to be ambitious.”
The establishment of the CAP was foundational to the formation of the European Economic Community and the farming lobby remains very powerful today. EU farmers have historically protested against attempts to reform the CAP or enact other policies that they see as harmful to their bottom lines.
Why is the CAP important for cutting emissions in the EU?
Agriculture in the EU – including the UK – emitted 435m tonnes of CO2 equivalent (MtCO2e) in 2018, around a tenth of the bloc’s total emissions. As the EU’s primary agricultural policy, the CAP is seen as vital for tackling these emissions.
The bloc’s overall emissions have fallen by nearly a quarter since 1990, while greenhouse gases from its farms have fallen by a fifth. Agricultural emissions have actually risen slightly over the past decade, as the chart below shows.
Emissions from key sectors between 1990-2018 for EU member states and the UK, millions of tonnes of CO2 equivalent. Source: Eurostat. Chart by Carbon Brief using Highcharts.
Emissions cuts that have taken place in the sector since 1990 have been attributed to increases in productivity, a decline in cattle numbers and improvements in European agriculture, such as more efficient use of inorganic fertilisers.
The sector is also indirectly responsible for large volumes of emissions from sources such as imported animal feed. One analysis concluded that the livestock sector alone could account for up to 17% of the EU’s total emissions footprint.
According to the agricultural emissions reported by the EU to the UN Framework Convention on Climate Change (UNFCCC), which exclude energy-related CO2 emissions from farms, 55% is methane from livestock and 43% is nitrous oxide from fertilisers and manure management.
Significantly cutting these emissions is challenging as they result from biological processes that are difficult to address or replace, but measures to cut fertiliser use or livestock numbers could have a significant impact.
Farmland can also store carbon – for example, in planted trees and grassland – and landowners could be incentivised to do this. (Owing to the way greenhouse gases are conventionally recorded, these emissions savings would not be described as “agricultural”, but would come under land use, land-use change and forestry – known as “LULUCF”.)
As it stands, however, EU agricultural land use is a net emissions source rather than a sink, releasing around 56MtCO2e in 2018 due to the way organic soils are managed and the conversion of grassland into crops, according to thinktank Germanwatch. This brings agriculture’s share of EU emissions up to 12%.
(The 56MtCO2e figure does not include the latest UK greenhouse gas inventory figures in which, due to improvements in the approach to calculating peatland emissions, the land sector swapped from being a sink to being a source.)
The CAP has the potential to address all of these areas, although, in practice, it has been criticised for failing to live up to this potential (See: How successful has the CAP been in tackling emissions so far?).
The EU has raised its overall climate ambition in recent months, aiming for net-zero greenhouse gas emissions by 2050 and “at least” a 55% reduction in emissions by 2030, compared to 1990 levels. The European Commission has said the new CAP will be aligned with the ambition of these targets as part of its “European green deal”.
Under existing policies, the commission expects all sectors to curb their non-CO2 emissions “significantly…with [the] notable exception of agriculture” in the coming decades. This can be seen in the chart below.
Commission modelling suggests that a combination of technical mitigation measures on farms and dietary changes, such as a shift away from meat, could cut the sector’s non-CO2 emissions from 430MtCO2e in 2015 to 230MtCO2e in 2050.
However, given its “restricted mitigation potential”, even in a “deep decarbonisation” scenario by 2050 agriculture is likely to make up “most of the remaining sources of EU greenhouse gas emissions”, according to the commission’s analysis.
“It reflects the exceptionalism where for decades in EU policymaking, agriculture…has been kept outside of environmental policy, by-and-large.”
The impact assessment for the 2030 emissions target also concludes that agriculture is expected to be the single largest emissions source in the coming years, noting the need for CO2 removals to make up for it. However, it also emphasises the need to cut emissions where possible:
“Current policies need to be accompanied by ambitious implementation of the national CAP strategic plans, requiring member states to focus on increased environmental ambition. The absence of such ambition will result in a stagnation of non-CO2 emissions of the sector.”
How successful has the CAP been in tackling emissions so far?
In an outline of the CAP, the commission stated prior to the reform that it already offered “a number of instruments to find adequate answers to the challenges of climate change”.
It estimated that €104bn – 25% of the 2014-2020 CAP allocation – was “related to climate”. These climate-relevant policies included:
- A “cross-compliance mechanism”, which set basic environmental standards that farmers had to meet in order to receive subsidies.
- The “green direct payment” or “greening”, introduced in 2015, granted for implementing three compulsory practices: crop diversification, ecological focus areas and permanent grassland. This made up 30% of the direct payment budget.
- Climate action is also an important aspect of the European Agricultural Fund for Rural Development, which supports farm modernisation. Between 2014-2019, 21% of the fund went to voluntary schemes known as agri-environmental climate measures (AECMs), which encouraged “green” practices on farms, 9% to organic farming and 0.7% to Natura 2000 sites.
The first two measures were both compulsory parts of the CAP’s “first pillar”, whereas the rural development policy – the “second pillar” – left member states to come up with their own spending programmes and AECMs. (For more on how the CAP is structured, see: What is the CAP and why is it being updated?)
When these environmental measures were first introduced, the commission said they would make the CAP “better targeted, more equitable and greener”. However, many academics and NGOs have been highly critical of the CAP’s climate impact so far.
A briefing released by NGO coalition Climate Action Network (CAN) Europe at the end of 2020 stated that “the current CAP measures have not significantly contributed to the EU’s climate change mitigation and adaptation efforts and needs”.
This conclusion was supported by a special report from the European Court of Auditors (ECA) released just days before the final CAP negotiations were set to start. It found that, despite billions being spent on green practices, there had been “little impact” on emissions, with no funding to reduce livestock numbers or prevent peatland drainage.
CAN’s briefing said that the environmental standards of the cross-compliance mechanism “set a very low bar in terms of climate”, with no limitations on fertiliser use or livestock numbers.
Meanwhile, analysis suggests that less than 5% of the farmland that has benefited from “greening” direct payments has seen a change in agricultural practice.
An earlier report by the ECA concluded that the threshold for receiving green direct payments had been set too low and that “greening as currently implemented is unlikely to provide significant benefits for the environment and climate”.
Member states have been given some discretion in applying greening rules and they have taken advantage of this to “limit the burden on farmers and themselves” rather than to maximise climate benefits, according to the ECA report.
Among the potential AECMs that farmers could pick are measures with the potential to tackle emissions, such as maintaining grasslands and planting leguminous crops which can reduce the need for artificial fertilisers.
However, once again the flexibility that member states have been given has resulted in a general lack of action, as Prof Alan Matthews, a European agricultural policy researcher at the Trinity College Dublin, tells Carbon Brief.
“The CAP gives you a set of options, a set of interventions you can draw upon and design your programmes around, but it is up to the member states, ultimately, to decide what their priorities are and the evidence is…that actually very small allocations were made to climate action at the outset.”
The ECA found that the packages of measures offered to farmers in Greece, Spain, France, Poland and the Netherlands remained broadly unchanged from 2014-2020 compared to the previous period, in which green measures were not offered.
Also, while AECMs are thought to be effective when properly implemented, they have received a fraction of the funding that goes directly to landowners.
In the 2014-2020 round of CAP funding, direct payments were worth €40.4bn annually compared to the €3.5 bn that goes to AECMs, organic farming and Natura 2000 sites.
Nyssens tells Carbon Brief that while the commission tends to emphasise the “potential” of the CAP, this potential has largely failed to materialise:
“Every external independent evaluation and report has come to the same conclusion: indeed, there are some good instruments, but they are not used effectively…and member states are generally choosing the easy solutions and putting economic interests before environmental interests.”
What were the key climate-related battlegrounds in the CAP discussions?
The latest CAP reforms were first proposed by the European Commission in 2018, before the arrival of the bloc’s “green deal” and carbon neutrality target. Even so, the commission has emphasised that the two can go hand-in-hand.
The commission farm-to-fork and biodiversity strategies that have followed, as part of the green deal, both set out ambitious targets including reducing nutrient loss by 50% and chemical fertiliser use by 20% by 2030.
However, as Matthews tells Carbon Brief, while the original draft proposal from June 2018 included various stronger standards, over the course of negotiations “nearly all of those proposals have been watered down if not simply removed by the co-legislators”.
Dr Ana Frelih Larsen, a senior fellow at the Ecologic Institute who specialises in the implementation of the CAP, tells Carbon Brief that the majority of agricultural ministers on the Council of the EU, as well as farming lobbyists Copa Cogeca, have resisted increased environmental ambition.
According to Mattthews, “much of the foot dragging” on this issue comes down to its perceived impact on farmers’ incomes.
Meanwhile, environmental NGOs, including the climate activist Greta Thunberg and Greens in the European Parliament, have been pushing for more climate ambition under the CAP reforms, accusing EU leaders of “greenwashing”.
In March 2020 they were joined by more than 3,600 scientists who signed a letter saying they were “concerned about current attempts to dilute the environmental ambition of the future CAP and the lack of concrete proposals for improving the CAP in the draft of the European green deal”.
“These instruments are not systematically linked to any effective measure for greenhouse gas reduction or climate adaptation, thus lacking any justification of this statement. Instead, they even partly support practices and sectors with significant greenhouse gas emissions.”
The key focal points around climate change in the new CAP discussions concerned its “green architecture” – the underlying rules that govern how and when money is allocated to farmers.
Initially, the commission intended to enhance the “conditionality” of direct payments, meaning improved environmental standards would have to be met to receive money, compared to the previous cross-compliance and greening rules.
While existing standards are widely seen as ineffective (see: How successful has the CAP been in tackling emissions so far?), there was hope that this conditionality could be expanded to provide tighter protections for peatlands and grasslands, in particular.
However, over the course of negotiations these standards were progressively weakened. NGOs warned that this would result in CAP beneficiaries ploughing up land that should be acting as a carbon sink without losing their EU funding, for example.
Perhaps more significant was a shift from direct payments towards payments that actively reward farmers for improving ecosystem services.
This came in the form of newly developed “eco-schemes”, which fall under “pillar one” of the CAP and would see farmers receive higher payments if they meet additional environmental conditions. Crucially, these schemes were proposed to take a chunk of money from direct payments.
The commissions published a list of potential eco-schemes in January 2021, including organic farming practices, re-wetting peatlands and planting climate-resistant crops.
“Eco-schemes have been really brandished as the one reason this new CAP will be so great for the environment,” Nyssens tells Carbon Brief.
However, she says that, while they are a step in the right direction, the rules surrounding them have been left “extremely weak or vague so member states have full discretion in how they allocate that money and how high they set the level of ambition”.
Additionally, NGOs have warned that not only are subsidies to high-emitting livestock farms set to continue under the new CAP, but eco-schemes could provide “hidden subsidies for factory farms” in the form of payments to improve animal welfare.
Matthews is more optimistic about the eco-schemes and other improvements. “We could be doing more, sure, we have a climate emergency…[but] there is progress, it’s just too slow,” he tells Carbon Brief.
Larsen says a wider issue is the lack of clear emissions targets for agriculture in the EU’s overall climate framework.
Instead, member states have targets to cut emissions from all sectors outside of the EU’s emissions trading system (ETS), which includes not only agriculture, but also transport, buildings and waste. In practice, this means agriculture could be left untouched.
“Without these clear targets there is less pressure for CAP to deliver,” she says.
Larsen adds that a lack of climate ambition emerging from EU-level negotiations would mean more comes down to the strategic plans for agriculture released by member states, giving Poland’s plan as an example of one that has already emerged with “too little ambition”.
What are the reforms that have been agreed upon?
The CAP reforms were meant to be finalised at the end of May 2021, but in the end they were pushed back another month after EU lawmakers initially failed to reach an agreement.
According to Politico, the earlier negotiations failed after a “skull-crunching head-to-head clash” between EU governments and members of the European Parliament over how much of the CAP should be spent on green measures.
The “trilogue” negotiations between the European Parliament, member state government representatives on the Council of the EU and the European Commission resumed in late June and a compromise on many of the key points was reached at around 1am on the morning of Friday 25 June.
Several ministers and members of parliament announced that a deal had been reached in principle by Friday evening, although the final reforms still need to be rubber-stamped by both the European Parliament and member states.
Several of the new reforms, which will go into effect from 1 January 2023, are focused on mitigating the climate and environmental impacts of agriculture. These include:
- “Eco-schemes”, where farmers are paid for taking actions that are beneficial to the environment – such as soil restoration or reduced pesticide use – will now make up 25% of the direct payment budget. The first two years after the CAP takes effect will be a “learning period”, where member states only have to commit 20% of the budget towards eco-schemes. For the remaining years, there will be a “floor” of 20%. Any money between 20-25% that the member states do not spend can be transferred to second pillar funding for other “green” measures.
- “Conditionality”, which replaces the cross-compliance mechanism and “greening” requirements, requires all farmers to maintain certain environment- and climate-friendly practices in order to qualify for direct support. While this covers the protection of grasslands, peatlands, buffer strips along rivers, soil cover, crop rotation and biodiversity spaces, Nyssens says “loopholes and exemptions” in the rules mean they will have little impact. This also means the 30% of direct payments that were previously awarded for “greening” are now unconditional, although these payments have partly been replaced by eco-schemes.
- Within the European Agricultural Fund for Rural Development – the CAP’s second pillar – 35% of the budget will go towards “environmental objectives”, covering AECMs, organic farming, Natura 2000 sites, environment-related investments and support areas with “natural constraints” (ANCs) for farming, such as mountainous regions. There are concerns that including ANCs in this funding could mean less money is spent on measures that bring actual environmental benefits.
The funding for eco-schemes was a major sticking point in the negotiations, with the European Parliament pushing for a mandatory 30% of the first pillar budget going towards such payments and the member states pushing for the mandatory level to be 20% instead.
Nyssens calls the final figure of 25% a “reasonable landing zone”. However, she adds, parties both in favour of and opposed to stricter green spending rules can find fault with the ultimate agreement. She tells Carbon Brief:
“Since the pot of money for eco-schemes is taken from the pot of money that was traditionally income support payments, it means a lot of farmers consider that this is their money and they shouldn’t have to do more for it – and a lot of member states kind of agree with that.”
The flexibility in eco-scheme funding means some countries may prioritise the ease with which farmers can access the funds, rather than real environmental change, Nyssen adds. She explains:
“That’s why we are so critical of eco-schemes…there is absolutely no guarantee that this will lead to change.”
The commission says it will have ultimate authority over how much funding will be permitted for various eco-schemes, but Nyssens says it will have its work cut out in reviewing all 27 member states’ draft strategic plans for CAP spending in a relatively short period of time.
In addition, member states will be permitted to decrease the amount of funding committed to eco-schemes if they spend more than 30% of their rural development funding on AECMs, in what has been labelled a “loophole”.
Other changes to the CAP focus on ensuring workers’ rights, supporting young farmers and the establishment of an agricultural crisis reserve fund.
The new CAP will also not place any caps on single-farm subsidies, a move that Thomas Waitz, an Austrian MEP, called a “missed chance to change the system”. Waitz, who is also a co-chair of the European Greens, said that “small farmers will not survive this way”.
Indeed, some have called for the abolition of direct payments entirely. Dr Guy Pe’er, a conservation biologist at the German Centre for Integrative Biodiversity Research and the Helmholtz Centre for Environmental Research, said in a press briefing following the CAP agreement:
“There is an elephant in the room, a very big one, which is called direct payments. Economists have been saying for a long time and demonstrating that this is a huge waste of money…The biggest losers from the CAP reform, or the lack of a CAP reform, are the farmers.”
Despite the various environmental conditions farmers have to meet to access direct payments, compromises on the rules mean it will largely be left up to member states to decide which farms qualify, potentially rendering the rules ineffective.
Environmental activists and NGOs had pushed for the new CAP to integrate the aims of the green deal. While the parliament did adopt amendments in that direction, these were ultimately scrapped from the deal.
The final reforms do include text that nods to the EU’s commitments under the green deal, but contain nothing that is legally binding. EurActiv reports that the text will read:
“When assessing the proposed CAP Strategic Plans, as referred to in Article 106, the Commission should assess the consistency and contribution of the proposed CAP Strategic Plans to the Union’s environmental and climate legislation and commitments.”
How have climate experts and NGOs responded to the new CAP?
Although EU agriculture commissioner Janusz Wojciechowski wrote on Twitter that the reform was “one of the most ambitious” in history, climate experts, activists and NGOs have been critical of the deal, with Greenpeace labelling it “a disaster for the climate, nature and small farms”.
Greenpeace EU’s agricultural policy director, Marco Contiero, wrote in a press release:
“When it comes to farming, the EU doesn’t listen to science, to small farmers or even to its own auditors, and has delivered a policy that only benefits land barons and the biggest agricultural players. This CAP deal largely keeps things as they are.”
Green MEP Thomas Waitz, who had been vocally opposed throughout the two days of negotiations last week, tweeted that “this is a shady deal” and that the eco-scheme compromises are “worse than before”.
— Thomas Waitz (@thomaswaitz) June 25, 2021
Larsen tells Carbon Brief that, as it stands, it looks “unlikely” that the CAP will support climate action in the EU:
“The absence of binding alignment with green-deal targets, the weakened definitions of GAEC [good agricultural and environmental condition of land] standards and many exemptions are very disappointing. This puts even greater pressure on the commission to ensure the ambitious implementation of remaining environmental provisions, especially around eco-scheme design, conditionality and AECMs.”
Matthews, on the other hand, highlights the potential of some of the outcomes, including eco-schemes, and says it will “now be up to member states to make use of these new tools in their national CAP strategic plans”.
Many experts note that by failing to act decisively on climate change and environmental protections, the CAP would ultimately end up hurting farmers. Pe’er said in a press briefing:
“The science shows very clearly that the greatest risks for food security come from the climate crisis and the biodiversity crisis. Combined, the fate of farmers and food security are completely dependent on the environment. Yet, the CAP fails on all frontiers and it continues to fail, not because there are not win-wins to find, but because there’s no political interest in that.”
Harriet Bradley, a senior agriculture policy officer with BirdLife Europe, said in a press release:
“This CAP deal is a free-for-all dressed up as system change. There is nothing to stop EU countries from continuing to fund the destruction of nature. This is totally incompatible with the EU Parliament’s promises to transform agriculture and their commitments under the Climate Law and Biodiversity Strategy.”
Bradley and Nyssens have both called on the parliament to scrap the deal.
Timmermans said on Twitter that while the new CAP was “not perfect” it was a “real shift” towards greener agricultural policy and that it was “a step in the right direction”.
Activists from outside the EU have pointed out that the CAP reforms will have impacts far beyond Europe’s borders. In a press briefing on Friday, Valentina Ruas, a Portugal-based Brazilian climate activist with Fridays for Future, said:
“It’s clear that in a globalised world where patterns of development have been shaped by colonialism and injust relations, discussions such as the future of CAP and climate will need to be visualised beyond EU territory, where these policies have a huge say in what happens to communities worldwide…This means that for every action taken, European leaders cannot disassociate them from the consequences.”
Given its history of “environmental plunder” in the global south, Europe “should be taking the lead” on developing ambitious policies with climate justice in mind, Ruas added.
Nyombi Morris, a Fridays for Future activist from Uganda, said at the same press conference:
“Over the past 20 years, the EU has used its economic partnership agreement to enable their highly subsidised agricultural products to be dumped in Africa…We have to protect our sustainable farming.”
What is the UK planning as a replacement for the CAP?
The UK government introduced a replacement scheme under its Agriculture Act, which the Department for Environment, Food and Rural Affairs (Defra) says will move the nation towards “a future where farmers are properly supported to farm more innovatively and protect the environment”. It passed into law at the end of 2020.
Under the new scheme, farmers in England will be paid to produce “public goods”, such as environmental improvements, replacing the direct payments they previously received under the CAP.
(Agriculture is a devolved issue, meaning that most of the bill will not affect farmers in Scotland, Wales and Northern Ireland. Devolved governments have been working on their own replacements for the CAP.)
Prior to Brexit, the UK government had been vocal in its desire for CAP reform and had advocated for using “taxpayers money to pay farmers for public goods that the market otherwise would not reward, such as protecting the natural environment”.
Matthews tells Carbon Brief:
“The UK approach on paper is more radical, because they have signalled they wanted to move away from the income support payment. We will have to see to what extent the government can actually implement that policy.”
These schemes are set to be rolled out over the next two years, gradually replacing basic payments from the government, which will end completely in 2027.
In May, the government released its peat and tree strategies for England, which emphasised the importance of these schemes in driving its significant tree-planting and peat-restoration targets.
Environmental campaigners have broadly welcomed the shift to a “public money for public goods” approach and UK government adviser the Climate Change Committee (CCC) stated in a 2020 report that “a shift in the subsidy system towards the delivery of environmental benefits is needed”.
To achieve the UK’s net-zero emissions target, the CCC says there should be a “high take-up” of practices that cut non-CO2 emissions, as well as a fifth of farmland being converted into forests, peatland and energy crops.
There are concerns that the new payment system alone will not be enough to encourage such practices on English farms, particularly as the government pushes trade deals with nations such as Australia, where environmental standards are already lower.
The post Q&A: Will EU Common Agricultural Policy reforms help tackle climate change? appeared first on Carbon Brief. | <urn:uuid:ab8d1205-24d0-4360-9678-3d01e80f22ca> | CC-MAIN-2022-33 | http://www.climatechange.ie/qa-will-eu-common-agricultural-policy-reforms-help-tackle-climate-change/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00497.warc.gz | en | 0.96227 | 7,583 | 3.75 | 4 |
An archaeological examination of the ruins and ancient cities claiming to be the first historical city in Thailand — and the information you’ll need to visit them.
What Is the Oldest City in Thailand?
For millennia, Thailand has been the crossroads of Southeast Asia, bearing witness to the rise, fall, and conquest of numerous kingdoms. This region wedged between India and China has a long history of overlapping migrations which have all built atop one another. Among the modern borders now inhabiting these lands, perhaps none has seen more waves of peoples, religions, and empires in the last 2000 years than Thailand.
These have included the Dvaravati, Khmer, Lawa, Srivijaya, Lao and Thai all claiming, combating, and coalescing with each other until the modern nation emerged as it stands today. Many of these factions and cities are held up as the first — but in reality;
What is “Thailand’s First City”?
The ruins in Nakhon Pathom are the first and oldest city settled in mainland Thailand by the Indianized and Buddhist Dvaravati culture, which would give direct influence and rise to the Theravada kingdom of the ethnic Thai people, forming the foundations of modern Thailand.
However, Thailand is unique in Southeast Asia as it is a patchwork of millennia of overlapping cultures and empires, each leaving their own imprint on the landscape, including their own claims to being the first ancient city of Thailand.
Nakhon Pathom (Nakhon Chai Si)
Era: c. 500-1000 CE
GPS Coordinates: 13.81526, 100.09705
A Brief History of Nakhon Pathom
The modern name of “Nakhon Pathom” translates directly as “first city” and lies only a half-hour due west of modern Bangkok. This ancient city served as the entry point and first settlement of the Dvaravati, a Mon people migrating from Burma who were the first to introduce Theravada Buddhism to Thailand, many centuries before the would-be Thais arrived from southern China.
From Nakhon Pathom, the Dvaravati spread their sphere of influence through the majority of modern Thailand, coming into contact with the indigenous Lawa people in the north, the Khmer Empire in the east. Their settlements and influence on the landscape would lay the foundations for modern Thailand as they were gradually absorbed into the encroaching Khmer and Thai kingdoms.
What Is There to See in Nakhon Pathom?
- The most famous monument in Nakhon Pathom is the Phra Pathom Chedi, a massive golden stupa that competes for the title of largest in the world. It is built over an ancient Dvaravati stupa rediscovered by the Thai King Mongkut in the 1800s.
- The Phra Pathom Chedi National Museum is located within the stupa’s complex and hosts many displays on the Dvaravati culture.
- Several other Dvaravati-era ruins exist around the city, such as the Phra Pathon Chedi, Wat Phra Men, and Wat Dhammasala.
How to Get to Nakhon Pathom
Nakhon Pathom city is the capital of Nakhon Pathom province and sits about 50 km west of modern Bangkok. Regular buses and trains are running to the city and take about 30-60 minutes, depending on where in Bangkok you are leaving from.
Ban Chiang (ancient name unknown)
Era: c. 1500-900 BCE
GPS Coordinates: 17.4076, 103.23636
A Brief History of Ban Chiang
Main article: Ban Chiang: Isaan’s Forgotten Past
Ban Chiang was a wholly unexpected discovery in Thailand’s northeastern region, known collectively as Isaan. Around 3500 years ago, at a time when both Indian and Chinese civilizations were in their relative infancies and beginning their respective Bronze Ages, Southeast Asia was mostly considered a remote and undeveloped land.
This perception changed in 1966, when archaeological discoveries in Udon Thani province uncovered a settlement possessing not only sophisticated pottery and bronze-working but also that the entire village had been built on an artificially elevated mound.
However, despite the importance of the find, a small, modern village now inhabits that same mound. This understandably complicated attempts at excavation. However, a handful of residents and local temples allowed excavations at their properties which yielded impressive finds.
As a result, Ban Chiang became one of Thailand’s first UNESCO World Heritage Sites and is home to a museum showcasing the finds. A walk around the town will bring you to some of the excavation sites that are still open to the public.
What Is There to See in Ban Chiang?
The UNESCO World Heritage Museum is the main attraction, featuring very good informational displays, artifacts found from the ancient city, and representations of the excavations
Around Ban Chiang town are covered excavation pits open for visitors.
How to Get to Ban Chiang
Ban Chiang is a rural village about 60km east of Udon Thani city. It’s possible and easy to do by private transportation — but would be a long, hot roundtrip if you’re renting a motorbike.
Alternatively, the Udon Thani bus station is located in the middle of the city and has regular buses heading out of the city in that direction. There are roadside signs, but to make sure you don’t miss the stop, alert the bus driver or conductor that is where you’re headed.
Once off the bus, motorcycle taxis can take you the remaining 3 km into the village and archaeological site.
Wiang Chet Lin (Wiang Misankorn)
Era: c. 500-1200 CE
GPS Coordinates: 18.81288, 98.95267
A Brief History of Wiang Chet Lin
The Lawa people are considered by historians and the Thai people themselves as the original inhabitants of their country. That said, their native-born cities which once surrounded the modern city of Chiang Mai brought them into conflict, subjugation, and alliance with several generations of migrants.
The pinnacle of this was their tenuous relationship with the Dvaravati kingdom of Hariphunchai. This city was based in modern-day Lamphun and their records contain many stories of their relationship with the Lawa, who are represented as antagonists (from the Hariphunchai point of view).
However, the legendary Lawa founder of Wiang Misankorn is also the founder of Hariphunchai. The two cultures collaborated on Buddhist temples such as Wat Ku Din Khao and San Ku, which sits atop Doi Pui.
By the time that settlers from Chiang Saen established the Lanna Kingdom in the Chiang Mai-Lamphun Basin, the majority of the valley’s Lawa inhabitants had abandoned their cities, leaving their walled ruins for the Thais to rebuild into Wiang Chet Lin, Wiang Suan Dok, and Chiang Mai.
What Is There to See in Wiang Chet Lin?
Very little remains of any of the Lawa cities, as they were built over by the Thai newcomers when they founded Chiang Mai.
- The circular Wiang Chet Lin city wall runs through the Huay Kaew Arboretum.
- Wat Ku Din Khao is a Lawa-Hariphunchai-era temple in the Chiang Mai Zoo.
- San Ku is a Lawa-Hariphunchai-era temple at the peak of Doi Pui mountain.
How to Get to Wiang Chet Lin
Wiang Chet Lin sits at the northwestern edge of Chiang Mai city, at the base of Doi Suthep mountain. It is easily accessible by any city transport heading that way, including tuk-tuks, red trucks, and the city buses. Get off on the Huay Kaew Road outside the Chiang Mai zoo, and you’ll be in the heart of Wiang Chet Lin.
Nakhon Si Thammarat (Tambralinga)
Era: c. 500-1300
GPS Coordinates: 8.42766, 99.96377
A Brief History of Nakhon Si Thammarat
Long before Indian culture penetrated into the Southeast Asian mainland, the maritime trade routes between China and India had resulted in several Indianized kingdoms emerging throughout the coastal areas. Tambralinga was one such kingdom, emerging as early as the 5th Century CE, and eventually fell within the mandala of the Srivijaya Empire.
Unlike the many Buddhist cities that would emerge in the Thai mainland, Tambralinga was a Hindu kingdom and the overwhelming amount of relics found from this area was to the Shaivite sect. The remains of more than 40 Hindu shrines dating from the 600-900 CE have been found in the area, with the religion maintaining a strong influence in the city well into the 18th Century CE. Brahmins (Hindu priestly caste) from Nakhon Si Thammarat were even employed by the king of Siam when the capital was moved to Bangkok in the 1800s.
The city was an important seaport. According to records of the ancient city from the Tang dynasty, Tambralinga was surrounded by a wooden fence for protection. Homes of the higher class were built of wood and common citizens’ homes built of bamboo.
However, as mainland kingdoms such as Lavo, Sukhothai, and Ayutthaya began to grow in their influence, Tambralinga soon fell under their dominion, eventually settling under the Ayutthaya flag and becoming the Thai city of Nakhon Si Thammarat.
What Is There to See in Nakhon Si Thammarat?
- The Nakhon Si Thammarat National Museum holds a vast selection of artifacts from the Hindu Kingdom of Tambralinga.
- The Ayutthaya-era Nakhon Si Thammarat city wall is in the central city
- Several ancient Buddhist and Hindu shrines still exist within the city limits.
How to Get to Nakhon Si Thammarat
All of the major sites from the era of Tambralinga and its later periods within the modern city of Nakhon Si Thammarat. While the city is not a major destination for foreign tourists, it is still a lively provincial capital providing easy access and accommodations.
Era: 1238-1438 CE
GPS Coordinates: 17.02092, 99.70247
A Brief History of Sukhothai
Main article: Sukhothai: Dawn of Happiness, Dawn of Thailand
Sukhothai’s name translates as “Dawn of Happiness” and it is officially recorded as the first Thai kingdom. However, there are a few caveats to this. The first is that the city had already been in existence under the control of the Lavo and Khmer Empires. Another is that Thai kingdoms already did exist farther north in the Mekong-Golden Triangle region, such as Chiang Saen.
During an era when the Khmer Empire was experiencing inner turmoil, the Thai locals seized the opportunity to break away. Upon its overthrow of the Khmer rulers in 1236 CE, Sukhothai became the first Thai kingdom in the Central Plains, the heartland of what would eventually become the modern Thai nation-state.
Sukhothai prospered for a little under a century, gaining tributary kingdoms spanning the majority of modern Thailand. However, this short-lived golden age ended shortly after the death of their legendary King Ramkhamhaeng. Following this, most of the tributary kingdoms broke away or were seized by other powers.
Finally, in 1349 Sukhothai succumbed to the growing power of Ayutthaya, becoming a part of their expanding empire. Sukhothai would retain some level of autonomy under Ayutthaya, rather than being dominated outright, but eventually, it did fade from even the memory of Siam until its rediscovery by King Mongkut in the 1800s.
What Is There to See in Sukhothai?
Because Sukhothai was abandoned, much of the ancient city was left intact and fell into ruin.
- The main attractions are in the Wat Mahathat temple grounds in the middle of the ancient city.
- Khmer ruins exist both in the city wall as well as in a moated temple outside the city wall.
- Dozens of other ruined temples exist outside the Sukhothai city wall ready to be explored.
How to Get to Sukhothai
Sukhothai Historical Park is located about 10km east of the modern provincial capital of Sukhothai. There is transportation available from the city bus station. Additionally, many hotels and hostels can rent bicycles, motorbikes, or arrange private transportation to the ruins.
Chiang Saen (Hiran Nakhon Ngoenyang)
Era: c. 600-1300 CE
GPS Coordinates: 17.02092, 99.70247
A Brief History of Chiang Saen
Chiang Saen, and its Ngoenyang Kingdom, was the first of the ethnically Tai kingdoms to be located in modern Thailand. Settled by the Tai Yong ethnic group stemming from southern China, this city is located at the modern Golden Triangle, where Laos, Myanmar, and Thailand all converge at the Mekong River.
Such a location put it at a strategic position for transit and trade with other contemporaneous kingdoms. However, this era came to an end as the ongoing Mongol-led campaigns farther north prompted King Mangrai to move south into what is now Thailand.
The Ngoenyang Kingdom was among the earliest and longest-lived Tai kingdoms, enduring until its descendants established the subsequent Lanna Kingdom at Chiang Mai in 1292 CE. After the founding of Lanna, Chiang Saen remained an important outpost of the kingdom.
What Is There to See in Chiang Saen?
Most of what remains at Chiang Saen today is from its era under Lanna rule rather than its Ngoenyang period.
- The ancient city is surrounded by a conch-shaped city wall and populated with dozens of ruined temples.
- Chiang Saen Noi to the south holds another set of ruined temples.
- The area outside the city wall has many ruined temples.
- Wat Phra That Phu Khao overlooks the Golden Triangle (meeting point of Myanmar, Laos, and Thailand) and holds an ancient mountain temple from the Ngoenyang Period.
How to Get to Chiang Saen
Chiang Saen is in the far north of Chiang Rai Province. From the provincial capital, Chiang Rai city, there are regular buses leaving for Chiang Saen and the Golden Triangle, which is a major tourist attraction. The bus ride takes between 2-3 hours.
Era: c. 450-1100 CE
GPS Coordinates: 14.79917, 100.61435
A Brief History of Lopburi
Main article: Ancient Lopburi: Lost Cities Travel Guide
Signs of human habitation at Lopburi stem back much further than the arrival of Indianized culture. However, stemming out of the Mon-Dvaravati’s cultural center at Nakhon Pathom, Lavo (located at modern-day Lopburi) became the center of perhaps the most influential Dvaravati polity, the Lavo Kingdom.
While it’s unlikely that Lavo directly controlled the entirety of the Dvaravati realm, which is instead thought to have been culturally-similar, disparate city-states, Lavo nonetheless was the major power broker among them, even factoring into the legendary founding of the Hariphunchai Kingdom, which ruled Northern Thailand until eclipsed by Chiang Mai.
Whatever level of direct rule, at its height, the Lavo Kingdom’s influence over much of Thailand was significant enough to gain the attention of the Khmer Empire based in Angkor. After subjugating Lavo, the entire Lavo Kingdom (which included the majority of Central Thailand) became a tributary kingdom of the Khmer Empire.
As the Khmers were eventually pushed out of Thailand by the Thais in Ayutthaya, Lopburi’s significance decreased. However, being only 50km from Ayutthaya, it remained an important cultural location. It was also the preferred residence of some Thai kings, such as Narai.
What is there to see in Lopburi?
Due to Lopburi hosting the Dvaravati, Khmer, and Thai cultures, many of the older ruins were either built over or incorporated into later temples.
- Wat Nakhon Kosa is the only remaining Dvaravati ruin in the city.
- Prang Sam Yat and Prang Khaek are two signature 3-tower Khmer monuments dating from the 12th Century CE and 9th Century CE, respectively.
- The Phra Narai Palace and Wat Phra Sri Rattana Mahathat are the best examples of ancient Thai architecture in the city.
How to Get to Lopburi
Lopburi city is the capital of Lopburi province and sits about 130 km north of modern Bangkok. There are regular buses and trains running to the city and take about 2-3 hours, depending on where in Bangkok you are leaving from.
Ayutthaya (Phra Nakhon Si Ayutthaya)
GPS Coordinates: 14.35703, 100.55314
A Brief History of Ayutthaya
As the Khmer Empire’s dominion over Southeast Asia began to falter, the Ayutthaya Kingdom rose up to fill the power vacuum. In time, they allied with, conquered, and absorbed all competing states in the area, such as Sukhothai, Lavo, and Lanna. Ayutthaya even ruled over the fallen Khmer Empire for several centuries.
In this essence, Ayutthaya created the first unified Thai state of Siam, much as Angkor had done with the Khmer Empire several centuries before. After devastating wars with the Burmese and decades of diplomatic maneuvers with European colonial powers, the Siamese capital was moved to modern Bangkok as the Rattanakosin Kingdom, which evolved into the modern nation-state of Thailand.
What Is There to See in Ayutthaya?
Ayutthaya is an interesting blend of a modern Thai town built around its ancestors’ ruins.
- Dozens of ancient and ruined temples fill the spaces between commercial buildings.
- The largest and most important temples are in dedicated parks, some of which have admission fees to enter.
- Many other lesser-explored ruins exist off the main island citadel of Ayutthaya.
How to Get to Ayutthaya
Ayutthaya city is the capital of Ayutthaya province and sits about 50 km north of modern Bangkok. There are regular buses and trains running to the city and take about 1-1.5 hours, depending on where in Bangkok you are leaving from.
I would personally cite Nakhon Pathom as the first city of Thailand, as it is the first settlement of the Dvaravati, who would introduce the cultural traditions that would be absorbed and mixed with all the incoming populations and dominate the country for centuries to come.
However, the title of “Thailand’s First City” would really depend on what you consider to be “Thailand”;
- Ban Chiang was the first advanced culture in the region, mastering bronze working, rice cultivation, and elaborate pottery.
- Nakhon Pathom is the first city in Thailand to adopt the Indianized, Buddhist Dvaravati culture, laying the foundations of what would become the modern Thai nation.
- Wiang Chet Lin is said to be the first of the walled cities by the original Lawa inhabitants of Thailand.
- Lopburi is one of the oldest continuously inhabited cities in the region, hosting the powerful Dvaravati Lavo Kingdom, as well as the Khmer and Thai empires.
- Nakhon Si Thammarat was a part of the Srivijaya’s maritime sphere of influence and adopted Indianized culture long before the rest of mainland Thailand.
- Chiang Saen was the first Thai kingdom within the modern borders of Thailand.
- Sukhothai was the first Thai kingdom that would become part of the future unified Thai state.
- Ayutthaya was the first capital of Siam, the unified nation-state that would become modern Thailand.
City in central Thailand and historic capital of the Ayutthaya Kingdom, which was succeeded by the Thonburi Kingdom in 1767.
Dharmic religion centered on the belief of karma and release from the cycle of reincarnation. Based on the teachings of Siddhartha Gautama.
City in northern Thailand and historic capital of the Lanna Kingdom founded by King Mengrai in 1293.
City in northern Thailand and historic capital of the Ngoenyang Kingdom until the establishment of its successor, the Lanna Kingdom, in 1293 CE.
Mon-Burmese ethnic group based in modern Nakhon Pathom, Thailand. Responsible for the introduction of Buddhism (Theravada sect) to Thailand.
Dvaravati kingdom in northern Thailand centered in the modern town of Lamphun. Eventually conquered by the Lanna Kingdom.
Dharmic religion centered on the belief of karma and release from the cycle of reincarnation. It stems from Vedic teachings and one of the oldest extant religions in the world.
A culture adopting Indian culture, religion, and social structures.
Hindu-Buddhist kingdom which ruled much of Southeast Asia from their capital at Angkor.
Thai kingdom based in northern Thailand and northwestern Laos. Its capitals included Chiang Rai, Wiang Kum Kam, and Chiang Mai.
Dvaravati kingdom in central Thailand centered in the modern town of Lopburi. Eventually conquered by the Khmer Empire.
Ethnic minority group who constructed three walled cities in the Chiang Mai valley: Wiang Nopburi, Wiang Ched Lin, and Wiang Suan Dok. They are also referenced in historic writings as Lua, Milukku, Tamilla, and La.
City in central Thailand and historic capital of the Lavo Kingdom.
Political system found in historic Southeast Asia in which tributary states surrounded a central power without being directly administered by them.
Ethnic group originating in Myanmar who established the first civilizations in modern Thailand. The Mon kingdoms in Thailand are collectively referred to at Dvaravati.
The first settlement of the Mon-Dvaravati culture which existed from c. 500-1000 CE. Also known as Nakhon Chai Si.
Nakhon Si Thammarat
City in southern Thailand and the historic capital of Tambralinga.
Tai kingdom based in Chiang Saen, which was succeeded by the Lanna Kingdom after the establishment of Chiang Mai
Legendary king of Sukhothai who is popularly credited with creating the Thai writing system.
The unified Thai state that began in the Ayutthaya Kingdom and continued through the Rattanakosin Kingdom into modern Thailand.
Empire based in Sumatra which controlled or influenced Buch of the Malay archipelago circa 600-1200 CE.
City in central-northern Thailand and abandoned the capital of the Sukhothai Kingdom.
Wiang Chet Lin
Fortification built by Lanna King Sam Fangkaen over the ruins of Wiang Misankorn.
Lawa city at the base of Doi Suthep founded before the Hariphunchai Period. | <urn:uuid:6d80c783-66d1-4c7b-aee1-fd0271d33966> | CC-MAIN-2022-33 | https://pathsunwritten.com/thailand-oldest-city/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00096.warc.gz | en | 0.944123 | 5,197 | 3.140625 | 3 |
Disclaimer: As a Functional Medicine doctor, it is my intent to seek out and present many approaches to health and recommend what is best for my patients on an individual basis. In the case of vaccines, I am pro-vaccine-safety; I believe they can be helpful and necessary but that we should continue to pursue a more nuanced scientific perspective about the risks versus benefits in uniquely susceptible populations.
UNTIL THE LATE 1960s, THE MEDICAL COMMUNITY BELIEVED that autism was caused by "bad mothering." Today, most people — and most of the medical community — believe that autism is a genetic brain disorder. I'm here to tell you that neither of these statements is true.
Think about it. Rates of autism have skyrocketed over the years, from an estimated 1 child in 3,000 to just 1 in 59 kids today. Sure, wider criteria for diagnosis and better detection might explain some of it — but not an increase of this magnitude.
The real reason we are seeing increasing rates of autism is simply this: Autism is a systemic body disorder that affects the brain. A toxic environment triggers certain genes in people susceptible to this condition. And research supports this position.
Today I will review some of this research and explain how imbalances in the 7 key systems of the body may be the real cause–and thus the real cure–of autism.
A New Understanding of Autism
Dramatic scientific discoveries have taken place during the last 10 to 20 years that reveal the true causes of autism — and turn conventional thinking on its head. For example, Martha Herbert, MD, a pediatric neurologist from Harvard Medical School has painted a picture of autism that shows how core abnormalities in body systems like immunity, gut function, and detoxification play a central role in causing the behavioral and mood symptoms of autism.
She’s also given us a new way of looking at mental disease (and disease in general) that is based on systems biology. Coming from the halls of the most conservative medical institution in the world, this is a call so loud and clear that it shatters our normal way of looking at things.
Everything is connected, Dr. Herbert says. The fact that these kids have smelly bowel movements, bloated bellies, frequent colds and ear infections, and dry skin is not just a coincidence that has nothing to do with their brain function. It is central to why they are sick in the first place! Yet conventional medicine often ignores this.
My friend and mentor, Sidney Baker, MD — a pioneer in the treatment of autism as a body disorder that affects the brain — often says, “Do you see what you believe or do you believe what you see?”
The problem in medicine is we are so stuck in seeing what we believe that we often ignore what is right in front of us because it doesn’t fit our belief system. Nowhere is this true more than in the treatment of autism.
This is in the front of my mind, because I see so many behavioral symptoms in kids from learning disabilities to attention-deficit hyperactivity disorder (ADHD) and even autism.
And I see the rates of medication use skyrocketing for these kids — from stimulants to anti-psychotics (one of the fastest growing drug categories) to anti-seizure medicine, and more. There is another way … Let me tell you a story about a little boy I saw recently.
Treating autism as a body disorder that affects the brain gives us many treatment choices. Children treated in this way can often have dramatic and remarkable — if not miraculous — recoveries.
Sam’s Case: Autism as a Systemic Disorder
Recently, a mother came to see me, desperate because her 2 1/2 year old son had just been diagnosed with autism.
Her son, Sam, was born bright and happy, was breast-fed, and received the best medical care available (including all the vaccinations he could possibly have). He talked, walked, loved, and played normally — that is, until after his measles, mumps, and rubella vaccination at 22 months.
He received diphtheria, tetanus, whooping cough, measles, mumps and rubella, chicken pox, hepatitis A and B, influenza, pneumonia, hemophilous, and meningitis vaccines — all before he was 2 years old. Then something changed. Vaccines may affect susceptible children through different mechanisms. In some it is overwhelming of an already taxed immune system with over 2 dozen vaccinations at a very young age, for some it is the thimerosal (ethylmercury) used as a preservative until recently in most vaccines (although it is still present in most flu vaccines).
He lost his language abilities and became detached. He was unable to relate in normal ways with his parents and other children. And he became withdrawn, and less interactive. These are all signs of autism.
Sam was taken to the best doctors in New York and “pronounced” as having autism, as if it were a thing you catch like a bug. His parents were told that nothing could be done except arduously painful and barely effective behavioral and occupational therapy techniques. The progress would be slow, and his parents should keep their expectations low, the doctor said. Devastated, the mother began to seek other options and found her way to me.
There is much to undo and peel away, like the layers of an onion. But treating autism as a body disorder that affects the brain gives us SO many other treatment choices. Children treated in this way can often have dramatic and remarkable — if not miraculous — recoveries.
Before I explain how I found the clues that gave me a means to treat Sam, let me remind you that the whole basis of functional and systems medicine is the concept of biochemical individuality.
That means that if you take 100 kids with autism, each one may have unique genetics, and unique causes or triggers for their autism and need very different treatments to get better. Autism is just a label. Like every condition or illness, the key is to dig into the layers and peel the onion to discover what is really happening. It is not usually one thing but a collection of insults, toxins and deficiencies piled on susceptible genetics that leads to biochemical train wrecks we see in these children.
We have to pay close attention to what we see, and be ready to work with the unexpected according to the basic principles of systems biology and medicine (known as functional medicine).
That is what I did for Sam …
When I first saw him, this little boy was deep in the inner wordless world of autism. Watching him was like watching someone on a psychedelic drug trip. So we dug into his biochemistry and genetics and found many things to account for the problems he was having.
He had very high level of antibodies to gluten. He was allergic not only to wheat, but to dairy, eggs, yeast, and soy — about 28 foods in total.
He also had a leaky gut, and his gut was very inflamed. The immune system in his gut showed a high level of inflammation by a marker called eosinophil protein X. He had 3 species of yeast growing in his gut and no growth of healthy bacteria. Urine tests showed very high levels of D-lactate, an indicator of overgrowth of bacteria in the small intestine.
Sam was also deficient in zinc, magnesium, and manganese, vitamins A, B12, and D, and omega-3 fats. Like many children with autism, he had trouble making energy in his cells, or mitochondria.
His amino acids — necessary for normal brain function and detoxification — were depleted. And his blood showed high levels of aluminum and lead, while his hair showed very high levels of antimony and arsenic — signs of a very toxic little boy. His levels of sulfur and glutathione were low, indicating that he just couldn’t muster the power to detox all these metals. In fact, his genes showed a major weak spot in glutathione metabolism, which is the body’s main antioxidant and major detoxification highway for getting rid of metals and pesticides.
Sam also had trouble with a key biochemical function called methylation that is required to make normal neurotransmitters and brain chemicals and is critical for helping the body get rid of toxins. This showed up as low levels of homocysteine (signs of problems with folate metabolism) and high methylmalonic acid (signs of problems with B12 metabolism). He also had two genes that set him up for more problems with this system.
Finally, he also had very high levels of oxidative stress or free radical activity, including markers that told me that his brain was inflamed and under free-radical fire.
This may all seem complicated, but it really isn’t. When I see any patient, I simply work through the 7 keys to UltraWellness (based on functional medicine) to see how everything is connected, create a plan to get to the causes of the problems, and then help each patient deal with all the biochemical and physiological rubble that those causes have left along the road.
After 10 months, Sam’s bowels were back to normal, he was verbally fluent, mainstreamed in school and he “lost” his diagnosis of autism.
Having a roadmap, a new GPS system based on functional medicine and UltraWellness, makes this straightforward. You just take away what’s bothering the patient. Give his body what it is missing and needs to thrive (based on the individual’s biochemical uniqueness). Then the body does the rest.
Here is the roadmap I used to help Sam recover.
Sam’s Roadmap to Recovery: A Model for Treating Autism
Step 1: Fix His Gut and Cool the Inflammation There
This step included a number of different tactics including:
- Taking away gluten and other food allergens
- Getting rid of his yeast with anti-fungals
- Killing off the toxic bacteria in his small intestine with special antibiotics
- Replenishing healthy bacteria with probiotics
- Helping him digest his food with enzymes
Step 2: Replace the Missing Nutrients to Help His Genes Work Better
In Sam’s case we:
- Added back zinc, magnesium, folate, and vitamins A, B6, B12, and D
- Supported his brain with omega-3 fats
Step 3: Detoxify and Reduce Oxidative Stress
Once his biochemistry and nutrition was tuned up, we helped him detoxify and reduce oxidative stress.
As I said before, the keys of UltraWellness can help, no matter what the disease or condition. You see, biology has basic laws, which we have to follow and understand. All the details of Sam’s story fit into these laws. We just have to dig deep, peel back the layers, and understand what is going on. When we do this the results are nothing short of miraculous …
After following a gluten-free diet and treating his gut for 3 weeks, Sam showed dramatic and remarkable improvement. He’s getting back much of his language skills and showing much more connection and relatedness in his interactions.
After 4 months, he was more focused, unstuck and verbal.
After 10 months, his bowels were back to normal, he was verbally fluent, mainstreamed in school and he “lost” his diagnosis of autism.
After 2 years all his abnormal tests were normal including the high metals, gut inflammation and damage to his mitochondria and free radicals.
And more importantly, the child was totally normal. Not every child has such a dramatic recovery but many improve, and some improve dramatically using the approach of functional or systems medicine.
Every child with behavior problems, ADHD, or autism is unique — and each has to find his or her own path with a trained doctor. But the gates are open and the wide road of healing is in front of you. You simply have to take the first step.
Please visit the Defeat Autism Now website for more information on this subject, including resources and conferences for doctors and parents.
Now I’d like to hear from you…
Are you raising a child with autism?
How is he or she being treated?
Have you tried any of the approaches here? How have they helped?
Please leave your thoughts by adding a comment below — but remember, we can’t offer personal medical advice online, so be sure to limit your comments to those about taking back our health!
To your good health,
Mark Hyman, MD
Because of the interest in this topic and controversies surrounding it, I am posting all the references for the issues talked about in the article.
1. Curtis TR, ed. The London Encyclopedia. London: Griffi n and Co; 1839.
2. James SJ, Melnyk S, Jernigan S, et al. Metabolic endophenotype and related genotypes are associated with oxidative stress in children with autism. Am J Med Genet B Neuropsychiatr Genet. 2006;141B(8):947-956.
3. Williams TA, Mars AE, Buyske SG, et al. Risk of autistic disorder in affected offspring of mothers with a glutathione S-transferase P1 haplotype. Arch Pediatr Adolesc Med. 2007;161(4):356-361.
4. Reddy MN. Reference ranges for total homocysteine in children. Clin Chim Acta. 1997;262(1-2):153-155.
5. James SJ, Cutler P, Melnyk S, et al. Metabolic biomarkers of increased oxidative stress and impaired methylation capacity in children with autism. Am J Clin Nutr. 2004;80(6):1611-1617.
6. Bull G, Shattock P, Whiteley P, et al. Indolyl-3-acryloylglycine (IAG) is a putative diagnostic urinary marker for autism spectrum disorders. Med Sci Monit. 2003;9(10):CR422-CR425.
7. Wright B, Brzozowski AM, Calvert E, et al. Is the presence of urinary indolyl-3-acryloylglycine associated with autism spectrum disorder? Dev Med Child Neurol. 2005;47(3):190-192.
8. Amminger GP, Berger GE, Schäfer MR, Klier C, Friedrich MH, Feucht M. Omega-3 fatty acids supplementation in children with autism: a double-blind randomized, placebo- controlled pilot study. Biol Psychiatry. 2007;61(4):551-553.
9. Johnson SM, Hollander E. Evidence that eicosapentaenoic acid is effective in treating autism. J Clin Psychiatry. 2003;64(7):848-849.
10. Poling JS, Frye RE, Shoffner J, Zimmerman AW. Developmental regression and mitochondrial dysfunction in a child with autism. J Child Neurol. 2006;21(2):170-172.
11. Herbert MR. Autism: A brain disorder or a disorder of the brain? Clin Neuropsychiatry. 2005;2(6):354-379.
12. Herbert MR. Large brains in autism: the challenge of pervasive abnormality. Neuroscientist. 2005;11(5):417-440.
13. Vargas DL, Nascimbene C, Krishnan C, Zimmerman AW, Pardo CA. Neuroglial activation and neuroinflammation in the brain of patients with autism. Ann Neurol. 2005;57(1):67-81. Erratum in: Ann Neurol. 2005 Feb;57(2):304.
14. Wakefi eld AJ, Ashwood P, Limb K, Anthony A. The signifi cance of ileo-colonic lymphoid nodular hyperplasia in children with autistic spectrum disorder. Eur J Gastroenterol Hepatol. 2005;17(8):827-836.
15. Millward C, Ferriter M, Calver S, Connell-Jones G. Gluten- and casein-free diets for autistic spectrum disorder. Cochrane Database Syst Rev. 2004;(2):CD003498.
16. Uhlmann V, Martin CM, Sheils O, et al. Potential viral pathogenic mechanism for new variant infl ammatory bowel disease. Mol Pathol. 2002;55(2):84-90.
17. Kawashima H, Mori T, Kashiwagi Y, Takekuma K, Hoshika A, Wakefi eld A. Detection and sequencing of measles virus from peripheral mononuclear cells from patients with inflammatory bowel disease and autism. Dig Dis Sci. 2000;45(4):723-729.
18. Hornig M, Briese T, Buie T, et al. Lack of association between measles virus vaccine and autism with enteropathy: a case-control study. PLoS ONE. 2008;3(9):e3140.
19. Bradstreet JJ, El Dahr J, Anthony A, Kartzinel JJ, Wakefi eld AJ. Detection of measles virus genomic RNA in cerebrospinal fl uid of children with regressive autism: a report of three cases. J Am Phys Surgeons. 2004;9(2):38-45.
20. Taylor B, Miller E, Farrington CP, et al. Autism and measles, mumps, and rubella vaccine: no epidemiological evidence for a causal association. Lancet. 1999;353(9169):2026-2029.
21. Williams R. Biochemical Individuality, New York: McGraw Hill; 1998.
22. Autism Research Initiative. Treatment Options for Mercury/metal Toxicity in Autism and Related Developmental Disabilities: Consensus Position Paper. San Diego, CA: Autism Research Initiative; 2005. Available at: http://www.autism.com heavymetals.pdf. Accessed September 17, 2008.
23. Holmes AS, Blaxill MF, Haley BE. Reduced levels of mercury in fi rst baby haircuts of autistic children. Int J Toxicol. 2003;22(4):277-285.
24. Adams JB, Romdalvik J, Ramanujam VM, Legator MS. Mercury, lead, and zinc in baby teeth of children with autism versus controls. J Toxicol Environ Health A. 2007;70(12):1046-1051.
25. Thompson WW, Price C, Goodson B, et al. Early thimerosal exposure and neuropsychological outcomes at 7 to 10 years. N Engl J Med. 2007;357(13):1281-1292.
26. Geier DA, Geier MR. A prospective study of mercury toxicity biomarkers in autistic spectrum disorders. J Toxicol Environ Health A. 2007;70(20):1723-1730.
27. Echeverria D, Woods JS, Heyer NJ, et al. The association between a genetic polymorphism of coproporphyrinogen oxidase, dental mercury exposure and neurobehavioral response in humans. Neurotoxicol Teratol. 2006;28(1):39-48
28. Heyer NJ, Echeverria D, Bittner AC Jr, Farin FM, Garabedian CC, Woods JS. Chronic low-level mercury exposure, BDNF polymorphism, and associations with self-reported symptoms and mood. Toxicol Sci. 2004;81(2):354-363. Epub 2004 Jul 14.
29. Echeverria D, Woods JS, Heyer NJ, et al. Chronic low-level mercury exposure, BDNF polymorphism, and associations with cognitive and motor function. Neurotoxicol Teratol. 2005;27(6):781-796. | <urn:uuid:cd56ace5-a1cf-41bd-958a-677dfd6a0433> | CC-MAIN-2022-33 | https://drhyman.com/blog/2010/05/19/autism/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573118.26/warc/CC-MAIN-20220817213446-20220818003446-00697.warc.gz | en | 0.933585 | 4,166 | 2.828125 | 3 |
Native to Asian countries like Indonesia, 印度, Pakistan, even areas of East 非洲, clove is a 香料 that offers many health benefits. 这些 benefits include aiding in digestion, boosting the immune system, controlling 糖尿病. 丁香还含有抗突变和抗微生物的特性 which may even 有助于防治口腔疾病和头痛.
What are 丁香s?
丁香 is a popular 香料 used in a variety of ways across the world, particularly in Asia. It 形成了许多不同的亚洲菜系的烹饪基础.
丁香是干了的花蕾 from the tree Syzygium aromaticum. It belongs to the plant family named 桃金娘科. It is an evergreen plant growing in tropical and subtropical conditions.
丁香是一种草本植物,人们用它的不同部位, including the dried buds, 茎, leaves to make medicine. 丁香油 因其古老的药用特性而闻名.
Just like many other 香料s originating in Asia, clove has a great history behind it. During the 13th and 14th centuries, 它们是从印尼一路运到中国的, 印度, 波斯, 非洲, 和欧洲.
荷兰 殖民 马鲁古群岛的历史. 今天,丁香在世界各地都是一种非常重要的经济作物.
丁香 has been used for thousands of years in 印度 and China not only as a 香料 but also as a medicine for many ailments.
- Ayurvedic medicine used cloves for tooth decay and halitosis (bad breath).
- In Chinese medicine, clove was considered to possess aphrodisiac properties.
- 磨碎的丁香传统上适用于小切口 疗愈 目的.
- 丁香油 is thought to help relieve headaches, flatulence, as well as reduce stretch marks.
- 它也被广泛用作杀虫剂和驱虫剂. 只要往水中滴几滴,就能看到它们消失!
|Serving Size :|
|Total lipid (fat) [g]||13|
|Carbohydrate, by difference [g]||65.53|
|Fiber, total dietary [g]||33.9|
|Sugars, total including NLEA [g]||2.38|
|Glucose (dextrose) [g]||1.14|
|Magnesium, Mg [mg]||259|
|Phosphorus, P [mg]||104|
|Manganese, Mn [mg]||60.13|
|Selenium, Se [µg]||7.2|
|Vitamin C, total ascorbic acid [mg]||0.2|
|Pantothenic acid [mg]||0.51|
|维生素b - 6 [mg]||0.39|
|Folate, total [µg]||25|
|Choline, total [mg]||37.4|
|Vitamin A, RAE [µg]||8|
|Carotene, beta [µg]||45|
|Cryptoxanthin, beta [µg]||103|
|Vitamin A, IU [IU]||160|
|Vitamin E (alpha-tocopherol) [mg]||8.82|
|Vitamin K (phylloquinone) [µg]||141.8|
|Fatty acids, total saturated [g]||3.95|
|Fatty acids, total monounsaturated [g]||1.39|
|16:1 c [g]||0.03|
|18:1 c [g]||0.78|
|22:1 c [g]||0.02|
|Fatty acids, total polyunsaturated [g]||3.61|
|18:2 n-6 c, c [g]||2.56|
|18:3 n-3 c,c,c (ALA) [g]||0.59|
|20:2 n-6 c, c [g]||0.02|
|20:3 n-6 [g]||0.01|
|20:5 n-3 (EPA) [g]||0.01|
|22:5 n-3 (DPA) [g]||0.18|
|Fatty acids, total trans [g]||0.25|
|Fatty acids, total trans-monoenoic [g]||0.21|
|18:1 t [g]||0.21|
|18:2 t not further defined [g]||0.04|
|Fatty acids, total trans-polyenoic [g]||0.04|
|Aspartic acid [g]||0.6|
|Glutamic acid [g]||0.56|
|Sources include : USDA|
丁香s Nutrition Facts
According to the USDA 菠菜导航网Data Centralcarbohydrates, 蛋白质, energy, dietary 纤维. 矿物质 in cloves include 钾, 钙, 钠, 镁. The vitamins found in them include 维生素E, folate, 烟酸. They also contain 磷, 铁, 锌, 维生素C硫胺, 核黄素, 维他命A 和K. 考虑到这种香料在许多菜肴中使用的数量很少, while they contain many nutrients, 人们可能不会大量获得它们.,丁香中的营养成分包括
Bioactive Substances in 丁香s
According to a University of Texas at Austin research study, certain bioactive 化合物 isolated from clove 提取 include flavonoids, 己烷, methylene chloride, 乙醇, 百里酚, 丁香酚, 和苯. 这些化学物质已被报道具有抗氧化剂, hepatoprotective, anti-microbial, anti-inflammatory properties.
Health 好处 的丁香
Packed with nutrients and bioactive 化合物, it is no wonder that even a small amount of cloves have some interesting health benefits to offer. 让菠菜导航网600来看看它们的菠菜导航网600益处.
丁香s are known to have been used in several traditio部分 medici部分 cultures as a way to help with stomach issues. According to the 书 ‘治愈草药:对菠菜导航网600有益的自然疗法’, cloves have been used to boost digestion and control gastrointesti部分 irritation. Furthermore, ingestion of fried cloves may even stop vomiting, owing to their anesthetic properties. It can also be an effective agent 对 可用于溃疡和泻药.
丁香s are touted by many for their antibacterial properties 对 several human pathogens. The 提取 of cloves were thought to be potent enough to kill those pathogens.
丁香含有大量的抗氧化剂, which may prove to be ideal for protecting the vital organs from the effects of free radicals, especially the liver. 新陈代谢, in the long run, 增加自由基的产生和脂质结构, 同时减少肝脏中的抗氧化剂. In such cases, clove 提取 may prove to be a helpful component 用它的肝保护特性来抵消这些影响.
Might Assist in Diabetes Management
Extracts from cloves imitate insulin in certain ways and might help in controlling blood 糖 水平. One study published in the Jour部分 of Ethnopharmacology found that cloves may have a beneficial effect on 糖尿病 as part of a plant-based diet.
Can Help in Bone Preservation
这种香料的水醇提取物包括酚类化合物, such as 丁香酚, its derivatives, such as flavones, 异黄酮, flavonoids. Studies have suggested that these 提取 may be helpful in preserving bone density and the mineral content of bone, 以及增加骨骼的抗拉强度以防 骨质疏松症. 需要更多的研究来证实这些发现的有效性.
Can Work As An Immunity Booster
Ayurveda describes certain plants to be effective in developing and protecting the immune system. One such plant is clove. The dried flower bud of clove contains 化合物 that can help in improving the immune system by increasing the white blood cell count, 从而, 改善dth.
这种香料具有消炎和止痛的作用. Studies on clove 提取 administered to lab rats suggest that the presence of 丁香酚 reduced the inflammation caused by 水肿. It was also confirmed that 丁香酚 can reduce pain by stimulating pain receptors.
Might Help Restore Oral Health
丁香可以用来减少 gum diseases 例如牙龈炎和牙周炎. 丁香 bud 提取 have the potential to 显著控制口腔病原菌生长, 哪些是导致各种口腔疾病的原因, as per a study published in the Jour部分 of Natural Products. They can also be used for toothaches due to their pain-killing properties.
Since ancient times, 香料s such as clove and 肉豆蔻 have been said to possess aphrodisiac properties, according to Unani medicine. Experiments on clove and 肉豆蔻 提取 were tested 对 standard drugs administered for that reason, 丁香和肉豆蔻都有阳性结果.
Might Cure Headaches
Where to Buy 丁香s?
You can buy good quality whole cloves online or at your local supermarket. They are best stored in tiny 香料 glasses or steel containers in a cool, 干燥的地方,可以保存数月. 新鲜的丁香味道浓郁,所以一定要少用.
丁香中的成分可以 prove harmful if used in excess or undiluted. Here are some of the derivatives of cloves that should be used responsibly.
- 丁香 Essential Oil: 丁香 essential oil must not be used directly. 相反, dilute it either in olive oil or in distilled water. 一般认为丁香精油是安全的, 但一项研究表明,它们具有细胞毒性.
- 丁香 Cigarettes: 在印尼, 丁香以香烟的形式被大量消费, popularly known as kreteks. 这些 clove cigarettes have emerged as an alternative to tobacco cigarettes, but research shows that 丁香香烟比传统香烟更不利于菠菜导航网600. 就丁香香烟而言,其含量 尼古丁, carbon monoxide, tar entering into the lungs was higher than that from normal tobacco cigarettes. | <urn:uuid:e5bb8cb8-c863-4f56-9d3c-c64884c28f0a> | CC-MAIN-2022-33 | http://www.ttasuperstores.com/health-benefits/herbs-and-spices/health-benefits-of-cloves.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00296.warc.gz | en | 0.675219 | 4,555 | 2.890625 | 3 |
photo: ®Uplift, BRAC program
About the authors: Anne H. Hastings is a Global Advocate for Uplift and was the Director of Fonkoze when she brought the graduation program to that institution. Steven Werlin is the Communications and Learning Officer for Fonkoze’s graduation program, Chemen Lavi Miyò (the Pathway to a Better Life).
Eliminating poverty stands as the first of the 2030 Sustainable Development Goals (SDGs) that 193 countries voluntarily committed to achieving. A growing movement is arguing that to achieve this goal, we must immediately and urgently focus our anti-poverty efforts on the poorest of the poor because they are the most difficult to reach and have the toughest time making their way out of poverty.
They face barriers to changing their lives beyond those faced by most of the poor: they often have many children who are not in school, go days at a time with little to eat, and have few or no productive assets that would help them make a living – like livestock, tools or land. They often have no hope of a job because they have neither job skills nor literacy and there are few, if any, jobs to be had.
Yet evidence suggests that the world is making incredible progress in reducing poverty. Poverty and even extreme poverty have declined markedly in past decades, according to the World Bank. (1) Recent estimates suggest that, in 2013, 10.7 percent of the world’s population lived on less than US$1.90 a day (the prevailing threshold of extreme poverty) down from 35 percent in 1990 and 42 percent in 1981.(2) In all likelihood, it has continued to fall. Homi Kharas of the Brookings Institution calculates that someone emerges from extreme poverty every 1.2 seconds.(3)
The Rationale for “Targeting”
So why do we need to focus on the poorest of the poor? According to The Economist, the struggle to eliminate poverty is about to get much harder, very quickly.(4) The main reason poverty has decreased so rapidly over the past two decades is the tremendous progress made in China, Indonesia, and India. Most of the poorest people now live in Sub-Saharan Africa and South Asia, regions that have endured many more failures than successes. The Economist has argued:
“With more destitute inhabitants than any other region, sub-Saharan Africa now drives the global poverty rate.”(5)
Economic growth is weak, governments are fragile, conflict is rampant, welfare systems are often non-existent and, perhaps most importantly, both the proportion and the intensity of poverty are greater.
It would be a mistake, therefore, to assume that a rising tide will lift all boats: that just because poverty has been decreasing, it will easily be eliminated. We can ill afford such optimism. Overall improvements say nothing necessarily about what is happening at the very end of the spectrum. Reducing poverty generally and helping the poorest and most excluded – those we describe as living in “ultra-poverty”(6) – are distinct undertakings. Many anti-poverty programs do not even reach people living in ultra-poverty because they are invisible.
Their lives are undocumented. They lack birth certificates and do not appear in government records. Their neighbors ignore them, and they themselves withdraw from the communities surrounding them due to shame, fatigue, and fear of rejection. In Haiti, as in many Sub-Saharan African countries, for example, overall levels of extreme poverty have declined while the rate in rural areas has remained unchanged.(7) There is a need, then, for specific interventions that can support those struggling to leave ultra-poverty.
If meeting the first SDG requires that we reach those households in conditions of ultra-poverty, how do we do that? Any remedy must have specific and transparent strategies with which to identify these households and must be held accountable for doing so.
The Problem with “Targeting”
Graduation programs, many of which are based on a model originally developed by BRAC in Bangladesh, identify households in ultra-poverty by first recognizing the poorest regions in a country and then the poorest households within those regions. Some conditional and unconditional cash transfer schemes attempt to accomplish the same thing in different ways.
Important as it is, any such strategy, usually referred to as “targeting”, is controversial, however, drawing criticism that generally falls under one of three themes:
Thinking of people as “targets” conceives of them as passive recipients rather than “actors” who, with assistance, can make their way out of poverty. It demeans them as incapable of their own agency.(8)
- Targeting is part and parcel of a neoliberal concept of social policy that prioritizes low taxation and limited social spending and therefore favors targeting people in ultra-poverty as a means of reducing costs. In other words, it “rations” benefits to the needy when it should be ensuring entitlements to all citizens.(9) This is the rights-based approach now taken by the International Labour Organization and most United Nations agencies.
- Targeting mechanisms in general are inaccurate, expensive, and engender conflict within communities.(10)
Each of these criticisms deserves careful attention. The first may seem to address specifically the use of the word “targeting,” but we think it is bigger than that. Any process by which an organization or a government takes responsibility for choosing those it will serve subjects those chosen to its selection process. Assisting people living in ultra-poverty means facing, together with them, their isolation, their lack of confidence, skills and opportunities, as well as their lack of hope that tomorrow will be better.
How they initially engage in a program matters less than where that engagement leads. Graduation programs, as opposed to older forms of aid, aim specifically to help their participants take on long-term responsibility for managing their own livelihoods and lives. Their agency grows with participation and completion of the graduation program.
With respect to the second criticism, we agree that countries should ideally distribute public resources in ways that benefit everyone, at least in countries with a functioning set of social protection systems, such as affordable healthcare, pension systems, and other such supports. But elsewhere it seems appropriate to focus investments, especially where the lack of resources entails severe consequences, like hunger. In addition, households have different needs as they move along the poverty spectrum. Making the same entitlements available for everyone does not mean that all will have the capacity to use those entitlements effectively.
Equal and equitable are not necessarily the same. And where resources are most scarce, equal distribution can leave too little available for the neediest. For instance, among the 14 countries with the highest burden of ultra-poverty, the revenue per capita varies from US$1.85 in Ethiopia to US$781.00 in India. (11)
The third criticism requires a more detailed response, which we provide shortly. But in any case, we set out from the fundamental assumption that eliminating ultra-poverty with whatever resources are likely to be available will require accurate identification of those who need the closest accompaniment and the biggest “push” in order to change their lives. We cannot afford the luxury of the pessimism expressed by Nicholas Freeland, who calls such an assumption “delusional.”(12) So, whether one speaks of “targeting” or “rationing” or, as we tend to say in Fonkoze’s program in Haiti, “selection,” finding a good way to reach the very poorest remains a fundamental part of the struggle to end poverty.
Selection Errors and Proxy Means Testing
We acknowledge that accurate identification of households viewed as being in ultra-poverty is difficult. Both our own experience in Haiti and relevant studies tell us that much. As Dean Karlan and Bram Thuysbaert explain, it is hard both to establish the right criteria for selection (given the multi-dimensionality of ultra-poverty) and to identify the households that meet the chosen criteria. Measuring income is perhaps most difficult given that those in ultra-poverty earn whatever income they have from informal sources and are often paid in-kind. Moreover, potential recipients may not wish to give fully candid answers to questions about their livelihoods.(13) In our experience in Haiti, some people tried to hide their poverty, often out of shame, and others exaggerated their poverty, hoping to qualify for benefits.
Strategies to identify the poorest can run afoul of errors of inclusion and of exclusion. The former occur when households are identified incorrectly as qualifying for services, and the latter when a selection process misses households that do qualify. Including those who do not need the program can substantially increase program costs, and excluding those who do need the program guarantees a program’s failure from the start.
In Haiti, we worry about both sorts of errors. But rather than starting from the assumption that accurate selection is impossible, we look to the more promising methods of finding those families truly mired in ultra-poverty. There are clear distinctions among the various methods that have been or could be employed, and those differences hold varying levels of promise.
Many critics reserve particular ire for proxy means testing (PMT), a method that identifies a relatively small number of markers to calculate the likelihood that a household belongs to a particular wealth category. Kidd and his colleagues describe the approach in some detail:
Conventional means tests assess eligibility for social assistance schemes by verifying whether an individual’s or household’s actual financial resources fall below a predetermined threshold. The PMT methodology, on the other hand, tries to predict a household’s level of welfare using a statistical model. It was developed to address the concern that undertaking a conventional means test based on measuring incomes would be difficult in developing countries, since only a small proportion of the population are in the formal economy, meaning that governments cannot easily obtain information on their incomes. (14)
Characteristics like quality of housing, educational attainment, or location are entered into an algorithm that weights each factor differently. Together they serve as indicators, or proxies, of a household’s total means.
Kidd shares the results of a series of studies of its efficacy. Kidd focuses – to our mind correctly – only on errors of exclusion. While wasting money on services that are not really necessary is undesirable, failing to reach out to families who need those services seems, to us, far worse. If the studies he cites are typical, then exclusion is a regular feature of government programs that depend on PMTs. He mentions error rates as low as 56 percent in Cambodia and as high as 93 percent in Indonesia. If PMTs are missing more than half of the poorest families in the best case, then one has to wonder why they are being used at all.
A Better Selection Process
PMTs are not the only way to target. Fonkoze’s approach in Haiti – like those used by many who have copied or adapted the BRAC graduation program – combines a method of gathering information about community members, via an open public meeting, with a two-step verification that uses program-specific inclusion and exclusion criteria.
Fonkoze does not rely simply on consideration of income, nor an analysis of consumption, to define who is “poorest”. It goes beyond these two measures to look at access to quality health and education, as well as clean water and sanitation. Our approach rests on an understanding that poverty has many dimensions.
This approach is not perfect but it is the best we have seen partly because it allows consideration of a range of the various dimensions of poverty. Our own small study of the process, undertaken by the Institute of Development Studies (IDS), is underway, but preliminary indications are that it selects families who are, on average, significantly poorer than those selected by the PMT most commonly used in Haiti.(15)
Karlan and Thuysbaert’s study of a similar approach – they refer to it as the TUP (as “Targeting the Ultra-Poor” was the name of the original BRAC program) – in Honduras and Peru showed mixed results, but in the end they were able to conclude that:
Overall, the comparison unveils three insights into the TUP selection process. First, when judged using five different poverty metrics, the TUP process typically performs better than random selection. Second, the TUP process, compared to PPI (Progress out of Poverty Index)(16) and the Housing index, leads to selecting households with less land and less valuable livestock. Third, the pattern demonstrates that the TUP process performs best for measures that are easily observable to the community; i.e., the TUP process leads to selection (based) on assets, and less so on consumption or education. (17)
Identifying families in ultra-poverty through a combination of community participation and careful verification against fixed criteria has two very distinct advantages: 1) it ensures that we select those households that are most in need and can most benefit from the program, and 2) it builds a sense of ownership and buy-in within the community. (18)
The process begins with geographical targeting to identify the poorest districts within the country using statistical data from the World Bank, the World Food Program, or other analyses of vulnerability and poverty within the country. This is supplemented by discussions with stakeholders, such as local governments and perhaps microfinance institutions, in those regions.
The next step is taken within the communities themselves, which typically consist of about 50-80 households. If the community is larger, we have to divide it into two given the process depends on participants’ accurate knowledge of their neighbors; this knowledge is less accurate as the number of households increases. We use an exercise called “Participatory Wealth Ranking” (PWR). First, we visit a community informally, looking for local leaders who are willing and able to publicize a meeting and motivate as many of their neighbors as possible to attend. We provide written invitations that he or she can distribute.
On the day of the meeting, participants are asked to draw a map of the community, which they often trace in the dirt with a stick. They identify all the community’s landmarks and place a numbered marker for each household. During this activity, we have a staff member putting each family’s name on an index card. Our staff members also identify participants who seem knowledgeable and respected. We generally try to pick out five or six.
While one of our staff members copies the map, another offers refreshments to the participants. Meanwhile, the third draws the five or six aside for a second activity. In that smaller group, we take the first two index cards, and ask which family is wealthier, putting the cards in separate piles. We then compare the third and then the fourth card, and then the rest, one at a time, organizing the cards into about five different piles. When this is finished, we ask the participants to analyze each pile and to identify the traits that all the families in the pile share. If there are families that seem out of place, they can be moved into the correct pile at this time.
Finally, staff members visit each and every household in the lowest two categories in order to assess their eligibility for the program according to a series of program-specific inclusion and exclusion criteria. We use multiple, sometimes-overlapping criteria as doing so gives us the best chance of finding simple and verifiable ways to evaluate a family. Staff members also use the map produced at the PWR meeting to identify any households that the PWR might have missed. We have found that some of the poorest households go unmentioned at these meetings. These families are invisible, even to their closest neighbors.
Photo: Fonkoze Program
The criteria we use include the following:
- Food insecurity with hunger, such that a household regularly goes days at a time without even a single cooked meal
- A lack of productive assets, like livestock, land under cultivation, or business capital
- The presence in the household of a woman who has dependents
- A dependence on income from begging or day labor
- School-age children who are not in school
- A lack of external support from another family member or another organization
Staff present to management a list of the households they believe are eligible for the program and a supervisor again visits those households to verify the information they have been given. This verification visit is as important as the first visit, and often new information is discovered.
We make the criteria as clear and as verifiable as possible, both to facilitate eventual community buy-in and to minimize errors. The IDS study of our selection process initially showed apparent inclusion errors regarding 8 percent of the families we selected, but further investigation of all these cases showed that selection was probably appropriate. We found, rather, that the PMT tool used for comparison had weighted factors that distorted the households’ circumstances.(19) We have less evidence concerning possible exclusion errors but given we are able, in principle, to work in an area more than once, we can use the period of our initial intervention to find any additional families that we might have left behind.
In summary, the process does a reasonably good job of enabling us to exclude both those who do not qualify based on their relative wealth and those whom our program cannot help due to their age or infirmities. It creates what we believe to be the ideal situation, where a program for qualified families in ultra-poverty is one piece of a comprehensive social protection system that offers appropriate support to all those who need it.
For example, those whom we cannot help because of their age or disability could be helped by a government pension system. Those, however, who do not qualify (perhaps because they are supported by a family member living in the U.S. and so have sufficient food and their children are in school) might be better assisted by a microfinance institution that could teach them how they can best invest the money they receive.
We in Haiti are frustrated at our inability to find alternative services for those who do not qualify for our work. On the one hand, the government does not currently have a safety net for the aged or the severely disabled. On the other hand, even pro-poor microfinance institutions are often unwilling to accept families who are too well off to need graduation; the small loan sizes or remoteness involved can make them too expensive to serve.
Of course, there can be barriers to any selection process. For instance, the households can be so far apart that it is virtually impossible to identify anything that could be defined as a community. In Haiti, the rural population often is not really organized into distinct villages, so the divisions we draw in our selection units can be arbitrary. Sometimes the politics of the community might make it impossible to assemble a group that represents all the differing factions. Only a very few of the communities we have tried to work in have refused to cooperate.
What matters most is adherence to three core principles of the process for identifying families that need services designed for people living in ultra-poverty:
- Consider local dynamics. Walk through the entire community to identify the most marginalized, and get a physical ‘lay of the land’. This identifies community assets, enterprises, and families on the physical and social periphery.
- Engage the community. Involve community members in the process of identifying various levels of community well-being in ways that go beyond involving one or two key informants who might select only their friends and families.
- Verify your information. Cross-reference insights from the community through simple surveys or verification processes.
It is possible to apply these principles even in countries with established national registries and databases that gauge levels of poverty as a basis for anti-poverty programming. For instance, one can leverage these national databases and conduct activities that build community engagement and support, while ensuring that needy households are not omitted. A simple household verification survey can ensure that database information is accurate before including households in these comprehensive graduation approaches.
Is this approach sufficiently cost-effective, scalable, and viable for governments?
The most salient questions about the method we have described are its expense, its scalability, and whether governments can implement it with their own personnel. We do not yet have an example of this approach covering an entire country, or of a government trying it on its own.
But we do have some indications about cost. Some argue that the process is too complicated and staff-intensive to be cost efficient. Karlan and Thuysbaert suggest that in the countries they studied, it costs about US$7.00 per household, just slightly more than using the Progress Out of Poverty Index, a form of PMT.(20)
However, this targeting method allows for many other benefits, such as community acceptance of the decisions and a much deeper knowledge of the community. Ultimately, of course, the cost of the selection methodology depends on the cost-benefit of the intervention for which the selection is being made. While not the topic of this paper, most randomized controlled trials of graduation programs have typically demonstrated their positive cost-benefit.(21)
We also have some positive indications about its scalability. BRAC is assisting the Government of Lesotho in implementing a graduation program, and the Ministry of Social Development has ambitions to cover the entire country using this selection method. Similarly, BRAC is advising the governments of Kenya and the Philippines in applying these programs. According to Aude de Montesquiou and Syed Hashemi, there are some 57 graduation programs underway in nearly 40 countries, of which one-third are led by national governments. (22)
We will not really know whether effective targeting processes can be scaled across whole countries until we try. But we can already draw four important conclusions:
We must immediately and urgently focus on people living in ultra-poverty if we expect to meet the first 2030 SDG.
To do so, we must have a selection process that can transparently and accurately target those households most in need of support. We are in favor of any process that transparently and effectively identifies the poorest of the poor and the graduation selection process developed by BRAC, and used by many others, is the best we know of.
As a community, we must face and overcome the challenge of developing clear, comparable selection criteria, despite what might be dramatic differences in cultural and economic contexts.
Finally, we must hold implementers – whether civil society organizations or governments – accountable. They must be able to show that they are, in fact, reaching families living in ultra-poverty and accompanying them as they strive to make their way out of this inhumane form of poverty.
- World Bank Poverty Overview.
- “Fewer, but still with us,” The Economist, March 30, 2017.
- We define a household in ultra-poverty as one that has a weighted rate of deprivation of 60 percent or more on the Multidimensional Poverty Index developed by the Oxford Poverty and Human Development Institute. Please see more about our methodology [here.]((https://www.ultra-poverty.org/our-methodology/)
- “Living Conditions in Haiti’s Capital Improve, but Rural Communities Remain Very Poor,” World Bank, July 11, 2014, (Link)
- Amartya Sen, “The Political Economy of Targeting,” in Public spending and the poor: Theory and evidence, edited by Dominique Van De Walle and Kimberly Nead (Washington, D.C.: World Bank, 1995), 11-24.
- Stephen Kidd, Bjorn Gelders, and Diloá Bailey-Athias, “Exclusion by design: An assessment of the effectiveness of the proxy means test poverty targeting mechanism.” International Labour Office, (Switzerland: International Labour Organization, 2017). (Link)
- The Development Pathways blog. “Rationing, not targeting,” blog entry by Nicholas Freeland, April 11, 2017. (Link)
- World Bank.
- The Development Pathways blog. “Rationing, not targeting,” blog entry by Nicholas Freeland, April 11, 2017. (Link)
- Dean Karlan and Bram Thuysbaert, “Targeting Ultra-Poor Households in Honduras and Peru,” December 2015. (Link)
- Stephen Kidd, Bjorn Gelders, and Diloa Bailey-Athias, “Exclusion by design: An assessment of the effectiveness of the proxy means test poverty targeting mechanism,” ESS Working Paper #56, (Switzerland: International Labour Office, 2017, 1) (Link)
- Martin Greeley, Liam Kennedy, and Alexandra Stanciu, “Who are the ultra-poor: A Haitian study,” Institute for Development Studies working paper, 2017.
- The Progress Out of Poverty Index, one example of a PMT. The name was recently changed to Poverty Index. For more information, see povertyindex.org.
- Karlan and Thuysbaert, p. 18.
- Vivi Alatas, Abhijit Banerjee, Rema Hanna, Benjamin A. Olken, and Julia Tobias, “Targeting the Poor: Evidence from a Field Experiment in Indonesia,” American Economic Review, 102, no. 4 (2012): 1225, Link
- Greeley, Kennedy, and Stanciu, op. cit.
- Karlan and Thuysbaert, p. 33.
- See, for example, Nathanael Goldberg, “What We Know About Graduation Impacts and What We Need to Find Out”, Policy in Focus, 14, no. 2 (July 2017), 36-39. Link
- See the Insight on this website by Aude de Montesquiou and Syed M. Hashemi, entitled “The Graduation Approach Within Social Protection: Opportunities for Going to Scale.” | <urn:uuid:14e64988-04fa-4583-be97-8eb52e0f17c2> | CC-MAIN-2022-33 | https://www.ultra-poverty.org/blog-post/let-s-urgently-seek-out-people-living-in-ultra-poverty-and-focus-on-them-now/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00497.warc.gz | en | 0.956861 | 5,410 | 3.234375 | 3 |
With the failure of forty years of dietary guidelines to arrest the rise of diabetes and obesity, new thinking and new approaches are needed. Applying an engineering mindset to nutrition has attracted attention as some of that new thinking has emerged through root cause analysis and other engineering tools. This has produced new insights for the medical and nutrition communities.
This is not really new. I pay homage to doctors like Dr Bernstein, who trained as an engineer first and, once he became a doctor, realised that controlling diabetes is essentially an engineering control problem.
Recently, however, as a recovering type 2 diabetic, I plotted my HbA1c against the results of a long-term vegan ‘cure’ for diabetes study to see how it compared. I was astounded by the superior result and tweeted that it was a fifteen sigma improvement. While not really correct, it got me thinking of my recovery in terms of engineering control theory and quality management.
Putting aside whether a cure is possible (for type 2 diabetes) and considering treatment instead, what if we viewed diabetes as an engineering control problem and applied control charting to understand the quality of different management options? Note that while I have type 2 diabetes, the glycaemic control problem is common to type 1, so much of this analysis is relevant there too.
Broken Control System
Glucose comes from sugar and other carbohydrates (carbs) like the starch in bread, rice and pasta. Your body uses about 130g of glucose a day (about 33 teaspoons). Normally, there is no more than about one teaspoon of glucose in your blood at any one time. Put simply, if there is not enough glucose in your blood, you can black out or die because your vital organs cannot function. As your muscles, brain and other organs consume glucose as fuel, your liver, pancreas and digestive system release hormones, including insulin, to regulate glucose to a tightly controlled level. That magic number is normally about 5.6 mmol/L (or 100 mg/dL, depending upon the units you use).
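To make those numbers concrete, here is a minimal Python sketch (my illustration, not part of the original article) converting between the two common glucose units and sanity-checking the "one teaspoon" figure. The blood volume and teaspoon weight used are assumptions.

```python
# Rough arithmetic behind "about one teaspoon of glucose in your blood".
# Assumed values (not from the article): ~5 L of adult blood volume,
# glucose molar mass ~180 g/mol, one teaspoon of sugar ~4 g.

GLUCOSE_MG_PER_MMOL = 180.16  # so 1 mmol/L is about 18 mg/dL

def mmol_l_to_mg_dl(mmol_l):
    """Convert blood glucose from mmol/L to mg/dL."""
    return mmol_l * GLUCOSE_MG_PER_MMOL / 10.0

normal_bg_mmol_l = 5.6
print(round(mmol_l_to_mg_dl(normal_bg_mmol_l)))  # ~101 mg/dL, i.e. "about 100"

blood_volume_l = 5.0  # assumption
grams_of_glucose = normal_bg_mmol_l * GLUCOSE_MG_PER_MMOL * blood_volume_l / 1000.0
print(round(grams_of_glucose, 1))  # ~5.0 g, roughly one teaspoon of sugar
```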
You might wonder what happens if you don't eat any carbohydrate. Fortunately, probably as a result of adaptation, the body is fine, as it can make the glucose it needs from other sources. This happens mostly in your liver and is called gluconeogenesis, or GNG for short.
Essentially, with diabetes the control system that reduces blood glucose (BG) is broken. The homeostasis (self-regulation) of your BG is ineffective because your body's response to insulin (which lowers BG) is diminished (called insulin resistance) and/or your ability to produce insulin in response to carbs is insufficient to lower BG quickly enough. For type 1 diabetes, insulin production is at or near zero.
Consequently, glucose that your body gets from carbs (or makes in the liver through GNG) will raise your BG and it will only fall slowly because your body is unable to produce or respond to insulin properly. So BG is easily raised but slowly and poorly lowered.
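As a purely illustrative aside, the "easily raised but slowly and poorly lowered" behaviour can be mimicked with a deliberately simple toy model in Python. This is my own sketch, not a model from the article and not a physiological simulation: BG returns toward a basal level at a rate scaled by an insulin-sensitivity parameter, and shrinking that parameter leaves BG elevated for hours after the same carb load.

```python
# Toy first-order model: after a meal bumps BG up, it returns toward the basal
# level at a rate proportional to insulin sensitivity. All numbers are
# illustrative assumptions, not clinical values.

def simulate_bg(carb_spike_mmol_l, insulin_sensitivity,
                basal_mmol_l=5.6, hours=6.0, step_h=0.5):
    """Return BG (mmol/L) sampled every step_h hours after a meal."""
    bg = basal_mmol_l + carb_spike_mmol_l
    trace = [round(bg, 1)]
    for _ in range(int(hours / step_h)):
        # first-order return toward basal; lower sensitivity => slower clearance
        bg -= insulin_sensitivity * (bg - basal_mmol_l) * step_h
        trace.append(round(bg, 1))
    return trace

print("healthy :", simulate_bg(4.0, insulin_sensitivity=1.2))   # back near 5.6 quickly
print("impaired:", simulate_bg(4.0, insulin_sensitivity=0.15))  # still elevated at 6 hours
```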
Conventional Diabetes Management
Let's leave aside for the moment the medical theories about why the system is broken and assume we have to do the best with what we have got.
Conventional diabetes management seeks to lower your BG towards normal but not so that it drops too low. This is done by exercise (to consume glucose), diet and medications that replace insulin, reduce glucose production or eliminate glucose from the body.
In conventional diabetes management, juggling these factors on a daily basis is hard and is the focus for someone with diabetes. Every three months you go to see your doctor to see how you are doing overall and to see if your medication should be adjusted.
Unfortunately, it is hard to achieve and maintain this juggling act. It is hard to replace a well-functioning bodily system once it is broken. The typical person with diabetes has BG that, on average, is too high. It may also drop too low with too much medication, leading to coma or death. High BG is associated with all of the ill effects that people with diabetes suffer, including blindness, kidney disease and amputation. For most, doctor's visits eventually mean an inevitable upward adjustment in medication and higher BG. High BG leads to deterioration over time, more medication and more complications. Diabetes is therefore regarded as a chronic disease with an inevitable worsening progression.
With that prognosis, it makes little sense to discuss getting back to normal BG. It makes little sense to see this as a control process that can be brought under near-normal control.
THAT IS (FORTUNATELY) COMPLETELY WRONG!
I mentioned a three-monthly visit to your doctor. Your BG changes throughout the day. To assess your overall BG control, a test of 'haemoglobin A1c' (HbA1c, or just A1c for short) measures how 'sticky and sugary' (glycated) your blood has become. Because red blood cells turn over roughly every three months, A1c gives you about a three-month average of your BG control.
I mentioned that if your systems were working properly, your BG would average about 5.6 mmol/L (or 100 mg/dL). It turns out that this corresponds to an A1c of 5.1% (or 33 mmol/mol). This is an average for the healthy population, or 'population mean'. Statistically, the standard deviation from the mean is about 0.5%, and you are deemed to have prediabetes if you exceed the mean by one standard deviation (>5.6%). Similarly, above about two standard deviations (>6.1%) you are diagnosed as having diabetes. The further above one standard deviation you go, the worse the health risks of diabetes become.
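As a worked example of these numbers, here is a short Python sketch. The mean and standard deviation come from the article; the conversion from A1c to estimated average glucose uses the widely cited ADAG regression (eAG in mg/dL = 28.7 × A1c% − 46.7), which is my addition rather than something stated here.

```python
# The article's yardstick: healthy-population mean A1c of 5.1% with a standard
# deviation of about 0.5%; prediabetes above +1 SD, diabetes above about +2 SD.
# The eAG conversion uses the ADAG regression (my assumption, not from the article).

A1C_MEAN_PCT = 5.1
A1C_SD_PCT = 0.5

def eag_mg_dl(a1c_pct):
    """Estimated average glucose (mg/dL) from A1c (%)."""
    return 28.7 * a1c_pct - 46.7

def eag_mmol_l(a1c_pct):
    """Estimated average glucose (mmol/L) from A1c (%)."""
    return eag_mg_dl(a1c_pct) / 18.016

print(round(eag_mg_dl(A1C_MEAN_PCT)))        # ~100 mg/dL, matching the text
print(round(eag_mmol_l(A1C_MEAN_PCT), 1))    # ~5.5 mmol/L

prediabetes_above = A1C_MEAN_PCT + 1 * A1C_SD_PCT   # 5.6 %
diabetes_above = A1C_MEAN_PCT + 2 * A1C_SD_PCT      # 6.1 %
print(prediabetes_above, diabetes_above)
```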
Control charts are a tool used in engineering and management science to help us understand what is happening with a process. Essentially, a control chart measures how closely a controlled system tracks its expected behaviour (the mean, or target, for a parameter) by looking at how far it deviates from that target. Control charts give you a measure of the quality of a process's output and should help you decide what you may need to do to bring the process back into control.
You can read about using control charts here.
If the aim is for a person with diabetes to approach the health of a 'normal' person, then we must restore BG control as near as possible to that of a healthy person. A control-chart type of methodology is already used in some glucose monitoring programs to measure the quality of daily BG control.
So, when looking for long-term control and improvement, why not plot the mean of HbA1c and its standard deviations for healthy people? We can then use the control chart methodology as a yardstick to see how various treatments compare and, hopefully, to gain better BG control on the way towards a cure.
Diabetes Control Chart using HbA1c
I have reproduced the results of a study on diabetes as a control chart. That study followed 49 people on a vegan diet and another 50 on a conventional diabetes diet. You can read this study here. I have added to that a plot of my history on a low carbohydrate diet, along with the bands of standard deviations (s, 2s, 3s, etc.) shown as bands of colour from green to red.
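For readers who want to reproduce the band structure, here is a small Python helper (mine, not the author's code) that builds the same s, 2s, 3s bands from the healthy-population mean and classifies individual A1c readings against them, using the article's labels plus the standard three-sigma "out of control" rule. The example readings are hypothetical, not data from the study.

```python
# Build sigma bands around the healthy-population mean A1c and classify
# readings against them. Mean and SD come from the article; zone labels follow
# its usage (within 1s non-diabetic, 1-2s prediabetes, >2s diabetes) plus the
# standard SPC 3-sigma "out of control" rule.

A1C_MEAN_PCT = 5.1
A1C_SD_PCT = 0.5

def band_edges(n_bands=5):
    """Upper edges of the coloured bands: mean + 1s, 2s, ..., n_bands*s."""
    return [round(A1C_MEAN_PCT + k * A1C_SD_PCT, 2) for k in range(1, n_bands + 1)]

def classify(a1c_pct):
    """Describe a reading by how many SDs it sits above the population mean."""
    z = (a1c_pct - A1C_MEAN_PCT) / A1C_SD_PCT
    if z <= 1:
        return f"{z:+.1f}s: within 1s (non-diabetic range)"
    if z <= 2:
        return f"{z:+.1f}s: prediabetes range"
    if z <= 3:
        return f"{z:+.1f}s: diabetic range"
    return f"{z:+.1f}s: out of control on a 3-sigma rule"

print(band_edges())                    # [5.6, 6.1, 6.6, 7.1, 7.6]
for reading in (5.4, 6.0, 6.5, 8.2):   # hypothetical readings, not study data
    print(reading, "->", classify(reading))
```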
Some points about this control chart in general:
- Excellent control would see points close to 5.1% and ideally in the light green zone within one standard deviation (±s).
- In control chart theory, any data point more than three standard deviations (±3s) is deemed ‘out of control’. Something is really wrong with the system and control process itself for this to occur.
- Not one of the measurements is below the population mean of 5.1%
Conventional Diabetes Diet
This diet was a low fat, calorie deficit diet designed for weight loss. This gave the worst outcome. At the end of the 74 week period, the average A1c results were nearly above where they started. No average A1c was better than 5s. By the end of the trial, only about half of the participants were adhering to the diet. This was despite cooking lessons, weekly meetings with a dietitian and other intensive assistance. This diet was high in carbs as they are 60-70% of total energy.
The vegan diet lacked meat, eggs and dairy but was not calorie restricted. This gave a slightly better outcome. No average A1c reading was better than 4s. By the end of the trial, only 44% were still adherent and the outcome was beyond 5s. This was despite similar intensive assistance to that given on the conventional diet. Probably, as a result, some of the gains in A1c made earlier in the trial were lost and the vegans also deteriorated again. Had the trial and the upward A1c trend continued, it appears that the vegans might also have ended up worse than they started. This diet was very high in carbs being 75% of total energy.
My diet lacked carbs. No sugar, rice, pasta, bread, sugary fruit and starchy vegetables. I also drank alcohol sparingly. Most people with diabetes are advised to eat between 200g and 300g of carbs per day spread out over the day. I aimed at first for less than 50g per day (<10% carbohydrate) and after about three months I was reliably lower than 25g (<5% carbohydrate) per day. This normally would be a ‘keto diet’ however it is hard for people with diabetes to stay in significant ketosis without extended fasting so I prefer to call it LCHF. I also did practise intermittent fasting simply because I was not as hungry as I was with a higher carbohydrate diet. Many people report this. Typically this involved not eating breakfast so that it was 16 hours after the previous night’s dinner before I ate the next meal.
There was no assistance from a dietitian or cooking lessons for me. I did read the free information on the dietdoctor.com website to get the bulk of my nutrition from real food sources (meat, eggs, fish, fruit, vegetables, nuts & dairy) that were low carb. Unlike the diets in the trial, adherence was easy for me, although I had to unlearn a lot of ‘advice’ that dietitians had previously told me on my way to developing diabetes. Unlike the study diets, I ceased three diabetes medications after three months but then began taking one-quarter of the dose of metformin again at that time.
I did no appreciable exercise like running, swimming, cycling but took an occasional walk. In the first six months I easily lost about 12KG of weight, moving from obese to overweight. My weight has been quite stable since then.
Unlike the other diets of the study and my previous diabetes history, all my readings (except baseline) were within 2s and went below s before the year on LCHF was finished. Clinically, below 2s is pre-diabetes and below s is non-diabetic so I have been very happy with that result. The downward trend was recently confirmed as still occurring with a recent estimate of A1c from my glucose meter readings.
Straight away we can say that the study diets are ‘out of control’. With no points less than 3s there is little prospect of either ‘process’ (diet) bringing control to equal the population mean. Further with all points 4s or higher, the mean (goal A1c of 5.1%) will never be reached. Quite simply, something is causing the A1c to be unacceptably high that the process being used cannot overcome. From an engineering standpoint, these are defective processes that cannot achieve the target. The trends were initially towards but end up moving away from the target long term. Management theory would tell you that the individual in the process (person with diabetes) will be powerless to achieve control. It is ridiculous to blame the person with diabetes for this result yet many of us blame ourselves. The theory says that to continue to expect reasonable control to the target wanted is foolish. You must use a different process or make some other significant change to the system.
That is not the case with the LCHF diet. All points are within 2s, some s, and we have a trend that may eventually result in the target being achieved although none of the measurements so far have been below the target.
If I were presented this as a control system problem I would immediately conclude that there was an unaddressed control offset, especially in the study diets. The engineering solution would be to apply ‘Integral Control‘ to attack that offset so that the control range is eventually brought closer to the target. This means relatively slowly increasing or reducing the level of the controlling factor until control can be achieved.
Further, both diets represent a perturbation in the system that slowly corrects back to its original level. Like throwing a stone in a pond. The ripples eventually subside and things head back to what they were- in this case, a level that is too high.
We know that carbs, be they from the liver (GNG) or diet, raise BG and A1c in people with diabetes who do not have enough (or do not respond properly to) insulin. The amount of carbohydrate is the controlling parameter for BG and A1c. It is straightforward that a solution is to reduce carbs permanently- but by how much? For me, the LCHF result shows that even if we drop dietary intake to a minimum, the target would still not be reached quickly due to their production by the liver (from GNG).
So a very apt control analogy is a sink with a small inflow of water from the bottom (GNG), a drain draining away by a controllable flow (insulin action and exercise), and a tap with the ability to put in a variable inflow which by eating carbs could be continuous if spread into small meals, large and rapid if a lot of carbs (say sugar) is consumed or minimised if restricted.
Now if we want to keep the sink at a certain level (say half way) we can exercise to drop BG and eat fewer carbs to lower the level. If we leave the tap running at a rate that exceeds the draining rate or suddenly empty a large bucket of water into it, the sink fills and we will now be permanently above the level we want. This is what we see with conventional and vegan diabetes management in the study. In this situation, it is common sense to turn off the tap- carbohydrate restriction. 200 to 300g of carbohydrates per day is the problem in this control system.
Exercise Helps but Diet Rules
Exercise is a help but consider that the average person must run about 7 km to ‘burn the carbs off’ from a 500ml serve of coca cola. Even if you do run the 7km, in the time between drinking the drink and completing your run, those carbs are giving you high unhealthy BG. Better just not to eat or drink the carbs in the first place. You cannot outrun a bad diet.
All of the diets have too many carbs for the available and effective insulin to bring down BG to normal metabolic levels and that explains why the target was never reached by any of them.
Reaching the Target
Unlike the study diet, we should expect the LCHF diet might reach the target in the next nine months or so if the present trend continues. The simplest course of action for the LCHF diet would be to keep going and see if the system settles to the desired target. If it does not or if a quicker result is wanted, other interventions could be tried to reduce carbohydrate including longer fasting, increasing exercise, upping metformin dosage or looking for another metabolic option. So now, as a vegan doctor (Dr Joel Kahn) commented to me upon looking at my results on Twitter, maybe slow and steady wins the race? That might well be the first of his advice I have ever taken.
Am I a Special Case?
At this point, you may be wondering if carbohydrate restriction might help your diabetes or am I a one off? Let us explore that. My results prior to carbohydrate restriction were consistent with the conventional diet people from the study. The best HbA1c I saw was 7.3% and as you can see below, carbohydrate restriction was the difference beginning around month 31.
The value of a case study is that it shows what CAN happen. There are no guarantees, but given similar circumstances to me, yes this can happen for you. Many other people report that it happens for them. In fact, we would expect it to happen from the biochemistry and control theory I have explained. This is even though everyone with diabetes is a little different. It means your mileage may vary.
Biochem is complex. Perhaps the major appeal of LCHF to an engineering mind is that, based upon engineering theory, it makes perfect sense. Dietitians are constrained by a myriad of epidemiological studies which show increased risk of this or that from doing that or the other thing. If you accept that A1c is a measurable proxy for the underlying health issues of diabetes, clarity to focus on the job of controlling A1c occurs and carbohydrate restriction is obvious. Once that is done, focussing on optimising diet within that constraint is the task. This fits nicely with the theory of constraints as a way to tackle complex systems.
LCHF, Vegan or Conventional Diets?
The vegan diet did perform better than the conventional diet in the study but both were a control chart fail. It is however theoretically possible that one of the 49 vegans achieved similar results to me. My result towards the end shows that my A1c was about fifteen standard deviations below the vegan mean. In other words if we assume a normal distribution and there were 100,000,000,000,000,000,000,000,000,000,000,000,000,000 vegans in the study, we could expect about one to have results as good as mine. Unfortunately, there were only 49 vegans in this study. This is a time when an n=1 (me) is statistically significant.
To be clear I am not saying that a vegan diet could not achieve the same result, but it would have to be low in carbohydrate and total energy so a vegan (or any) starvation or fasting diet would probably also work.
If common sense, the engineering theory, my simple Biochem explanation or my results do not explain why a carbohydrate restricted approach is best then read this paper. An excellent (and more complicated) comparison between the Keto (LCHF) and vegan approaches to managing diabetes is available from Marty Kendall’s website. You will also find a lot of other excellent information on nutrition there should you be concerned that restricting carbs may put you at risk of nutritional deficiency.
The Vegan propaganda machine is fond of saying that restricting carbs (the keto diet) masks the problem by addressing the symptoms whereas only the vegan diet ‘cures the disease’. Based upon the study we looked at, it appears to be an untrue claim. I don’t care whether you eat live chickens or just grass to avoid animal harm, the first thing that someone with diabetes should do is minimise their carbohydrate intake. If you must eat some, then not too many and make sure they are ‘complex’ and unrefined.
Dietitian Says ‘No’
So what if you see a dietitian and they try and dissuade you from a carbohydrate restricted approach. They may have the following objections to which I give you some answers:
- You need carbs and your diet will lack fibre and vital nutrients from foods you will exclude like whole grains.
Answer: Some fats and proteins are essential but carbs are not. Even if you could have zero carbs in your diet, your body makes them (via GNG). If fibre is of concern then eat more low carb vegetables. Vital nutrients? See Marty Kendall’s website. If a dietitian can’t give you a healthy carb restricted eating plan, time to walk!
- It helps some people but people can’t stick to it in the long-term. We also don’t know how safe it is in the long term.
Answer: Well what if a person it can help is me? Shouldn’t I try it? Looking at the conventional and vegan diets in the study, adherence was also less than 50%. Adherence is a matter for any way of eating and it is up to you. You don’t have to be a statistic. Finally, what does the long-term look like if your A1c stays at ~6, 7 or 8% and above? The risks of a high A1c are very well known. If LCHF is a devil, it is the devil you want to know.
- Keto? Low Carb? Control charts? [Insert other doubt raised here]? Do they have any evidence of success from a study in a peer reviewed journal? My clients have excellent success on [insert a diet/ program here] instead.
Answer: Please give me evidence of a study showing [insert their diet/ program] can achieve an A1c approaching 5.1%. Please give me evidence of the success rate of your clients achieving a sub 5.6% A1c.
- On LCHF/ keto you are limited. Studies show that eating [insert food of concern] or not eating [insert dietitian ‘superfood’] will make you die sooner.
Answer: Have you ever seen someone on dialysis or with a diabetic foot? It is your job to give me a diet for normal blood glucose, then we can optimise it for other concerns. Do your friggin’ job and shelve your dogma.
The system is failing all of us. More of us are getting obese and diabetic following the standard way of doing things. I developed diabetes on a near exemplary low-fat diet. I can only encourage you to be a robust health consumer. You should not assume that in the face of the diabetes epidemic that has grown under national eating guidelines and dietetic advice, that the experts have it right. Diabetes takes no prisoners and you shouldn’t compromise your outcome just to be nice to a health professional.
Time for Dr Google?
Dietitian’s organisations lampoon ‘Dr Google’ just like clothing retailers said people would never buy clothing online. Honestly though, if you are seeing a dietitian who is not on board with carb restriction for diabetes, you are wasting your precious time and health.
If you can’t get proper help from a local professional then there are sites like dietdoctor.com, forums like the ketogenic forums and facebook groups like type 2 diabetes straight talk or type one grit. If you are in the US, Virta’s service could be a good choice. Any of these would be preferable to a low carb inexperienced dietitian!
If you DIY then be conscious that some medications that you may be on (notably sulphonylureas and insulin) can be very dangerous to take if you suddenly reduce your dietary carbohydrate. If trying this, you should consult your doctor to clear or adjust your medications appropriately. | <urn:uuid:ab6bc3a6-601f-40c8-b3b8-5546121c35d2> | CC-MAIN-2022-33 | http://macrofour.com/engineering-a-cure-for-type-2-diabetes | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571246.56/warc/CC-MAIN-20220811073058-20220811103058-00294.warc.gz | en | 0.958601 | 4,894 | 2.6875 | 3 |
Dinanath Saura stared at me across the small room of his modest bamboo hut. “My father’s first reaction was one of incredulity. ‘How can a tea labourer become a tea planter? These are the hobbies of kings and emperors,’” he said, gently mocking his late father. “We have no business getting into them.” Saura’s face, creased by years of incessant labour under the intense Assamese sun, broke into a hesitant, questioning smile. “He would be proud today, no?”
“Indeed he would,” I replied.
More likely though, Dinanath’s father – a third generation tea labourer – would be astounded. The Sauras, like thousands of other families, were plucked by the British out of their native village in Odisha sometime in the early 19th century and brought to cultivate the fertile soil of upper Assam. Dinanath had achieved what his forebears could not have imagined even in their wildest dreams. Tea labourers do not get to be planters.
“Actually, we call ourselves ‘growers’, not planters. Small Tea Growers (STGs).” This distinction, to Saura, was more than mere semantics; it was what made him different, what marked out the new from the old. His pride is hardly misplaced. When the British began colonising the lands of Assam in the country’s Northeast, they had no intention of starting a subaltern revolution. In fact, they prescribed laws to ensure quite the opposite. One of the first was that no plantation could be less than two hundred bighas. The poor, the labourers and most of the local Assamese population were immediately out of the game. Since then, the British and their allies – the landowning class – maintained a stranglehold over tea cultivation in Assam. The beautiful tea bungalows, the memsahibs in their pretty dresses, lazy afternoon parties, tennis and Bloody Marys, were all part of a cultural production designed to keep it exclusive. For two centuries, all went according to plan.
A combination of social factors has led to nearly 29 percent of all tea produced in Assam today being grown by STGs in holdings that are, at times, less than an acre in size. Backyards have been turned into tea gardens, and families – husband and wife, son and daughter – armed with shears and dubious fertilisers, are growing tea. No inch of arable land is left uncultivated. Everywhere the eye turns, the short crop reigns supreme. Bamboo cultivation is vanishing, rice and paddy fields have been converted; tea is all consuming.
Small cars and big four-wheel bruisers race across the highways and village roads. Mud huts are being replaced with brick-and-mortar houses. The signs of prosperity in upper Assam are unmistakable. Where once poverty and government apathy bred a generation of insurgents, the tea industry has fostered a consumer revolution.
Friends and enemies
From Dinanath Saura’s dwellings in the village of Melamora in Golaghat we emerge into the muggy August heat. The afternoon is uncomfortably sticky. It rained the night before and my car gets stuck in mud. Three boys are summoned to heave and ho before I finally manage to crawl off. I am headed a short distance away, to the house of Gangadhar Saikia, the man everyone credits with having started the small tea movement in Assam and the neighbouring state of Nagaland.
Almost immediately, I manage to get lost in the area’s awkward bylanes. Asking for directions – a hazardous pursuit anywhere in India – seems to be easy enough here. Everyone, from the very young playing roadside cricket with makeshift bats, to old men sitting under the shade of trees, seems to know Saikia. “That way”, they all say as they point into the distance.
When the British began colonising the lands of Assam in the country’s Northeast, they had no intention of starting a subaltern revolution. In fact, they prescribed laws to ensure quite the opposite.
Saikia’s fame is not surprising: he was the man who took to heart the then Agricultural Minister Soneswar Bora’s 1978 nullification of a longstanding British diktat that allowed only big landowners to cultivate tea. Anyone with ten bighas of land could now plant without fear of prosecution.The announcement, though monumental, would hardly have mattered by itself. Dinanath Saura, who worked all his life with Saikia, told me, “You cannot change entrenched beliefs easily. Villagers do not know the law, nor are they interested. All they knew was that tea plantation was for the rich.”
Gangadhar Saikia led by example. As the headmaster of Melamora’s local school, he took it upon himself to change perceptions within his community by planting a tea garden on no more than an acre of his own land. As I entered the leafy, shaded lane leading towards his house, the signs that his rather meagre acre was nonetheless profitable, were evident. Four cars crowded the garage as I ambled into a traditional Assamese front yard, now converted into a waiting area with several dozen chairs. Inside the house, vitrified flooring and modern steel furniture contrasted with the mud and bamboo affairs that are prevalent here. A gaggle of voices and the occasional patter of children’s feet indicated a large joint family.
Nearly 80 years old, Saikia suffers from Parkinsons disease and his hands shake uncontrollably as he talks. It is difficult to understand him at first, and I have to lean closer. I realise to my embarrassment that the man is speaking in perfect English, though garbled by his physical condition. After about an hour, when his son insists that his father must rest, I have four pages of notes and many more questions.
Mrinal, the second son of Gangadhar Saikia agrees not only to answer them, but also to show me the family garden which lies on the outskirts of the village. I am intrigued by a particular statement of his father: “Land is both our friend and enemy.”
The story of a tea labourer turned planter/grower was an anomaly. The STG ‘revolution’ belongs to the Assamese middle class.
“It is obvious,” Mrinal says. “This is a land which is so fertile that it grows anything. There is no shortage of food; no one dies hungry here. That is why we have so many foreigners residing here illegally. Assam should be for the Assamese.”
I was jolted out of my happy subaltern reverie. This was a conversation I was familiar with, this parochialism, this ‘son of the soil’ argument. Growing up here in the late 1970s, I had been the regular recipient of abuse and blows for being a Bengali, the perennial ‘outsider’ in Assam. “Leave”, I was told every day. “You do not belong here.”
Mrinal parked the car outside a few mud huts opposite a tea garden. “These belong to my labourers,” he said, as an elderly man came from one to greet us. “How much today?” Mrinal asked as they both stepped onto the garden.
In a makeshift bamboo hut at the entrance, a pile of tea leaves had been gathered. The vehicle that was supposed to have arrived earlier to collect the daily pickings was delayed. Mrinal and I sat in two chairs while the old man gave him a run-down of the day’s troubles. It was a language I almost understood, but couldn’t quite grasp – a mixture of Bengali, Hindi and Assamese that the tea labourers had developed over two centuries in upper Assam.
The heat was stifling, and I desperately wanted to join a throng of kids who had jumped into a pond across the field and were now thrashing about, yelling to each other. I tried to keep my focus on Mrinal and saw that he was infuriated. Later, after the man was dismissed, he told me that the tea industry was facing a crisis of labour.
“The problem is you never know how many will turn up each day. Today I asked for twenty extra hands, only six came. Tea leaves have to be plucked at an interval of seven to ten days. Nearly 25 percent of my garden area does not get tended because these damned labourers never show up. They just get drunk at night and cannot get up in the morning. That is the main problem, you see – alcoholism. Yesterday was payday and today they are drunk. This happens each week, a day after they are paid.”
Over the course of that afternoon and for several weeks subsequently, as I started to understand and investigate the many myths, truths and lies that addle the story of tea in Assam, I realised I had celebrated Dinanath Saura’s ‘subaltern revolution’ prematurely. The story of a tea labourer turned planter/grower was an anomaly. The STG ‘revolution’ belongs to the Assamese middle class. It is driven by landowners and is a continuation of the identity politics that condemned this state to half a century of insurgency and violence and, albeit with less frequency, continues to persuade young men, and a few women, to mobilise.
Identity politics in Assam revolves around the core premise that Assam is for the Assamese; its resources, land, oil, coal, culture and language belong to the native Assamese. Its intellectual and moral drive derives from two separate positions: the first is the perceived Bengali dominance over the locals, initiated by the British; the second is the influx of poor Bangladeshi migrants since the creation of Bangladesh in 1971. The perceived injustice of being culturally dominated and economically squeezed gave rise to the anti-foreigners movement; the new-found affluence of tea was a declaration of independence for the locals – different from the brutality of the separatist movement, but a declaration nonetheless.
The Superintendent of Police, Rafiqul Lashkar, says that Golaghat and most parts of upper Assam are now peaceful. “Look, when people are well-fed and contented, they don’t want a revolution. They want peace and quiet. The middle-class Assamese, those that provided the intellectual argument for a revolution, they now send their sons to private schools in Delhi and even abroad. The mainstream becomes all-encompassing. Yes, the poor still go off to the forests in Bhutan and Bangladesh but without the middle-class drive, there is no energy in the movement anymore.”
Ratul Gogoi is one such middle-class, erstwhile revolutionary. Our first meeting is cancelled due to an unfortunate – albeit darkly comical – incident. Ratul, who spent six years as a bomb maker for the most prominent separatist outfit in the region – the United Liberation Front of Assam (ULFA) – had managed to chop off the forefinger on his right hand while fixing the chain of his motorcycle. He alludes to it when we finally meet. “I was taken during Operation All Clear which the Indian army launched in 2003 in Bhutan. I was on the run for four days before I was finally caught. I must have made thousands of bombs in those years. No incident ever. And now this,” he smiled ruefully, showing me the bandaged finger.
Ratul lives in a village about 12 kilometres from Golaghat town. Though his house is modest, as he takes me around the neighbourhood he shows me his tea plantation and the new brick residence he is building. Over four days of conversation, I prompt him intermittently to reveal whether his previous fervour has died; if this new money in tea has tapered his discontent with ‘India’. Ratul denies it, saying that it is not the minor comforts – a TV, motorcycle, a new house – that distance him from his old ideals, but the leadership’s betrayal. “They sold us out, our commanders. You are so indoctrinated when you are in the field, you do not think. But once you are out, you see what they are doing, what they have done and how they live. You realise you have been had, your youth is over and you are just another man with a long police record.”
Ratul Gogoi was a self-described foot soldier. He obeyed commands and made bombs. Over the years, a mind numbed by the routine chores of sifting nails and glass shards, mixing gunpowder and setting up detonating devices, has begun asking questions of the men who sent him to war. One of them, whom I seek out, is Romesh Saikia.
You are so indoctrinated when you are in the field, you do not think. But once you are out, you see what they are doing, what they have done and how they live. You realise you have been had, your youth is over and you are just another man with a long police record.
I had heard the man mentioned in whispers and hushed undertones; always with fear and trepidation mixed with awe and revulsion. He had been the commander of the ULFA in Golaghat and Jorhat districts in the early 1990s, and was also one of the first to surrender. For this, his former comrades ambushed him. He survived the attack and his legend grew. In the tale that is now told of him, he is both Robin Hood and Bill Gates; the priest of high capitalism and also the archetypal rebel. After his surrender and recovery from the ULFA attack in which he took bullets to the leg, the former leader embarked on a mission of enterprise and philanthropy. To this end he bankrolled a new school and college in his village, and freed up much of the land the villagers owed to debtors, while also investing heavily in tea cultivation. Today he owns a thousand acres of land dedicated to tea growing in one of the most remote and beautiful parts of upper Assam. The irony is that he is also the eldest son of Gangadhar Saikia – the man who started the STG movement.
Romesh Saikia now keeps himself behind high gates and security fences. I go to meet him one afternoon, a day after I visited his latest venture, Bor Gos, a monstrous construction passed off as a resort in the now denuded forests of Kaziranga. My feelings towards him are, understandably, less than charitable. The bespectacled and balding man I meet fitted none of the images I had conjured of him. Clad in Bermuda shorts and a worn T-shirt, he looked neither the tough commander of battle-hardened insurgents, nor the boorish, small-town nouveau riche I was meeting all too frequently.
Instead, I found a cautious man of deliberation and reflection. Soft-spoken, Romesh had little time for arguments. I sat with him for over three hours, holding guarded conversations about his days in the ULFA, his rise in business, his uneasy relationship with his brothers, and, unsurprisingly, a lot about Assam’s tea industry. “The tea story is over. We will realise this in about 10 or 15 years. But the growth is done with. Now there is more supply than demand, prices are falling and I am getting out of it and diversifying.” “Tourism,” he says (as I cringe), “is just one of the things I am venturing into.”
I was taken aback at tea cultivation being referred to as a boom and bust industry. Given my Assamese childhood, I had associated it with an old-world, pre-globalisation stability. That world, along with my Enid Blyton’s and Richmal Crompton’s, had quietly vanished.
Two months back, when I had arrived with little planning and a pregnant wife in tow, I had no clue of where to stay and whom to meet. A newspaper report about the newly opened Gymkhana Club had guided me to Deboshyam Barua, the owner. And it was in this old, high-ceilinged wooden mansion with swimming pool, gym and landscaped gardens, that the first clues to this changing world were to be found.
Barua belongs to a different class of tea garden owners, the ones known as planters. His father and grandfather were tea planters. He inherited the gardens and knows of no other life. But over a frosty beer on a Sunday afternoon at his residence, Barua says that he was forced into opening the club. “That was my home. I grew up there, played in those rooms now opened up to strangers. But we knew that lifestyle was unsustainable. Those old ways which my father was so fond of, I could not carry on; the economics of maintenance doesn’t allow such luxuries.”
According to Barua, it was the coming of the International Monetary Fund and structural adjustment in the early 1990s that changed the tea economy. “Earlier, tea was not this massive money-making industry. You made 25 paisa a kilogram, maybe 30. You had modest lifestyles. Nothing really happened. And suddenly, trade opened up, the economy went through the roof and everyone bet on tea. The downturn is coming and we have to adjust accordingly if I am to keep the gardens.”
Both the old-time planters and the new growers are anxious. The last 20 years of high growth has changed the age-old economic mantra of survival and caution. Houses have been built, cars purchased, loans taken. Now, as the debtors come calling, not only is there more tea than can be consumed, there are also associated problems of unregulated plantation. The costs of environmental degradation will be paid by future generations.
The indiscriminate use of pesticide is the most visible and dangerous by-product of sudden growth in the industry. As Rajib Das, the general secretary of the STG Association tells me, its use threatens to devalue Assam Tea as a brand. “You find it difficult to convince the small tea grower that the pesticide which cures your plant of bugs is not a magic medicine. Every time he sprays it, problems disappear. The government, though it likes to tax us heavily, refuses to spend money on educating these growers. And the problem grows bigger by the day.”
Increased pesticide use as a result of the industry’s growth has implications that go beyond tea, however. It also affects livestock. Many lament that much of the region’s meat quality is now dubious while fresh milk, once thought of as nutritious, is making children unwell. Tea has wiped out rice cultivation, creating severe grain shortages. The lack of bamboo has resulted in house prices soaring, meaning that villagers are having to turn to brick- and-mortar constructions that are not only less eco-friendly, but also demand unsustainable lifestyle changes. Mud and bamboo houses kept interiors cool for the summers; now households are investing in fans and finding that the erratic power supply is of little use. Electrical appliances are far more effective when run on generators than on the trickling mercy of the Assam Electricity Board.
Back to the future
Matters are at a crossroad. In the complex matrix of a global economy, the basic laws of supply and demand retain their stubborn constancy. And human beings have short memories. Just two decades before tea became the elixir of good fortune, sugarcane plantation had promised the same in upper Assam. It was Gangadhar Saikia, now the champion of the STG movement, who had exhorted fellow villagers in Melamora to take over fallow tea garden land and plant sugarcane. Despite the backbreaking labour, there was reward. Sugarcane factories were built, orders were placed. Though more land was captured, the inevitable soon happened: one season, there was too much sugarcane and processing factories refused the excess. Plants started to decay and then a disease called red rot vanquished the local industry. In 1982, nearly 25 years after it had started, sugarcane was finally abandoned as a cash crop in Melamora and surrounding villages of upper Assam. Tea cultivation, thankfully, was waiting around the corner and turned out to be a more profitable enterprise.
While the future of Assam’s economy is uncertain, I am, however, convinced of one thing: my initial euphoria for Assam’s subaltern STG ‘revolution’ was misplaced. As industries come and go and identity politics retains its currency, the fate of the labourers whose lives depend on the mercy of their masters is unchanging. Uneducated, underpaid, and with nowhere else to go, they have retained a stability as permanent as hell and as unforgiving. When the next boom industry comes, its legitimacy as a revolution will be measured by their sweat.
~Somnath Batabyal is a lecturer in media and development at the School of Oriental and African Studies. His first novel, The Price You Pay, was published in 2013. | <urn:uuid:4092edf4-5d9c-4b86-bfef-f6e7a5174d8e> | CC-MAIN-2022-33 | https://www.himalmag.com/assams-subaltern-ruse/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00695.warc.gz | en | 0.979295 | 4,513 | 2.859375 | 3 |
Up-and-Down Designs (UDDs) are a family of statistical experiment designs used in dose-finding experiments in science, engineering, and medical research. Dose-finding experiments have binary responses: each individual outcome can be described as one of two possible values, such as success vs. failure or toxic vs. non-toxic. Mathematically the binary responses are coded as 1 and 0. The goal of dose-finding experiments is to estimate the strength of treatment (i.e., the 'dose') that would trigger the "1" response a pre-specified proportion of the time. This dose can be envisioned as a percentile of the distribution of response thresholds. An example where dose-finding is used: an experiment to estimate the LD50 of some toxic chemical with respect to mice.
Dose-finding designs are sequential and response-adaptive: the dose at a given point in the experiment depends upon previous outcomes, rather than be fixed a priori. Dose-finding designs are generally more efficient for this task than fixed designs, but their properties are harder to analyze, and some require specialized design software. UDDs use a discrete set of doses rather than vary the dose continuously. They are relatively simple to implement, and are also among the best understood dose-finding designs. Despite this simplicity, UDDs generate random walks with intricate properties. The original UDD aimed to find the median threshold by increasing the dose one level after a "0" response, and decreasing it one level after a "1" response. Hence the name "Up-and-Down". Other UDDs break this symmetry in order to estimate percentiles other than the median, or are able to treat groups of subjects rather than one at a time.
UDDs were developed in the 1940s by several research groups independently. The 1950s and 1960s saw rapid diversification with UDDs targeting percentiles other then the median, and expanding into numerous applied fields. The 1970s to early 1990s saw little UDD methods research, even as the design continued to be used extensively. A revival of UDD research since the 1990s has provided deeper understanding of UDDs and their properties, and new and better estimation methods.
UDDs are still used extensively in the two applications for which they were originally developed: psychophysics where they are used to estimate sensory thresholds and are often known as fixed forced-choice staircase procedures, and explosive sensitivity testing, where the median-targeting UDD is often known as the Bruceton test. UDDs are also very popular in toxicity and anesthesiology research. They are also considered a viable choice for Phase I clinical trials.
Let be the sample size of a UDD experiment, and assume for now that subjects are treated one at a time. Then the doses these subjects receive, denoted as random variables , are chosen from a discrete, finite set of increasing dose levels Furthermore, if , then according to simple constant rules based on recent responses. In words, the next subject must be treated one level up, one level down, or at the same level as the current subject; hence the name "Up-and-Down". The responses themselves are denoted hereafter we call the "1" responses positive and "0" negative. The repeated application of the same rules (known as dose-transition rules) over a finite set of dose levels, turns into a random walk over . Different dose-transition rules produce different UDD "flavors", such as the three shown in the figure above.
Despite the experiment using only a discrete set of dose levels, the dose-magnitude variable itself, , is assumed to be continuous, and the probability of positive response is assumed to increase continuously with increasing . The goal of dose-finding experiments is to estimate the dose (on a continuous scale) that would trigger positive responses at a pre-specified target rate ; often known as the "target dose". This problem can be also expressed as estimation of the quantile of a cumulative distribution function describing the dose-toxicity curve . The density function associated with is interpretable as the distribution of response thresholds of the population under study.
The Transition Probability Matrix
Given that a subject receives dose , denote the probability that the next subject receives dose , or , as or , respectively. These transition probabilities obey the constraints and the boundary conditions .
Each specific set of UDD rules enables the symbolic calculation of these probabilities, usually as a function of . Assume for now that transition probabilities are fixed in time, depending only upon the current allocation and its outcome, i.e., upon and through them upon (and possibly on a set of fixed parameters). The probabilities are then best represented via a tri-diagonal transition probability matrix (TPM) :
The Balance Point
Usually, UDD dose-transition rules bring the dose down (or at least bar it from escalating) after positive responses, and vice versa. Therefore, UDD random walks have a central tendency: dose assignments tend to meander back and forth around some dose that can be calculated from the transition rules, when those are expressed as a function of . This dose has often been confused with the experiment's formal target , and the two are often identical - but they do not have to be. The target is the dose that the experiment is tasked with estimating, while , known as the "balance point", is approximately where the UDD's random walk revolves around.
The Stationary Distribution of Dose Allocations
Since UDD random walks are regular Markov chains, they generate a stationary distribution of dose allocations, , once the effect of the manually-chosen starting dose wears off. This means, long-term visit frequencies to the various doses will approximate a steady state described by . According to Markov chain theory the starting-dose effect wears off rather quickly, at a geometric rate. Numerical studies suggest that it would typically take between and subjects for the effect to wear off nearly completely. is also the asymptotic distribution of cumulative dose allocations.
UDD's central tendency ensures that long-term, the most frequently visited dose (i.e., the mode of ) will be one of the two doses closest to the balance point . If is outside the range of allowed doses, then the mode will be on the boundary dose closest to it. Under the original median-finding UDD, the mode will be at the closest dose to in any case. Away from the mode, asymptotic visit frequencies decrease sharply, at a faster-than-geometric rate. Even though a UDD experiment is still a random walk, long excursions away from the region of interest are very unlikely.
Common Up-and-Down Designs
The Original ("Simple" or "Classical") UDD
The original "simple" or "classical" UDD moves the dose up one level upon a negative response, and vice versa. Therefore, the transition probabilities are
We use the original UDD as an example for calculating the balance point . The design's 'up', 'down' functions are We equate them to find :
As stated earlier, the "classical" UDD is designed to find the median threshold. This is a case where
The "classical" UDD can be seen as a special case of each of the more versatile designs described below.
Durham and Flournoy's Biased Coin Design
This UDD shifts the balance point, by adding the option of treating the next subject at the same dose rather than move only up or down. Whether to stay is determined by a random toss of a metaphoric "coin" with probability This biased-coin design (BCD) has two "flavors", one for and one for whose rules are shown below:
The `heads' probability can take any value in. The balance point is
The BCD balance point can made identical to a target rate by setting the `heads' probability to . For example, for set . Setting makes this design identical to the classical UDD, and inverting the rules by imposing the coin toss upon positive rather than negative outcomes, produces above-median balance points. Versions with two coins, one for each outcome, have also been published, but they do not seem to offer an advantage over the simpler single-coin BCD.
Group (Cohort) UDDs
Some dose-finding experiments, such as Phase I trials, require a waiting period of weeks before determining each individual outcome. It may preferable then, to be able treat several subjects at once or in rapid succession. With group UDDs, the transition rules apply rules to cohorts of fixed size rather than to individuals. becomes the dose given to cohort , and is the number of positive responses in the -th cohort, rather than a binary outcome. Given that the -th cohort is treated at on the interior of the -th cohort is assigned to
follow a Binomial distribution conditional on , with parameters and. The `up' and `down' probabilities are the Binomial distribution's tails, and the `stay' probability its center (it is zero if ). A specific choice of parameters can be abbreviated as GUD
Nominally, group UDDs generate -order random walks, since the most recent observations are needed to determine the next allocation. However, with cohorts viewed as single mathematical entities, these designs generate a first-order random walk having a tri-diagonal TPM as above. Some group UDD subfamilies are of interest:
- Symmetric designs with (e.g., GUD) obviously target the median.
- The family GUD encountered in toxicity studies, allows escalation only with zero positive responses, and de-escalate upon any positive response. The escalation probability at is and since this design does not allow for remaining at the same dose, at the balance point it will be exactly . Therefore,
With would be associated with and , respectively. The mirror-image family GUD has its balance points at one minus these probabilities.
For general group UDDs, the balance point can be calculated only numerically, by finding the dose with toxicity rate such that
Any numerical root-finding algorithm, e.g., Newton-Raphson, can be used to solve for .
The -in-a-Row (or "Transformed" or "Geometric") UDD
This is the most commonly used non-median UDD. It was introduced by Wetherill in 1963, and proliferated by him and colleagues shortly thereafter to psychophysics, where it remains one of the standard methods to find sensory thresholds. Wetherill called it "Transformed" UDD; Gezmu who was the first to analyze its random-walk properties, called it "Geometric" UDD in the 1990s; and in the 2000s the more straightforward name "-in-a-row" UDD was adopted. The design's rules are deceptively simple:
In words, every dose escalation requires non-toxicities observed on consecutive data points, all at the current dose, while de-escalation only requires a single toxicity. It closely resembles GUD described above, and indeed shares the same balance point. The difference is that -in-a-row can bail out of a dose level upon the first toxicity, whereas its group UDD sibling might treat the entire cohort at once, and therefore might see more than one toxicity before descending.
The method used in sensory studies is actually the mirror-image of the one defined above, with successive responses required for a de-escalation and only one non-response for escalation, yielding for .
-in-a-row generates a -th order random walk because knowledge of the last responses might be needed. It can be represented as a first-order chain with states, or as a Markov chain with levels, each having internal states labeled to The internal state serves as a counter of the number of immediately recent consecutive non-toxicities observed at the current dose. This description is closer to the physical dose-allocation process, because subjects at different internal states of the level , are all assigned the same dose . Either way, the TPM is (or more precisely, , because the internal counter is meaningless at the highest dose) - and it is not tridiagonal.
Here is the expanded -in-a-row TPM with and , using the abbreviation Each level's internal states are adjacent to each other.
-in-a-row is often considered for clinical trials targeting a low-toxicity dose. In this case, the balance point and the target are not identical; rather, is chosen to aim close to the target rate, e.g., for studies targeting the 30th percentile, and for studies targeting the 20th percentile.
Estimating the Target Dose
Unlike other design approaches, UDDs do not have a specific estimation method "bundled in" with the design as a default choice. Historically, the more common choice has been some weighted average of the doses administered, usually excluding the first few doses to mitigate the starting-point bias. This approach antedates deeper understanding of UDDs' Markov properties, but its success in numerical evaluations relies upon the eventual sampling from , since the latter is centered roughly around
The single most popular among these averaging estimators was introduced by Wetherill et al. in 1966, and only includes reversal points (points where the outcome switches from 0 to 1 or vice versa) in the average. See example on the right. In recent years, the limitations of averaging estimators have come to light, in particular the many sources of bias that are very difficult to mitigate. Reversal estimators suffer from both multiple biases (although there is some inadvertent cancelling out of biases), and increased variance due to using a subsample of doses. However, the knowledge about averaging-estimator limitations has yet to disseminate outside the methodological literature and affect actual practice.
By contrast, regression estimators attempt to approximate the curve describing the dose-response relationship, in particular around the target percentile. The raw data for the regression are the doses on the horizontal axis, and the observed toxicity frequencies,
on the vertical axis. The target estimate is the abscissa of the point where the fitted curve crosses
Probit regression has been used for many decades to estimate UDD targets, although far less commonly than the reversal-averaging estimator. In 2002, Stylianou and Flournoy introduced an interpolated version of isotonic regression to estimate UDD targets and other dose-response data. More recently, a modification called "centered isotonic regression" was developed by Oron and Flournoy, promising substantially better estimation performance than ordinary isotonic regression in most cases, and also offering the first viable interval estimator for isotonic regression in general. Isotonic regression estimators appear to be the most compatible with UDDs, because both approaches are nonparametric and relatively robust.
- Durham, SD; Flournoy, N. "Up-and-down designs. I. Stationary treatment distributions.". In Flournoy, N; Rosenberger, WF (eds.). IMS Lecture Notes Monograph Series. 25: Adaptive Designs. pp. 139–157.
- Dixon, WJ; Mood, AM (1948). "A method for obtaining and analyzing sensitivity data". Journal of the American Statistical Association. 43: 109–126. doi:10.1080/01621459.1948.10483254.
- von Békésy, G (1947). "A new audiometer". Acta Oto-Laryngologica. 35: 411–422. doi:10.3109/00016484709123756.
- Anderson, TW; McCarthy, PJ; Tukey, JW (1946). 'Staircase' method of sensitivity testing (Technical report). Naval Ordnance Report. 65-46.
- Flournoy, N; Oron, AP. "Up-and-Down Designs for Dose-Finding". In Dean, A (ed.). Handbook of Design and Analysis of Experiments. CRC Press. pp. 858–894.
- Stylianou, MP; Flournoy, N (2002). "Dose finding using the biased coin up-and-down design and isotonic regression". Biometrics. 58: 171–177. doi:10.1111/j.0006-341x.2002.00171.x.
- Oron, AP; Flournoy, N (2017). "Centered Isotonic Regression: Point and Interval Estimation for Dose-Response Studies". Statistics in Biopharmaceutical Research. 9: 258–267. doi:10.1080/19466315.2017.1286256.
- Leek, MR (2001). "Adaptive procedures in psychophysical research". Perception and Psychophysics. 63: 1279–1292. doi:10.3758/bf03194543.
- Pace, NL; Stylianou, MP (2007). "Advances in and Limitations of Up-and-down Methodology: A Precis of Clinical Use, Study Design, and Dose Estimation in Anesthesia Research". Anesthesiology. 107: 144–152. doi:10.1097/01.anes.0000267514.42592.2a.
- Oron, AP; Hoff, PD (2013). "Small-Sample Behavior of Novel Phase I Cancer Trial Designs". Clinical Trials. 10: 63–80. doi:10.1177/1740774512469311.
- Oron, AP; Hoff, PD (2009). "The k-in-a-row up-and-down design, revisited". Statistics in Medicine. 28: 1805–1820. doi:10.1002/sim.3590.
- Diaconis, P; Stroock, D (1991). "Geometric bounds for eigenvalues of Markov chain". The Annals of Applied Probability. 1: 36–61. doi:10.1214/aoap/1177005980.
- Gezmu, M; Flournoy, N (2006). "Group up-and-down designs for dose-finding". Journal of Statistical Planning and Inference. 6: 1749–1764.
- Wetherill, GB; Levitt, H (1963). "Sequential estimation of quantal response curves". Journal of the Royal Statistical Society, Series B. 25: 1–48. doi:10.1111/j.2517-6161.1963.tb00481.x.
- Wetherill, GB (1965). "Sequential estimation of points on a Psychometric Function". British Journal of Mathematical and Statistical Psychology. 18: 1–10. doi:10.1111/j.2044-8317.1965.tb00689.x.
- Gezmu, Misrak (1996). The Geometric Up-and-Down Design for Allocating Dosage Levels (PhD). American University.
- Garcia-Perez, MA (1998). "Forced-choice staircases with fixed step sizes: asymptotic and small-sample properties". Vision Research. 38 (12): 1861–81. doi:10.1016/s0042-6989(97)00340-4.
- Wetherill, GB; Chen, H; Vasudeva, RB (1966). "Sequential estimation of quantal response curves: a new method of estimation". Biometrika. 53: 439–454. doi:10.1093/biomet/53.3-4.439. | <urn:uuid:cd8c0eb1-bf7e-4d7f-a3fc-cdddd931e062> | CC-MAIN-2022-33 | https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Up-and-Down_Designs | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00697.warc.gz | en | 0.898253 | 4,120 | 3.390625 | 3 |
Is there a rational way we can move forward and improve COVID outcomes? Dr. Peter McCullough shares his insights here.
- Those at highest risk of dying from COVID-19 are also at highest risk of dying from the COVID shot. The shots are also causing severe heart damage in younger people whose risk of dying from COVID is inconsequential
- While you only get at most six months’ worth of protection from the COVID shot, each injection will cause damage for 15 months as your body continuously produces toxic spike protein
- The spike protein is responsible for COVID-19-related heart and vascular problems, and it has the same effect when produced by your own cells. It causes blood clots, myocarditis and pericarditis, strokes, heart attacks and neurological damage, just to name a few
- The safety signal is very clear, with 19,249 deaths having been reported to the U.S. Vaccine Adverse Event Reporting System as of November 19, 2021. Historically, drugs and vaccines are pulled off the market after about 50 suspected deaths
- Children aged 12 to 17 are five times more likely to be hospitalized with COVID jab-induced myocarditis than they are to be hospitalized for COVID-19 infection
The video above features Dr. Peter McCullough, a cardiologist, internist and epidemiologist, and editor of two peer-reviewed journals, who has been on the media and medical frontlines fighting for early COVID treatment. McCullough has also been outspoken about the potential dangers of the COVID shots, and the lack of necessity for them. Curiously, agencies that are currently calling the shots do not have the authority to dictate how medicine is practiced.
The U.S. Food and Drug Administration, for example, has no power to tell doctors what to do or how to treat patients. The National Institutes of Health is a government research organization and cannot tell doctors how to treat patients.
Ditto for the U.S. Centers for Disease Control and Prevention, which is an epidemiologic analysis organization. It is the job of practicing doctors to identify appropriate and effective treatment protocols, which is precisely what McCullough has been doing since the start of this pandemic.
In August 2020, McCullough’s landmark paper “Pathophysiological Basis and Rationale for Early Outpatient Treatment of SARS-CoV-2 Infection” was published online in the American Journal of Medicine.1
A follow-up paper, “Multifaceted Highly Targeted Sequential Multidrug Treatment of Early Ambulatory High-Risk SARS-CoV-2 Infection (COVID-19)” was published in Reviews in Cardiovascular Medicine in December 2020.2 It became the basis for a home treatment guide.
COVID Shots Are Dangerous and Ineffective
When it comes to the COVID injections, McCullough cites research showing those at highest risk of dying from COVID-19 are also at highest risk of dying from the COVID shot. Additionally, the shots are causing severe heart damage in younger people whose risk of dying from COVID is inconsequential.
He points out the safety signal is very clear, with 19,249 deaths having been reported to the U.S. Vaccine Adverse Event Reporting System (VAERS) as of November 19, 2021.3
The signal is also consistent both internally and externally. A number of side effects are reported in high numbers, and very close to the time of injection, which validates the suspicion that the shots are at fault. The U.S. data are also consistent with data from other countries, such as the Yellow Card system in the U.K.
Despite that, not a single safety review has been conducted to weed out risk factors and the like. “We’re almost a year into the program and there’s been no attempt at risk mitigation,” McCullough says. At the same time, there have been gross attempts to coerce Americans into taking the shots — everything from free beer or a free lap dance, to million-dollar lotteries and paid scholarships to state universities.
Such enticements are an undeniable violation of research ethics that strictly forbid any and all kinds of coercion of human subjects. As suspected and predicted, no sooner had bribery stopped working than government officials started talking about vaccine mandates.
President Biden infamously stated that his patience with “vaccine hesitancy” was “wearing thin.” The insinuation was that if people didn’t get the shot, they’d face serious repercussions, and we’re now seeing those repercussions play out day by day, as people are being fired and kicked out of school for refusing the jab.
Meanwhile, they haven’t even determined which vaccine is the most effective, which is remarkable. If government really wanted to end the pandemic with a vaccine, wouldn’t they determine which shot works the best and promote the use of that? But no, they tell us any shot will do.
“The fact that there’s no safety report, they’re not telling you if you’re taking the best vaccine, the fact that it’s kind of in a distorted way linked to your ability to work and go to school, that we’re violating the Nuremberg Code, violating the declaration of Helsinki — it’s just not adding up. It’s not looking good for those who are promoting the vaccine,” McCullough says.
Add to all that the now-clear finding that the shots offer only limited protection for a very short time — six months at best. According to McCullough, there are more than 20 studies showing efficacy drops to nothing at the six-month mark. They’ve also had very limited effectiveness against the Delta variant, which has been the predominant strain for several months.
Why Booster Treadmill Is Such a Health Hazard
I’ve often stated that, in all likelihood, your risk of side effects will rise with each additional shot. McCullough cites research showing your body will produce the toxic SARS-CoV-2 spike protein for 15 months.
If your body is still producing the spike protein — which is what’s causing the blood clots and cardiovascular damage — and you take an additional shot every six months, there will come a time when your body simply cannot withstand the damage being caused by all the spike protein being produced.
Also consider this: While you only get at most six months’ worth of protection from any given shot, each injection will cause damage for 15 months. If we continue with boosters, eventually, it’s going to be impossible to ever clear out the spike protein.
While the spike protein is the part of the virus chosen as the antigen, the part that triggers an immune response, it’s also the part of the virus that causes the worst disease. The spike protein is responsible for COVID-19-related heart and vascular problems, and it has the same effect when produced by your own cells.
It causes blood clots, myocarditis and pericarditis, strokes, heart attacks and neurological damage, just to name a few. As noted by McCullough, the spike protein of this virus was genetically engineered to be more dangerous to humans than any previous coronavirus, and that is what the COVID shots are programming your cells to produce. “They’re just grossly unsafe for human use,” McCullough says.
Myocarditis Will Likely Be Widespread
He goes on to discuss research from 2017,4 which showed myocarditis in children and youth occurs at a rate of four cases per million per year. Assuming there are 60 million American children, the background rate for myocarditis would be 240 cases a year. How many cases of myocarditis have been reported to VAERS following COVID injection so far? 14,428 as of November 19, 2021.5
“Doctors have never seen so many cases of myocarditis,” McCullough says, citing research showing that among children between the ages of 12 and 17, 87% are hospitalized after receiving the shot. “That’s how dangerous it is,” he says. “It is frequent, and it is severe.”
Yet the FDA claims myocarditis after the COVID shot is “rare and mild.” We’re now also getting reports of fatal cases of myocarditis in adults in their 30s and 40s. “Myocarditis right now looks like an unqualified disaster,” McCullough says, both for younger people and adults.
“Children aged 12 to 17 are five times more likely to be hospitalized with COVID jab-induced myocarditis than they are to be hospitalized for COVID infection.”
Sadly, children also reap no benefit from the shots, so it’s all risk and no benefit for them. McCullough points out there has been no recorded school outbreaks and no child-to-teacher transmission. He estimates 80% of school aged children are already immune, which would explain this.
Meanwhile, research cited in the interview found that children aged 12 to 17 are five times more likely to be hospitalized with COVID jab-induced myocarditis than they are to be hospitalized for COVID infection. These data counter the claim that COVID-induced heart problems are a far greater problem than “vaccine”-induced heart damage.
And let’s not forget, if you take a COVID shot, you have a 100% chance of being exposed to whatever risk is associated with that shot. On the other hand, if you decline the injection, it’s not 100% chance you’ll get COVID-19, let alone die from it. You have a less than 1% chance of being exposed to SARS-CoV-2 and getting sick.
So, it’s 100% deterministic that taking the shot exposes you to the risks of the shot, and less than 1% deterministic that you’ll get COVID if you don’t take the shot.
COVID-19 Unrelated to Vaccination Rates
As noted by McCullough, rates of COVID are higher now in the highest vaccinated areas than they were before the vaccine rollout. That too tells us they aren’t working and not worth the risk.
He cites research6 published September 30, 2021, in the European Journal of Epidemiology, which found no relationship between COVID-19 cases and levels of vaccination in 68 countries worldwide and 2,947 counties in the U.S. If anything, areas with high vaccination rates had slightly higher incidences of COVID-19. According to the authors:7
“[T]he trend line suggests a marginally positive association such that countries with higher percentage of population fully vaccinated have higher COVID-19 cases per 1 million people.”
Iceland and Portugal, for example, where more than 75% of their populations are fully vaccinated, had more COVID-19 cases per 1 million people than Vietnam and South Africa, where only about 10% of the populations are fully vaccinated.8 Data from U.S. counties showed the same thing. New COVID-19 cases per 100,000 people were “largely similar,” regardless of the percentage of a state’s population that was fully vaccinated.
“There … appears to be no significant signaling of COVID-19 cases decreasing with higher percentages of population fully vaccinated,” the authors wrote.9 Notably, out of the five U.S. counties with the highest vaccination rates — ranging from 84.3% to 99.9% fully vaccinated — four of them were on the U.S. Centers for Disease Control and Prevention’s “high transmission” list. Meanwhile, 26.3% of the 57 counties with “low transmission” have vaccination rates below 20%.
The study even accounted for a one-month lag time that could occur among the fully vaccinated, since it’s said that it takes two weeks after the final dose for “full immunity” to occur. Still, “no discernable association between COVID-19 cases and levels of fully vaccinated” was observed.10
Hospitalization rates for severe COVID infection have also risen, from 0.01% in January 2021 to 9% in May 2021, and the COVID death rate rose from zero percent to 15.1% in that same timeframe.11 In short, everything is getting worse, not better, the more people get these shots.
Allowing natural immunity to build is really the only rational way forward. But then again, the COVID jabs aren’t about protecting public health. They’re about ushering in a socio-economic control system via vaccine passports, which is something McCullough doesn’t discuss in this interview. Nothing makes sense if you look at it from a medical standpoint. It only makes sense if you see it for what it is, which is a control system.
Natural Immunity Is ‘Infinitely Better’ Than Vaccine Immunity
According to McCullough, “natural immunity is infinitely better than vaccine immunity,” and studies have borne that out time and again. The reason natural immunity is superior to vaccine-induced immunity is because viruses contain five different proteins.
The COVID shot induces antibodies against just one of those proteins, the spike protein, and no T cell immunity. When you’re infected with the whole virus, you develop antibodies against all parts of the virus, plus memory T cells.
This also means natural immunity offers better protection against variants, as it recognizes several parts of the virus. If there are significant alternations to the spike protein, as with the Delta variant, vaccine-induced immunity can be evaded. Not so with natural immunity, as the other proteins are still recognized and attacked.
Here’s a sampling of scholarly publications that have investigated natural immunity as it pertains to SARS-CoV-2 infection. There are several more in addition to these:12
- Science Immunology October 202013 found that “RBD-targeted antibodies are excellent markers of previous and recent infection, that differential isotype measurements can help distinguish between recent and older infections, and that IgG responses persist over the first few months after infection and are highly correlated with neutralizing antibodies.”
- The BMJ January 202114 concluded that “Of 11, 000 health care workers who had proved evidence of infection during the first wave of the pandemic in the U.K. between March and April 2020, none had symptomatic reinfection in the second wave of the virus between October and November 2020.”
- Science February 202115 reported that “Substantial immune memory is generated after COVID-19, involving all four major types of immune memory [antibodies, memory B cells, memory CD8+ T cells, and memory CD4+ T cells].About 95% of subjects retained immune memory at ~6 months after infection. Circulating antibody titers were not predictive of T cell memory. Thus, simple serological tests for SARS-CoV-2 antibodies do not reflect the richness and durability of immune memory to SARS-CoV-2.”A 2,800-person study found no symptomatic reinfections over a ~118-day window, and a 1,246-person study observed no symptomatic reinfections over 6 months.
- A February 2021 study posted on the prepublication server medRxiv16 concluded that “Natural infection appears to elicit strong protection against reinfection with an efficacy ~95% for at least seven months.”
- An April 2021 study posted on medRxiv17 reported “the overall estimated level of protection from prior SARS-CoV-2 infection for documented infection is 94.8%; hospitalization 94.1%; and severe illness 96·4%. Our results question the need to vaccinate previously-infected individuals.”
- Another April 2021 study posted on the preprint server BioRxiv18 concluded that “following a typical case of mild COVID-19, SARS-CoV-2-specific CD8+ T cells not only persist but continuously differentiate in a coordinated fashion well into convalescence, into a state characteristic of long-lived, self-renewing memory.”
- A May 2020 report in the journal Immunity19 confirmed that SARS-CoV-2-specific neutralizing antibodies are detected in COVID-19 convalescent subjects, as well as cellular immune responses. Here, they found that neutralizing antibody titers do correlate with the number of virus-specific T cells.
- A May 2021 Nature article20 found SARS-CoV-2 infection induces long-lived bone marrow plasma cells, which are a crucial source of protective antibodies. Even after mild infection, anti-SARS-CoV-2 spike protein antibodies were detectable beyond 11 months’ post-infection.
- A May 2021 study in E Clinical Medicine21 found “antibody detection is possible for almost a year post-natural infection of COVID-19.” According to the authors, “Based on current evidence, we hypothesize that antibodies to both S and N-proteins after natural infection may persist for longer than previously thought, thereby providing evidence of sustainability that may influence post-pandemic planning.”
- Cure-Hub data22 confirm that while COVID shots can generate higher antibody levels than natural infection, this does not mean vaccine-induced immunity is more protective. Importantly, natural immunity confers much wider protection as your body recognizes all five proteins of the virus and not just one. With the COVID shot, your body only recognizes one of these proteins, the spike protein.
- A June 2021 Nature article23 points out that “Wang et al. show that, between six and 12 months after infection, the concentration of neutralizing antibodies remains unchanged. That the acute immune reaction extends even beyond six months is suggested by the authors’ analysis of SARS-CoV-2-specific memory B cells in the blood of the convalescent individuals over the course of the year.These memory B cells continuously enhance the reactivity of their SARS-CoV-2-specific antibodies through a process known as somatic hypermutation. The good news is that the evidence thus far predicts that infection with SARS-CoV-2 induces long-term immunity in most individuals.”
Reinfection Is Very Rare
McCullough stresses there is also no need to worry about reinfection if you’ve already had COVID once. The fact is, while breakthrough cases continue among those who have gotten one or more COVID-19 injections, it’s extremely rare to get COVID-19 after you’ve recovered from the infection.
How rare? Researchers from Ireland conducted a systematic review including 615,777 people who had recovered from COVID-19, with a maximum duration of follow-up of more than 10 months.24
“Reinfection was an uncommon event,” they noted, “with no study reporting an increase in the risk of reinfection over time.” The absolute reinfection rate ranged from 0% to 1.1%, while the median reinfection rate was just 0.27%.25 26 27
Another study revealed similarly reassuring results. It followed 43,044 SARS-CoV-2 antibody-positive people for up to 35 weeks, and only 0.7% were reinfected. When genome sequencing was applied to estimate population-level risk of reinfection, the risk was estimated at 0.1%.28
There was no indication of waning immunity over seven months of follow-up, unlike with the COVID-19 injection, which led the researchers to conclude that “Reinfection is rare. Natural infection appears to elicit strong protection against reinfection with an efficacy >90% for at least seven months.”29
“It’s a one-and-done,” McCullough says. If you’ve had it once, you won’t get it again. He also advises against using PCR testing after you’ve had confirmed COVID-19 once, as any subsequent positive tests are just going to be false positives.
Early Treatment Options
In closing, should you get COVID-19, know there are several very effective early treatment options, and early treatment is key, both for preventing severe infection and preventing “long-haul COVID.” Here are a few suggestions:
- Oral-nasal decontamination — The virus, especially the Delta variant, replicates rapidly in the nasal cavity and mouth for three to five days before spreading to the rest of the body, so you want to strike where it’s most likely to be found right from the start.Research30 has demonstrated that irrigating your nasal passages with 2.5 milliliters of 10% povidone-iodine (an antimicrobial) and standard saline, twice a day, is an effective remedy.Another option that was slightly less effective was using a mixture of saline with half a teaspoon of sodium bicarbonate (an alkalizer). You can also gargle with these to kill viruses in your mouth and throat. When done routinely, it can be a very effective preventive strategy. You can find printable treatment guides on TruthForHealth.org.
- Nebulized peroxide — A similar strategy is to use nebulized hydrogen peroxide, diluted with saline to a 0.1% solution. Both hydrogen peroxide and saline31 32 have antiviral effects.In a May 10, 2021, Orthomolecular Medicine press release,33 Dr. Thomas E. Levy — board-certified in internal medicine and cardiology — discussed the use of this treatment for COVID-19 specifically. Levy has in fact written an entire book on peroxide nebulization called “Rapid Virus Recovery,” which you can download for free from MedFox Publishing.
- Vitamin D optimization — Research has shown having a vitamin D level above 50 ng/mL brings the risk of COVID mortality down to near-zero.34
- Other key nutraceuticals — Vitamin C, zinc, quercetin and NAC all have scientific backing.
- Key drugs — For acute infection, ivermectin, hydroxychloroquine or monoclonal antibodies can be used. While monoclonal antibodies and hydroxychloroquine must be used early on in the disease process, ivermectin has been shown to be effective in all stages of the infection.Doxycycline or azithromycin are typically added as well, to address any secondary bacterial infection, as well as inhaled budesonide (a steroid). Oral steroids are used on and after the fifth day for pulmonary weakness and aspirin or NAC can be added to reduce the risk of clotting. In the interview, McCullough discusses the use of each of these, and other, drugs.One drug I disagree with is full-strength aspirin. I believe a potentially better, at least safer, alternative would be to use the enzymes lumbrokinase and serrapeptase, as they help break down and prevent blood clots naturally.
Access this content 48 hours faster by subscribing to the FREE Mercola Health Newsletter today.
Disclaimer: The entire contents of this website are based upon the opinions of Dr. Mercola, unless otherwise noted. Individual articles are based upon the opinions of the respective author, who retains copyright as marked.
The information on this website is not intended to replace a one-on-one relationship with a qualified health care professional and is not intended as medical advice. It is intended as a sharing of knowledge and information from the research and experience of Dr. Mercola and his community. Dr. Mercola encourages you to make your own health care decisions based upon your research and in partnership with a qualified health care professional. The subscription fee being requested is for access to the articles and information posted on this site, and is not being paid for any individual medical advice.
If you are pregnant, nursing, taking medication, or have a medical condition, consult your health care professional before using products based on this content.1 | <urn:uuid:bf93e22d-36d0-4037-9de8-dca73dcd49aa> | CC-MAIN-2022-33 | https://takeaction4freedom.com/the-covid-shots-are-killing-people/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571758.42/warc/CC-MAIN-20220812200804-20220812230804-00697.warc.gz | en | 0.954381 | 5,100 | 2.609375 | 3 |
Last updated: November 2020
Table of Contents
Many interactions with the Libraries or library resources result in data about you being recorded. This policy will let you know what may be collected, how it is used and protected, and when it may be shared. We try to collect only the information which is necessary to provide you with library services, and will give you as many options as possible to control your own data.
We do our best to safeguard your privacy as a user of our systems; however, there are also steps you can take to minimize the creation of personally identifiable information in your general online interactions. To learn more about your privacy generally, and find tools to help safeguard it, these resources are a good place to start: Library Freedom Project, Electronic Frontier Foundation, Cookies and You. The Libraries also maintain visitor computers with access to most library resources, which can provide a more anonymous access option.
2 Data Collection
The MIT Libraries tries to minimize the amount of personal information necessary to use library services. Many library services can be accessed without personal information being collected.
The patron data that the MIT Libraries do collect is classified using guidelines published by MIT’s Written Information Security Program (WISP), which determine how long it is retained and how it is stored. This classification places systems into categories of low, medium, or high risk. Information about this program, the risk categories, and required security steps can be found at infoprotect.mit.edu. Low risk data is data which is already public or which wouldn’t be harmful if released. High risk data is data which is subject to legal requirements or would cause serious safety, financial, or operational harm if released, and medium risk data is in between. The Libraries treat all information which identifies the intellectual pursuits of students – such as search logs and reference questions – as high risk, consistent with the MIT FERPA policy. Any information related to unpublished research as well as identifying information paired with an MIT ID is generally categorized as medium risk or higher. Additional guidance on risk determinations for specific information comes from MIT Risk Management.
Sections 2.1 through 2.5 below describe the sources from which we receive information about patrons. Generally, information at all risk levels may be received through all of these means.
2.1 Data provided directly by the patron
Patrons provide information to the Libraries when they hand over their MIT ID to check out materials, search for materials via a web-based discovery tool like the Barton catalog or BartonPlus, submit a web form, interact with Libraries staff directly or by chat or email, or attend Libraries events.
2.2 Data provided by the software used by the patron
The tools used by patrons when using Library services provide certain information without the patron’s intervention. An example of this type of information includes a computer’s IP address, and the user agent string identifying the patron’s web browser. This collection can occur when a patron accesses a website or web application maintained by the Libraries.
2.3 Data provided by other systems
The Libraries receive information about patrons from other systems. This includes records from MIT, information received from Touchstone (if a user logs into a library website), and payment transaction IDs from MIT’s payment processor.
We also receive aggregate data about the use of library resources when those resources are provided by third-party vendors, such as the number of total visitors to a third-party service or total download counts of specific content. This data is provided in aggregate and wherever possible in accordance with the Project COUNTER Code of Practice, which does not allow us to identify individual users.
2.4 Data generated by the Libraries
When a patron uses resources such as the Barton catalog or the QuickSearch application, a unique identifier is generated for use by that application which is used during that particular access session. Use of library-provided computers and equipment, for example in the GIS Lab or at library access stations, may also generate use logs.
Security cameras are in place in certain locations within the libraries, including the 24×7 spaces in Barker, Dewey and Hayden Libraries, and the Distinctive Collections Reading Room. These cameras are in place for the safety and security of library users, collections, and equipment. Security camera data is only viewable by MIT Police. It is kept as long as MIT Police determine it is needed and MIT Police make the decision if footage should be reviewed or shared.
Gate counters are installed at each library entrance and 24×7 entrance to count aggregate visits, and entry to 24×7 spaces requires use of an MIT ID. Gate counters do not capture images of individuals but rather count the “people-like shapes” passing through. MIT Physical Security manages access to this data and provides the MIT Libraries with reports of total entries per day and time, on a monthly basis.
2.5 Data collection by third parties
Additional examples of third-party systems present in Libraries’ services include:
- Submitting payment information to MIT’s designated payment service.
2.6 Public access computers
The Libraries have public access computers in Hayden, Barker, Dewey, Rotch, and Lewis Music libraries, as well as the Department of Distinctive Collections Reading Room. Some computers are Athena Clusters maintained by MIT IS&T, and require a Kerberos ID for access. Additional computers provide access for library patrons unaffiliated with MIT (Open Access computers) and at quick lookup kiosks; these computers do not require authentication, but have access to a limited subset of library electronic resources (based on our license agreements for that content). All of our public access computers delete user data daily, and data specific to your browsing session is deleted either when you close the internet browser (Open Access computers and quick lookup kiosks), or according to your account settings (when authentication is required).
If you are particularly privacy-sensitive, using the public access computers can provide an additional layer of anonymity to your use. Browsing activity on personal devices can sometimes be linked to you, even when you haven’t volunteered your personal information. Using a public computer reduces that risk, and you can consider further anonymizing yourself by using privacy-protective tools and practices.
3 Who at MIT has access to the data we collect
User data categorized as high risk is only accessible to staff who need access to the information in order to perform the function for which it was collected – for example, if you contact the libraries for research help, library staff will use the information you provide in order to answer your questions and provide you with library resources, but will not disclose your research questions to anyone else. This includes others at MIT – so, for example, we will not share research questions identifiable to you with faculty and staff elsewhere at the Institute without your prior consent.
Data which is higher risk when directly associated with an individual user may become lower risk once de-identified (Section 7.3 describes our de-identification practices), and handled accordingly. For example, de-identified records of reference questions are retained for continued staff use (assessment of our services, identifying frequent questions in order to develop additional resources, staff training) and are then treated as medium risk data. Medium risk data may be accessible to all Libraries staff, or may be limited to specific individuals or groups as appropriate.
Low risk data may be reasonably disclosed publicly by the Libraries. Low risk data includes information made public by the originator (for example submissions to an idea bank), and could also include aggregated, de-identified information, such as library statistics we report to the Institute. In general, even if the data is low risk, it will be de-identified before public release, unless you’ve otherwise indicated that you are ok with public identification with the data. Low risk data may also be handled as if it were higher risk if stored alongside higher risk data.
Some Libraries records are required to be maintained as a permanent record of the Institute for insurance and security purposes. These records are only accessible to staff who need access for a legitimate purpose, as authorized by the head of the Libraries unit responsible for those records or if required by law. For example, access logs for the Distinctive Collections Reading Room are retained as permanent records in accordance with ACRL/RBMS guidelines and accepted archival practice. The permanent records of many MIT departments, labs, and centers (including the Libraries) enter the MIT Institute Archives as historical records once they are no longer actively in use, and may include PII. Access to archival records is governed by the Institute Records Access Policy.
4 Sharing data with third parties
4.1 Authentication for your visit to a third-party site
For library services where you interact directly with a third-party platform requiring MIT authentication, MIT’s authentication system will pass some information to the third-party vendor in order to enable your access. For third parties that have met the criteria for the InCommon Federation’s Research and Scholarship category, the information released is described by their policies. For other parties, we release only those attributes that are needed for that service – typically less than those available via InCommon.
4.2 Government Requests for Library Records
Information about individual library patrons will not be made available to any agency of state, federal, or local government except pursuant to such process, order, or subpoena as may be authorized under the authority of, and pursuant to, federal, state, or local law relating to civil, criminal, or administrative discovery procedures or investigatory powers.
In the case of court orders or subpoenas for information about an individual, that individual will ordinarily be notified of the request as soon as possible, unless a court order prohibits such notification, e.g., the USA Patriot Act. Information requested by subpoena or court order may only be released by an authorized officer of the Institute. For the Libraries, the Institute’s authorized officer is the Director of Libraries.
4.3 Non-personally identifiable information
De-identified data about the use of our collections may be shared externally. Such data generally describes the Libraries overall activities. An example of this are the statistics which we provide to the annual survey of the Association of Research Libraries.
5 What we do with your data
The Libraries use the data which we have collected for a targeted set of purposes. Primarily, we use this information to provide the services which you request (such as looking up the list of materials you have currently checked out, or allowing you to renew those items).
We also use the information we receive to improve our existing services (for example, to troubleshoot reported problems). We pursue these types of improvements using de-identified records wherever possible, although in some cases the work may involve access to records before that de-identification occurs.
The Libraries do not sell the information that we have collected to any other organization.
6 Data Retention
The Libraries retain the data we receive according to a life cycle that is informed by the data’s risk classification and the operational need for that data. This life cycle starts when the information is recorded and is then in active use as long as it is needed (some information, if in frequent or continuous use, may therefore remain active for long periods of time). Information is defined as in active use while it is still needed for the purpose it was provided for.
Once the active need for the information has expired, we retain it for a specified period before it is deleted. These retention periods are informed by the risk category of the information and the ultimate disposition of Institute records is governed by records retention policies at MIT in accordance with MIT policy 13.4 and the records retention schedule of the MIT Libraries.
Data classified as high risk is, unless otherwise described below, retained by the MIT Libraries for 30 days or fewer after active use. For example, we retain logs of application use in order to monitor our programs for problems and optimize their performance via a third-party vendor, and this data is purged in their system after two weeks. General collections circulation records (records of what you check out) are kept for seven days before being de-identified.
Data which is considered high risk when associated with a personal identifier such as your name or email address may be retained for longer than 30 days after being disassociated with PII if there is an ongoing need for their retention (for example, records of reference questions received by the libraries may be deidentified and retained in order to assess the ongoing reference needs of the MIT community and for staff training purposes). If such data could, in theory, be re-identified with a particular individual, it may be reclassified as medium risk, and treated accordingly. If such data would be impossible to re-identify (for example, aggregate use statistics of library resources), then it may be reclassified as low risk.
Medium risk data is retained no longer than 5 years after active use. Low risk data may be retained indefinitely or discarded according to Institute retention schedules.
Exceptions to the above-stated retention periods may be warranted in specific cases. If you have questions about the retention period of specific types of data, please contact us at email@example.com.
6.1.1 Financial Records
Records which include financial information are retained, regardless of risk classification, in accordance with MIT VPF Policy.
6.1.2 Distinctive Collections
Records of access to MIT Distinctive Collections and the Distinctive Collections Reading Room are maintained for security and insurance purposes, and may become permanent records according to the records retention and disposition schedule of the MIT Libraries.
7 Data Integrity & Security
The Libraries protect the privacy of patron data through multiple avenues that are mutually reinforcing. First, we try to collect the most minimal set of information needed to provide the requested service. Second, we follow relevant security recommendations to ensure the security of the data that we do collect. Third, we discard your personal information as soon as possible and feasible.
7.1 Minimal Information
The services that you request can sometimes be provided with little or no information that can uniquely identify you. Many resources grant access solely because you are connected via the MIT network. The only information known about you from this access method is the MIT-based IP address of your device. IP addresses are sometimes traceable back to an individual user by MIT IS&T, however that information is not transmitted to the Libraries or the third party website.
Other Libraries-provided services may require you to log in via MIT’s Touchstone service rather than connecting you automatically or having you log in directly to the website itself. This results in the website knowing less about you than if you had created an account on your own. For more information about how this approach can protect your personal information, please visit “How Shibboleth Works.”
7.1.1 Library equipment
7.2 Security Recommendations
The Libraries follow current recommended practices to ensure the security and integrity of the data that we collect, including relevant encryption standards and prompt updates to address system vulnerabilities. The extent of these practices is informed by the risk classification applicable for each type of data.
The Libraries also take care when selecting companies that provide the services which we use. When selecting vendors to contribute to our digital infrastructure, the Libraries require companies to comply with the same practices which we would follow when building a service locally.
Additionally, we avoid combining patron data unnecessarily. There is no single system of record for data about library patrons. For example, records about which items a patron has checked out are never combined with information about the web searches that the patron conducts, and both are kept separate from the reference questions that a patron asks.
7.2.1 Using cloud services
We use cloud-based service providers, where appropriate, after vetting their storage procedures and security arrangements. For example, the Libraries chose Logz.io as a log analysis service after confirming that patron data would not be accessible by Logz staff.
When records need to be kept for reporting or analysis but no longer need to be identified with you, the Libraries take appropriate steps to remove that identification. The specific steps taken vary depending on how the records will be used as well as relevant best practices. These steps may include removal of values, the generation of aggregate summaries, or the deletion of the record. The timeframe over which this occurs is described in section 6.
9 Your rights with respect to your data
The MIT Libraries is committed to protecting your data, and providing you with transparent insight into how data about you is used and protected. To the extent we can, we minimize the data that is collected and stored about you, as described above. When we do store data about you, you have the right to:
- Access: you have the right to obtain a copy of data about you which we store
- Rectification: you have the right to correct inaccurate information or complete incomplete information
- Erasure: you have the right to have your personal data deleted upon request (unless certain circumstances apply)
- Restriction or objection to processing: you can request that we limit the processing of your personal information, or cease processing your personal information (under certain conditions)
- Data portability: you can request that we transfer data collected to another or directly to you.
Upon a request to erase information, we will maintain a core set of personal data to ensure we do not contact you inadvertently in the future. We may also need to retain some financial information for legal purposes, including US IRS compliance. In the event of an actual or threatened legal claim, we may retain your information for purposes of establishing, defending against, or exercising our rights with respect to such claim. De-identified data may also be retained, as described in this policy.
You can also contact MIT data protection by emailing firstname.lastname@example.org.
The MIT Libraries will review and make any necessary updates to this policy and its implementation annually, or as necessary for changes in law or MIT policy. The policy is also subject to review by the MIT Audit Division. If a data breach is known or suspected, the Libraries will work with MIT’s Infoprotect personnel through their policies and procedures.
To maintain our compliance with this policy the MIT Libraries maintains an inventory of systems that contain information about patrons in order to keep track of what information is stored. As described in Section 2 each system is classified as containing high, medium, or low risk data and each risk category has an applicable checklist describing the activities required to be compliant with the Infoprotect guidelines. Each system has a designated technical owner who is responsible for ongoing application of best practices and compliance. The data inventory will be updated as necessary and at least annually. Additionally, the Libraries will periodically review systems and practices for privacy concerns and address new threats, controls, and expectations. Libraries staff who have frequent contact with patron data also receive privacy training, and are responsible for maintaining personal practices in compliance with this policy.
Any significant changes to this policy will be accompanied by a prominent notice on the Libraries websites. If you would like to receive an email notification of policy changes or if you have any questions, concerns, comments about your privacy through the MIT Libraries you can email email@example.com. This policy shall apply from the date of approval and forward, although measures protective of your privacy may be applied retroactively to data currently in our systems upon approval.
Appendix A: Personally Identifiable Information Definition and Examples
We have adopted the definition of “Personal Data” from the General Data Protection Regulation (GDPR) as the basis of the Personally Identifiable Information (PII) we collect at the Libraries. That definition is “any information relating to an identified or identifiable natural person; an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”
Modern data capabilities make it surprisingly – disturbingly – easy to identify someone from minimal information and link that information to a preexisting profile. Because of that, we consider a broad range of information as potentially identifying.
Examples of PII (not all of which is collected or accessible to the Libraries) which could individually or in conjunction identify you include the following: names, student or employee ID numbers, email addresses, physical addresses (local and permanent), telephone numbers, dates of birth, social security numbers, race, gender, prefix or title, sexual orientation, accessibility status, names of family members or relatives, emergency contacts, driver’s license numbers, credit card numbers, bank account numbers, passport numbers, citizenship status, income, financial information (e.g. fines, tuition, financial aid), transaction logs, content of transactions (e.g. emails), student coursework (anything prepared by a student for a class), library circulation records, log entries generated by a single user, or IP addresses. These categories are frequently PII, however the scope of PII covered by this policy also includes any other information that could potentially be linked back to you. | <urn:uuid:fb7ff55b-f6eb-4af2-a9d2-0d00017b97eb> | CC-MAIN-2022-33 | https://libraries.mit.edu/about/policies/privacy-policy/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00495.warc.gz | en | 0.925301 | 5,329 | 2.609375 | 3 |
Is it time to teach your child about verbs? If so, you might be feeling a little anxious about where to start–but don’t worry because I’m here to help! Watch my video to discover the easiest and most memorable way to teach your children all about this important part of speech.
When it comes to Montessori Elementary language work, nothing feels quite as overwhelming to homeschooling parents as the Grammar Box Materials! That’s why I am dedicated to showing you that this amazing resource is really not as complicated as it seems! Before you get started with any material for your children ages 6-12, I recommend reading a good theory album like the one from Keys of the Universe. This will give you a good understanding of the method so that you can better decide which materials will best suit your individual children.
This post is long and detailed because I wanted to make sure I explained everything for those who have asked me to! But if you feel too overwhelmed by all this information, head over to YouTube to watch my latest video where I explain these materials in depth, using the version that I created for our home environment. I’ll also be giving you just a quick look at our homeschool space to show you how we display these materials at our house.
Grammar Boxes: What are they exactly?
The Montessori Elementary Grammar Boxes are a series of…well, boxes, which are used to house and sort the Grammar Card Materials. The idea is that once a child has a lesson (often called a Key Experience) to teach a part of speech, the Grammar Boxes are used to give the child more practice with the concept by building phrases or sentences using the parts of speech they have learned. The series begins with box 2, which houses two parts of speech–article and noun. There are three types of boxes in the Grammar Boxes set, so let’s talk about the role of each.
The first type are called the Grammar Filling Boxes. These are 36 wooden boxes, color coded according to the part of speech that the child will focus on. Each box holds a progressively more complex set of phrase and sentence cards, plus the individual word cards used to rebuild those phrases and sentences (more about that later).
The set that I make is a little different. Instead of all those wooden boxes, I make 8 organic canvas envelopes, each with an embroidered number on the front, which tells you the number of parts of speech studied in that set. Each Envelope is like a small folder with labeled pockets inside. So for example, instead of 4 wooden boxes for the noun-article set (Box 2), I created one envelope with four pockets. I also make a canvas bin for these to neatly sit inside on your shelf.
The second type are the Sorting Boxes, which the child uses to sort out the card material when working with one of the Filling Boxes. There are 8 of these boxes, and each one adds a new part of speech. The first Sorting Box has two parts of speech (noun and article), so it's commonly called Box 2. There is no Box 1, because if there were, it would be the study of nouns alone, and we can't make phrases/sentences with only one part of speech.
Instead of large and bulky boxes, I created a set of Grammar Box Mats, which are made with organic cotton. These are just as beautiful as the wooden version, but can be stacked on the shelf, and take up much less space! I also make a printable version of the Grammar Box Mats for those who are on a tight budget.
The third type are the Command Boxes. These boxes are open on the top and hold Command Cards and Exercises, which are fun activities the child performs to practice each part of speech. In other words, they tell the child to act out a part of speech in some way. There are typically nine of these boxes: one for most parts of speech, and two each for adjectives and verbs.
Instead of wooden boxes, I made Command Box Envelopes. I left these open on the top to mimic the original box design, and used the traditional colors in vibrant organic wool felt. I also make a small canvas bin for these envelopes so that they are easy to display on your shelf. My children LOVE these.
Grammar Card Material: What does it include?
Now that we understand all the boxes, let’s talk about what goes inside of them! Each Filling Box contains a set of phrase/sentence cards AND cards which have the individual words contained in those phrases/sentences. The word cards each have a different color depending on which part of speech they are.
As I said above, after a child has had a Key Experience Lesson (this is a creative introductory lesson) on a part of speech, they use the Grammar Box materials to practice building sentences and phrases using the parts of speech that they know about. As they progress through the series of boxes, the exercises get more complex and include more parts of speech.
Need help teaching the key experience lessons? Check out my grammar tutorials! I show you exactly how I taught them to my children to give you an idea of what might work for yours.
My Grammar Card Material differs from the traditional version in two ways. The first is that although I based the cards on what is included in The Advanced Montessori Method, I modernized the language and changed all the references to Montessori Materials to common objects so that they are usable by everyone–not just fully equipped Montessori schools. So for example, if a Command Card asks a child to do something like move a part of the Pink Tower or Brown Stair, I changed the noun to something everyone will have on hand, like a wooden block.
The second way that my Grammar Card Material is different is that it is not as extensive. I cover all the same topics of study, but there are fewer exercises for each concept. I think you'll find you have plenty of material to work with (even this abridged version is 219 pages), and your children will love the blank forms I included for them to make up their own sentences and commands. (These were my children's favorite part; they loved customizing their set with beloved stuffed animals and other special objects from our home.)
Whether you choose to buy the traditional wooden materials or my fabric set, I hope that this information helps you to feel more confident about teaching grammar to your children at home! If you have questions, please leave me a comment!
If your child can count 0-10, understands the number symbols for these quantities, and can match the quantities to their symbols, they are ready to use my FAVORITE Montessori Math material–the Golden Bead Material.
Head on over to YouTube to watch my tutorial for this foundational lesson and let me know if you have any questions!
Hey Homeschoolers! Head over to my Youtube channel to watch my tutorial on how to introduce nouns to your children the Montessori way! This is the first tutorial in my continuing series, featuring videos that explain exactly how we taught our children all about the parts of speech.
Montessori grammar work is simple, hands-on and memorable.
I love the Montessori approach to grammar because the concise, simple and sensory-rich lessons really stick in a child’s mind. So get ready for a hands-on, not-boring, easy-to-understand, simple-to-teach grammar lesson that’s based on the scientific research of Dr. Maria Montessori. After watching my tutorial, I hope you’ll feel confident about how to present the materials to your children, and great about helping them to have a concrete understanding of grammar from a young age.
You can also find the written instructions for my tutorials via digital download in my shops on Teachers Pay Teachers and Etsy.
And here’s your handy, linked material list for the noun lessons:
Hey all you DIY Darlings! Although I make gorgeous 3-part card envelopes in my shop, I totally understand that these organic embroidered versions are not always in the budget. So I've created this sweet little felt pattern for those of you who want to make your own. Yes, I'm talking to you, even if you are not super comfortable with sewing yet!
You can find the pattern here, and here is a linked list (some links are affiliate) of the materials I used that you might not have on hand.
Hey Homeschoolers! Feeling overwhelmed about which materials you actually need to teach Montessori Math at home? I’m here to help!
Here is my list (with convenient links, some of which are affiliate links) containing all the materials we used in our home to complete the three-year primary work cycle. Keep in mind that you absolutely do not need to buy this all at one time. Observe your child and let that guide you to which materials you should purchase, and when.
Also, this list consolidates all the bead materials needed to save you space and money. If you buy these items, you should have everything you need for all the bead work without redundancy, but you won’t be able to have every work available at once on the shelves. I have found this to work perfectly for homeschooling my two.
Did I leave something out that your family loves? Feel free to tell me about it in the comments!
Homeschool Montessori math list for primary children ages 3-6, and those who are working at this level
Spindle boxes (you could also use a cardboard box or a fabric version–check Etsy–and wooden dowels)
Number cards (use these cards with the number rods also) and counters.
Hundred board–this link is the traditional wooden material for those who are looking for it. Some children really love this; mine did not. I recommend buying this printable pack instead, with fun extensions to make it more interesting. I ended up buying both after my children snubbed the wooden version because it wasn't their thing.
Decanomial Bead Bar Box–buy this and you will have all the beads you need for the colored bead work in primary–all in one box and for less money. And as a bonus, later you can buy the number tiles to use this as your checkerboard box for early elementary work. Not in the budget? You can also find a printable version of these beads from my girl Zainab over at Mathessori!
Elementary Negative Snake game (make sure it's not the primary version; you need all these elements for the addition and subtraction snakes in primary and, later, the negative snake games in elementary).
Complete Bead Material (this is the short and long chains with squares and cubes). We chose to forgo the bead cabinet to save money, but it's a nice extra if you can afford it and have space! You'll also want arrows, which you can easily DIY if you're inclined, or check on Etsy; I bet someone has already made them for you.
Golden Bead Material–you'll use this material to teach a concrete understanding of place value in our decimal system (based on 10). These are used for a very long time, alongside other work, to do addition, subtraction, multiplication and division even into elementary. It's a staple. Another option is to buy these as a kit from mathessori.com; she has amazing prices and lesson plans for you!
35 additional wooden or paper 1,000 cubes–you'll need these if you want to do the bird's-eye/45 layout–which my children did over and over and LOVED. (I chose to use just plain wooden craft blocks from a craft supply store for my extra 35 cubes–measure your cubes and find some to match at your local shop.)
Racks and Tubes (this set includes the multiplication and division board and is used later in elementary work, so if you're planning to continue after primary, you may as well buy it this way); or, if you're not sure, just buy the Multiplication and Division boards (it's less of a commitment and you can always sell them later!)
Every good Montessorian will tell you to allow your children to do things for themselves. After all, Maria Montessori famously said, “Never help a child with a task at which he feels he can succeed!”
You’ve probably seen the beautiful Instagram and Pinterest images that include things like low cupboards for dishes and step stools at sinks. These accommodations are a hallmark of the prepared Montessori environment, sure, but their implementation doesn’t always look like the perfectly curated images you find on social media platforms.
So while it’s wonderful to watch children doing things for themselves, if you’re a parent trying to follow the child at home, it can be disheartening to see these perfect pictures. The truth is that following the child is messy, and sometimes hard.
There is an actual mess that comes from letting children practice independence. And sometimes you need superhero-level restraint to let them try and fail and try again without intervention.
With this in mind, here are five of my tried-and-true pro tips for keeping your patience intact and the mess under control.
Tip One: Rubber Bands
OH MY GOSH, the water glasses. Every time a child got a glass of water in this house, they used a new cup. And of course they never finished the water either–just took a sip and left it on the counter, only to forget it was theirs and get a new one 15 minutes later. And yes, of course the children can and should be involved in washing said glasses; however, until your child is able to really notice things like dirt, you're still going to be rewashing all those cups.
So here is my brilliant solution. It actually works and it’s so simple and cheap; it’s rubber bands. I bought multiple colors of cheap rubber bands from an office supply store and store them in small dish next to the drinking glasses. Each person in our home picked a color and puts one on their glass when they get it out of the cupboard. At the end of the day, we slip off the band, rinse the glass and stick that baby in the dishwasher! Kids have friends over? No problem, everyone gets a color! No more asking whose glass is whose.
Tip Two: Hand Broom and Dustpan
Kids get crumbs everywhere. They just do. I bought a hand broom and keep it next to the practical life work. When the children are done eating or doing a messy work (sensory bins anyone?), one child gets the broom and sweeps up before the other wipes the surface with a damp cloth.
Tip Three: A Place for Everything
Take a cue from the minimalists and make sure that you don't have too many options available for your children, and that everything has a specific home to return to. If you want children to help you to keep the space tidy, they must be able to actually put things away. For example, if you want your children to empty the dishwasher, they will need to be able to put the dishes away without help. So if putting away dishes in your home requires a Jenga-like stacking experience, your child won't be able to help effectively. Do yourself a favor and share some of your excess so that you and your child can easily put away what you really use and need. Same goes for toys, books and clothing. If it's too difficult to put things away, you will be the one doing all the clean up.
Tip Four: Practice Patience When we are waiting for children to accomplish what we are able to more efficiently do ourselves, it can be hard to hang back and let them flounder. My advice is to practice. In our world of hurry, it doesn’t come naturally to slow down and wait for a child, but that is exactly what you need to learn to do. Teach yourself patience by practicing it. It’s okay to give instruction when needed, but let the child do what he or she is capable of doing without intervening, even if it takes longer–and it will. I often say aloud, “I’ll wait for you.” This simple phrase reassures my kids that I don’t mind waiting, and it reminds me of my goal–I want them to do it themselves and to experience the joy of accomplishment.
Tip Five: Plan More Time This relates to tip four, and it’s really essential to the successfully patient parent. If you need to leave the house or eat a meal or start a lesson at a specific time, plan to give your child more time than you need to do the same tasks. For children ages three to six years, I recommend doubling the time it takes for you to the same activities. For toddlers, triple it. So for example, if it takes you 15 minutes to put on your shoes and coat and gather your things to get in the car, you should expect your toddler to take 45 minutes and your 3-6 year old child to take 30. Seems like a long time doesn’t it? But that’s how long it realistically takes for you to patiently guide them and let them do it themselves.
I hope these tips are helpful to you! Happy learning and enjoy the journey!
Do you have a child who is reluctant to learn? Have you spent hours working on engaging lesson plans only to be completely, and repeatedly, rejected by this child?
And I do mean rejected, because that is exactly how it feels when you put your heart and soul into a lesson plan–one that you are so sure is going to make learning FUN– only to experience flat refusal from said child.
Whether you are a seasoned educator, a brand new teacher or a homeschooling parent, that sinking feeling is the exact same. But let me encourage you to set aside those feelings of rejection, and focus instead on the child who is doing the rejecting. What is going on with this child?
The answer may not be about your careful lesson plans at all. The rejection may be happening for a number of reasons, and the only way to solve the mystery is to go back to square one and observe the child for a while. Here are a few reasons that children resist learning, which I’ve discovered in my observations of this issue:
1. The child feels pressure to perform perfectly. 2. The child is very dreamy and is not interested in anything remotely related to what they perceive as “school.” (Sometimes this child is simply too young for formal education.) 3. The child has had a bad experience with education and feels defensive about learning.
Do any of these examples fit with the child in your life? If so, I have great news for you–I have a trick up my sleeve that works 99.99 percent of the time. I call them learning traps, and they are very effective at grabbing the attention of reluctant learners.
Setting the Trap Learning traps are strategically and sneakily placed, sticky learning materials, which are sure to grab the attention of a specific child. To set the trap, you must first really get to know this kid. What is interesting to her, what does he love, how does she think? Watch the child carefully for at least a week, and take notes. When do they engage? At what point do they tune out? Remember, you’re going to make this learning opportunity super sticky and irresistibly inviting, so don’t skip ahead of the observing step. If you do, you’re sure to fall into the pit of rejection again, and no one is happy in that pit.
Once you’ve collected your data you’re ready to start setting the trap. The next step is crucial. Do not set this activity up the way you ordinarily would, everything about this activity must be novel and interesting. So if you use the Montessori method, throw caution to the wind and put the trays away. Don’t set things up all tidy on the shelf and hope that the child will pick it up–that wasn’t working, remember? Traditional educators, don’t set up a learning center like you usually do. We have to think outside of our usual boxes for the sake of the child! Fear not, we are going to break all the rules (or at least the ones that are not working), but we are going to maintain our principles.
I can’t tell you exactly how to set your trap because it will vary for each child. However I can give you some pointers to help you get started. Here are some things that have worked for us in the past:
Place your learning materials in a highly visible area, where the child can’t help but run across it. It may take a few tries to find the perfect spot in your home or classroom. Don’t be discouraged, these things take time.
Sit quietly and do the work yourself, narrating as you go. Make obvious mistakes and puzzle over them–some children can’t resist being “teachers” and are very helpful to their poor learning guides who can’t seem to figure out the activity on their own.
Leave very detailed instructions and examples of how the work should be done, either written or with photo sequencing for the perfectionist child. Be extra careful never to praise this child for their perfect work, but instead praise them for the process.
Leave the activity half-way complete, some children love to finish what someone else has begun.
For the child who “hates school” make the invitation to learn low-pressure and playful. Very young children may just not yet be ready for structured, academic learning–go against the grain and be OK with this! Celebrate that this child is determined to protect her childhood. Make the trap extra sticky by making it play-based. Practical life skills and gross motor activities are often very appealing to a child with this mindset.
Take the work outdoors. Most children let down their guard when they are outside in nature! If you can incorporate natural items from your environment, even better.
And don’t forget to watch and take notes. If the child ignores your trap, you haven’t made it sticky enough. If they engage briefly, celebrate the small victory and capitalize on whatever part of the learning material they interacted with. Build on the small victories until you know just what will catch this child’s attention.
So there you have it, my fool-proof, sure-fire, sticky and strategic method for catching even the most reluctant learners. Don’t give up on these children, they need you to gently guide them into a lifelong love of learning. If you carefully observe and prepare, you’re sure to catch a little learner of your own.
Questions? Comments? Need help brainstorming solutions for your child? I’m happy to help. Leave a comment, email me or find me on facebook and IG @branchtobloom.
Watch Carefully It’s a new calendar year and this is the mid-way point for our school year. This is one of the times that I like to spend time really reevaluating our learning environment.
We are homeschooling our children, and I love it. But I haven’t always been a homeschooler. Before my children were born, I was fortunate to study and work in a Montessori school. And one of the most important lessons I learned as a Montessori guide was the power of observing the children.
Before you add anything beyond the basics to your classroom, I urge you to take a note from Maria Montessori and spend some time really getting to know what makes them tick. If you are a homeschooler like me, you’re probably thinking that you already know your children like the back of your hand. But if you take some time to quietly watch them, and take notes, you may be surprised!
Watch for This
Ask yourself what they are most excited about. Don’t just think about which subject areas they love, but focus on which parts of that subject excite them. What do they struggle with, and why?
And one other thing. Take the advice of Dr. Montessori herself and be as silent and still as possible while you observe. Don’t let the children know what you are doing, and don’t interrupt them. Now is not the time to help or correct them, just watch carefully. Unless they are in danger, stand down!
If you have smaller children get down on your knees and observe the learning environment from their perspective. Your goal is to make your space as easy for them independently access as possible. That may mean adding step stools or placing dishes on a lower shelf in the kitchen, for example. How about adding a hamper that they can carry to the washing machine themselves?
After you have spent at least a week on this careful study, sit down with your notes and start to brainstorm about how to create a rich environment that will spark their enthusiasm for learning. Then, carefully prepare the learning space in a way that is easy for them to navigate.
Your time observing is by no means finished. Continue to watch the way that your children interact with their learning environment and make adjustments accordingly. I keep a notebook in my apron pocket to jot down notes for myself. You could use a smart phone of course, as long as it doesn’t distract your children!
One of the hallmarks of a Montessori learning environment is its tidiness. To the casual observer it can appear cold, almost sterile. However it has a purpose, which is to prevent distraction and draw attention to the beautiful materials. Observe your children carefully and you will be able to tell if you need to remove some distracting items from your space as well. But keep in mind that your home doesn’t need to look like a Montessori school. Trying to perfectly emulate a school environment is neither practical nor optimal for homeschooling families.
I can tell you that our classroom at home is much different from the classroom that I previously worked in, but a home environment has advantages as well. Observe your children, and you will soon discover what works best for them! | <urn:uuid:b01743c7-c2e5-4f4d-af55-1932a5eb4c70> | CC-MAIN-2022-33 | https://branchtobloom.com/tag/montessori-homeschooling/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00297.warc.gz | en | 0.954973 | 5,564 | 3.453125 | 3 |
Started in 49 BCE, the civil war from the very beginning was characterized by offensive actions of Caesar, who brought down his forces from Transalpine Gaul, quickly marched along the eastern shores of Italy capturing every city he came across. The man from whom he encountered stronger resistance was Domitius Ahenobarbus, but he was also defeated. Nevertheless, Caesar failed to destroy Pompey’s army in Italy by blocking the port of Brundisium.
Within two months, the slayer of the Gauls became the ruler of Italy, with almost no loss! Meanwhile, in 49 BCE Caesar suffered a defeat in Africa. The Roman governor installed there, P. Attius Warus spoke against him. As well as that, Juba I, the king of Numidia also held a hostile attitude towards Caesarians. Caesar underestimated his opponent sending an unexperienced commander Scribonius Curio at the head of legions composed largely of former Pompeians against him. After initial success, Curio was ambushed by the Numidians and his two legions were destroyed. The defeat influenced the further course of the war and deprived the empire’s capital city of one of the sources of grain supply, which was compensated by the seizures of Sicily and Sardinia by the Caesarians.
In 49 BCE Caesar set off to Spain to fight against Pompey’s legates – Afranius and Petreius. Within 40 days, he forced their large and well-trained army to surrender. Another Pompeian legate in Spain, Varro also capitulated. After capturing Massilia and a short stay in Rome, the slayer of the Gauls crossed the Adriatic to land on the other side of the sea where Pompey had gathered 11 legions supported by numerous cavalry and fleet. After the failure to block his forces near Dyrrahium, Caesar’s troops withdrew to Thessaly. The next clash took place in August 48 BCE at Pharsalus, where Caesar defeated Pompey’s more numerous army. Roman aristocrats fighting for Pompey who survived the rout gathered the remnants of the army in Greece and in the Balkans and went with them to Africa. If the winner from Alesia had prevented this concentration, the war would have ended just after the battle of Pharsalus. However, Caesar chased Pompey who went to Egypt and was soon murdered by King Ptolemy’s ministers. Caesar became an arbiter in the dispute between Cleopatra and her brother Ptolemy XIII. The way in which he settled it led to a situation where he was besieged in the palace district of Alexandria in the winter of 48/47 BCE. It was troops from Cilicia and Syria under Mithridates of Pergamon and Antipater of Jerusalem that enabled Caesar to emerge from this difficult situation. After defeating the enemy army at the western arm of the Nile, the winner of Gaul bestowed royal power on Cleopatra. Then Caesar set off to Asia Minor, where Farnakes, a son of Mithridates the Great, took over the former kingdom of Pontus. In a short, five-day’s campaign, Caesar defeated his opponent and after settling the political issues in those areas, returned to Rome.
During his short stay in the capital, Caesar supervised the elections for the officials who were to hold power for the end of 47 BCE. and brought order to the capital. He issued orders to improve the economic situation, rewarded his supporters and showed mercy to the Pompeians who joined him. On top of that, he prepared an expedition to Africa to defeat his political opponents’s troops gathered there. At the end of 47 BCE he marched off to join the invasion army in Sicily. Troops which were to take part in operations in Africa, as well as supplies, were gathered at Lilybeum. Caesar lacked vessels, mainly transport ships, so he had to transport his troops to Africa in several stages. Within the first weeks he gathered 6 legions on the island – XXV, XXVI, XVIII, XXIX, XXX and V Alaudae, composed of Transalpine Gaul’s citizens who were granted Roman citizenship. Only the soldiers of the last unit were experienced. The rest of the legions, largely composed of Pompeian soldiers, had been formed during the war. They were accompanied by 2,000 cavalrymen. In addition to these units, only the necessary luggage was taken to ships, and most of the feed, food and animals was expected to be obtained in Africa. The expedition was not well prepared – Caesar’s staff did not have accurate information about suitable landing places. The situation was complicated further by windy weather. Caesar’s fleet departed from Lilybeum on December 25 and after three days some crews saw the shores of Africa. They landed near Hadrumentum – it was only the first throw of the army, composed of 3,500 infantry and 150 cavalry. In the following days most of the remaining transport ships together with the rest of invasion army joined this group. The same as during the landing in the Balkans in 49 BCE, at this stage of the campaign military operations of Caesar were facilitated by surprise effect. The enemy had not expected that his offensive could take place in the winter and that he could take rapid offensive actions just after going ashore. Pompeian legions in Africa were scattered across the country and it took some time to put them into one army. Reports estimated the number of these troops at 10 incomplete and inexperienced legions supported by numerous cavalry and 4 legions of king Juba organized in the Roman way. Additionally, this group of troops commanded by Q. Metellus Scipio had 120 war elephants at its disposal.
Battle of Ruspina
Initially Caesar avoided forays inland in search of food, because he feared enemy counterattack, Moreover, he did not want to lose touch with the coast where the rest of his troops was landing. The land was stripped of supplies, so the slayer of the Gauls sent requests for grain to be delivered from Sardinia and other provinces. After an unsuccessful attempt to capture Hadrumentum, the Caesarians set up a base in Ruspina and entered Leptis, admitted to the city by its residents themselves. After leaving 6 cohorts in the city, Caesar returned to his base. In order to obtain supplies, at the head of 30 cohorts he set off on expedition across the area, but 4.5 km. from his camp he came across an enemy. Caesar sent for 400 cavalry and 150 archers and moved forward with his legions himself. The enemy troops previously spotted consisted of 8,000 Numidian cavalry, 1,600 Gallic and Germanic horsemen and a great number of infantrymen. The man who took command of the whole group was the former Caesar’s subordinate Labienus, The sight of their narrow ranks confused Caesar’s scouts, who consequently considered them as infantry. Because he was afraid of being outflanked, he deployed his troops into a single line. He divided the few horsemen between the two wings, the archers took positions at the front. Labienus moved against Caesar’s forces – masses of his infantry attacked the few Caesarian riders, while in the center Pompeians’ light infantry kept attacking Caesar’s legionaries, then they were retreating, firing at the enemy at the same time. Caesarian forces were surrounded. To prevent his legionaries from moving too far away and the danger of being cut off from the rest of the army, Caesar forbade his infantrymen along the entire line to go further than five paces from every cohort’s main group. The pressure of Labienus’ men was great, and the legionaries repelling their attacks were mostly inexperienced and prone to nervous behavior. Caesar therefore tried to comfort them.
When his subordinates began to gather together in small groups and thus to expose themselves to the enemy’s more effective fire, the commander ordered them to loosen formation. He ordered every other cohort to turn its back on the rest of the army to fight back against the enemy’s cavalry at the back, while the rest of soldiers were to engage Pompeian infantry. Formed in such a way, the two lines of troops launched an attack on the enemy, throwing their pila at him. When the opponent was repulsed, Caesar ordered his troops to retreat to the camp. Unfortunately for Caesarians, at that moment a Pompeian commander Petreius at the head of 1,600 cavalry and large infantry forces appeared on the battlefield. Having grown in number Pompeians began to harass Caesar’s army again. Although both sides were already tired after the whole day of fighting, the slayer of the Gauls inspired his legionaries to attack the enemy once more, which led to the enemy’s being repulsed across the nearby hills. Owing to this, the Caesarians managed to withdraw into the camp, but they had not managed to ensure food supply for themselves. Caesar ordered to fortify the camp better than before and to deliver the food supply to his subordinates. He turned his ships crews into light infantry and ordered his artisans to produce javelins and missiles for slings. Meanwhile, the Pompeian army was joined by Metellus Scipio, who merged with them 1.5 km. from Caesar’s position. However, King Juba, Caesar’s personal enemy, was not able to join with them, because his lands were attacked by the troops of his rival Bokchus of Mauretania. The desertions in the ranks of Caesar’s opponents were mulitplying. Apparently they themselves behaved with brutality which alienated locals from Pompeians.
In the next days there were skirmishes between the two armies, but neither side risked a major battle. Metellus Scipio tried to provoke Caesar to take up the fight, placing his troops in front of his camp and when his opponent did not react for a long time, he approached his fortifications. He did not want to attempt to take them, seeing their powerful construction, towers manned with soldiers and equipped with artillery. The conqueror of Gaul was cautious – he withdrew army foragers and patrols distant from the main base and allowed outposts to retreat only in the face of strong enemy pressure. Soon after that, Caesar’s reinforcements composed of XIII and XIV legion, 1,000 light-armed infantry and 800 Gallic cavalry arrived from Sicily. They brought enough grain with them, which was enough to meet the most urgent needs. At the end of January 46 BCE Caesar led most of his troops out of the camp to launch an offensive. The column bypassed Ruspina, moving away from the enemy to suddenly turn back and capture the chain of nearby hills, thus threatening the Pompeians’ camp. They fought against Caesar for the hills he occupied, and the next day there was a fight in which a numerous Numidian cavalry under Labienus was defeated. Germanic and Gallic warriors who supported them, were left defenceless on the battlefield and many of them were killed.
Battle of Thapsus 6 April 46 BCE
Caesar set off to the city of Usitta, the main source of water supply for the enemy. However, he did not accept the challenge of battle made to him by Metellus Scipio, who formed battle formation ready for a fight. Meanwhile, Scipio also received support – King Juba’s troops with the strength of three legions organized in Roman fashion, 800 heavy cavalry, masses of light Numidian cavalry and light infantry. Both sides conducted military operations around Usitta to take control of the mountains situated between their positions. Pompeians failed in their attempt to ambush the vanguard of Caesar’s army because some of their soldiers were not disciplined. Caesar’s men managed to set up camp on the hill. They continued to fight enemy forces and began to build fortifications to cut off Usitta from the outside world and hinder the opponent’s movements. Meanwhile, more reinforcements with the strength of IX and X legion joined them, increasing Caesar’s army to 10 legions, half of which were veteran troops. Also, Pompeian deserters came to his camp. Caesar skillfully prompted chiefs of Gaetuli to revolt against Juba, which reduced his contingents supporting Labienus and Scipio. When his fortifications around the city were almost finished, both armies stood a short distance opposite each other. However, there was no general battle apart from the clash of cavalry and light infantry of both sides.
Meanwhile, Caesar received information about the approach of his next legions to the African coast. This information also reached the Pompeians, who destroyed or captured some of the ships protecting the transport of these troops at the final stage of the journey. Having been informed about this, Caesar set off to the coast and defeated the enemy fleet. However, it is possible that the information about the reinforcements for Caesar was false, and VII and VIII legion reached the main army only after the campaign was decided. Then the slayer of Gaul sent two legions in search of food which was buried underground according to the local tradition. When he found out from deserters that Labienus planned to set an ambush on his men, he sent other groups of soldiers along the same route for several days, and then sent 3 legions of veterans supported by cavalry against the expected ambush. Despite the destruction of the ambush set by enemy, Caesar still lacked food for his growing army. Unable to force the opponent to fight in conditions favorable to him or to quickly capture Usitta, he broke camp and marched around the city of Aggar. From there he sent resupply groups that imported significant amounts of barley. After an unsuccessful attempt to attack the enemy’s provisioning units. Caesar ordered a retreat, harassed by Numidian cavalry. He sent most of his cavalry to the back of his column to protect the retreat. Thanks to this, he finally managed to reach a place suitable for setting up a camp. After some time he set up a battle line, but when the opponent was reluctant to fight, Caesar moved on. Caesar ordered units taken from every legion and composed of 300 soldiers each to maintain battle formation during the march and help their own cavalry repel the attacks of the Numidians harassing his column.
Soon the city of Sarsura was captured along with large reserves of grain left there by the Pompeians. When the next city he encountered proved impossible to be taken in a short siege, Caesar returned to Aggar, where his legions set up camp. It was not possible to force the enemy to accept battle – the Pompeians did not want to leave their favorable position on the hill. On the 4th of April 46 BCE, Caesarian army set out very early in the morning and after covering 25 km. they reached the coastal city of Thapsus. Scipio, who marched after them, divided his army into two camps set up 12 km. away from the city. Thapsus lay on a cape separated from mainland by two isthmuses, running on both sides of a vast lagoon, hence access to the city led through two narrow passages. Caesar blocked the route of the march most convenient for the enemy, building a fort in the right place. Therefore, his opponent moved to Thapsus through the northern isthmus, which is a strip of land only 2 km wide. On the morning of the 6th of April, Scipio’s army stood outside the city, opposite Caesar’s legions. The rest of the Pompeian army, which was led by Afranius and Juba, was stationed elsewhere to divert Caesar’s attention from Scipio. Caesar sent two legions of recruits to besiege the city – the rest of the army formed into the classic triplex acies (an array of three lines) was set up opposite Scipio’s army. Flanks were formed by experienced legions – IX and X took the right wing, XIII and XIV held the left. On both flanks these units were accompanied by slingers and archers. Legion V Alaudae was divided between both wings of the army where some of its cohorts formed the fourth line – they were to provide protection, especially against enemy war elephants. The center was occupied by three inexperienced legions whose numbers are not recorded by our sources. The cavalry stood on both flanks, although due to the narrowness of the battle ground its possibilities of maneuver were retricted. When it comes to the number of enemy troops, we do not have reliable information on that nor is there any precise data on their deployment during the battle. They were probably arrange according to the Roman military tradition – into a three-line formation with their numerous cavalry on the wings. War elephants were probably deployed in front of the flank troops. The terrain favored Caesar – the narrow area of the battlefield forced him to form a tight army formation convenient for his veterans.
Already at the beginning of the battle the winner from Alesia sent some of his ships to go through the channel to the rear of the enemy. Caesar’s legionaries were enthusiastic about the upcoming battle and willing to face the enemy as soon as possible. Their officers urged the commander to give orders to attack, but he consistently refused their requests, considering such action as inappropriate. Meanwhile, on the right wing, soldiers forced the trumpeter (tubicen) to give the signal to attack. Despite the insistence of centurions trying to stop this insubordination, Caesarian cohorts on the right wing spontaneously attacked the enemy. On seeing this, Caesar finally issued the battle slogan Felicitas and threw himself into the ranks of the enemy. The attack of cohorts on the right flank immediately broke the line of the Pompeian forces, who were forced to retreat. Plutarch presents a slightly different version of events, informing that Caesar had an epilepsy seizure, which was supposed to cause him to stop commanding the army. Other sources, however, do not mention similar attacks of the commander’s disease on the battlefield. The attack of war elephants in the left wing of the Pompeian army ended in defeat, because Caesar’s skirmishers in this section of the battle line drove the animals away by throwing missiles at them what caused the startled animals to trample their people. Scipio’s left wing collapsed and fled. The battle turned into a slaughter.
Caesarians were killing surrendering enemy soldiers because they wanted to end the war as soon as possible. Some officers of the victorious army, advocating forbearance towards the enemy, fell victim to their subordinates. This cruelty contrasts with Caesar’s gentleness towards the opponents at the beginning of the civil war. His policy in this matter was consistent – he forgave those who opposed him for the first time. However, when they continued to fight against him and were taken prisoner again, they could no longer count on his mercy. It is estimated that 10,000 Pompeans were killed in the battle, Caesarian losses were negligible – over 50 soldiers! Many officers of Scipion’s army managed to escape from the battlefield, but most of them would lost their lives in the coming weeks anyway. Some of them were sentenced to death by Caesar – Afranius and Faustus, son of Sulla at the request of the soldiers themselves. Even Caesar’s relative Lucius was executed. Afranius and Juba fought to death and life and the winner was to commit suicide – the man who survived the duel was Afranius. The Pompeian commander-in-chief at the battle of Thapsus Q. Metellus Scipio committed suicide on board of his ship when it was captured by Caesar’s fleet. Labienus was one of the few survivors from the battle who fled to Spain. Caesar’s bitter enemy – Cato at the news of defeat committed suicide because he refused to live at the mercy of the winner. His death began the fashion of honorable suicide committed by Roman nobles.
The importance of the battle
Thapsus was the next stage in the civil war, having a key impact on the further history of the Roman Empire. The battle meant the loss of many senior officers and leaders in the Pompeian party. After completion of the battle the process of extermination of Pompeian elites would find its final stage at Munda one year later. The battle itself is a testimony to the great atmosphere prevailing in Caesar’s army – soldiers eager for fight decided its outcome. As in previous battles, it turned out that the army’s fighting spirit is one of basic factors of success on the battlefield. It should be noted that at the battle of Thapsus these moods could also be due to the desire to “finish off” the enemy to end the conflict quickly. | <urn:uuid:3dedf2a3-0917-4fe8-93ed-825e0e2f2d26> | CC-MAIN-2022-33 | https://imperiumromanum.pl/en/battles/battle-of-ruspina-and-thapsus/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00097.warc.gz | en | 0.983958 | 4,297 | 4.0625 | 4 |
Overview of Striatal Circuits
The striatum is a major brain nuclei in the basal ganglia (BG) system. The BG consists of set of corticobasal ganglia-cortical loops, which are a series of parallel projection loops that convey limbic, associative, and sensorimotor information. In this circuit, cortical neurons send input to striatum, which conveys output through various BG nuclei, relaying information to thalamus and then ultimately back to cortex. The striatum consists of the dorsal striatum (dStr, caudate and putamen in humans), which regulates actions and habits, and the ventral striatum (a.k.a. nucleus accumbens [NAc]), which is involved in motivation and reinforcement. These striatal areas have distinct projections through the BG output nuclei, consisting of two distinct pathways (often referred to as the direct and indirect pathways) and they were originally proposed to play antagonistic but balancing roles on BG output and behavior. The two pathways can be resolved at a cellular level in the main projection neurons of the striatum. The projection neurons, which comprise 90%–95% of all neurons in the striatum, are medium spiny neurons (MSNs), which are divided into two morphologically identical and heterogeneously distributed cell types. The MSNs in striatum are subdivided into two subtypes based on their axonal targets. MSNs that are considered part of the direct pathway project to globus pallidus internal (GPi), ventral pallidum (VP), and midbrain regions including substantia nigra (SN) and ventral tegmental area (VTA); whereas the indirect pathway MSNs project to the globus pallidus external (GPe) and VP ( Fig. 9.1 ). However, it is important to note that MSN projections from dStr appear more segregated from those in NAc. The dStr MSN subtypes have distinct projections with minimal overlap to BG nuclei, whereas the NAc MSN subtypes both send input to VP. Thus this ventral BG circuit does not quite represent the classical direct and indirect pathways (see Fig. 9.1 ). Due to this overlap in NAc MSN subtype projections, we refer to these two neuron subtypes based on their enrichment of dopamine receptors 1 versus 2, with D1-MSNs being part of the classical direct pathway and D2-MSNs part of the indirect pathway. Although both D1-MSNs and D2-MSNs in NAc project to VP, the NAc D1-MSNs also send projections to classical direct pathway nuclei including GPi, SN, and VTA (see Fig. 9.1 ).
Along with their enrichment of D1 versus D2 receptors, the two MSN subtypes are further distinguished by their differential expression of several other genes, most notably G-protein–coupled receptors and neuropeptides. D1-MSNs express muscarinic receptor 4, substance P, and dynorphin, whereas D2-MSNs express adenosine receptor 2a, G-protein–coupled receptor 6, and enkephalin ( Fig. 9.2 ). Through the two BG pathways the D1-MSNs versus D2-MSNs have been demonstrated to display differential behavioral output. Activity in the D1-MSNs is implicated in movement initiation, reinforcement, and reward seeking, whereas activity in the D2-MSNs antagonizes the D1-MSN pathway, thus inhibiting movement, promoting punishment or avoidance, and inhibiting reward seeking. a
a References 23, 25, 39, 46, 48, 52, 55.However, there are some studies that support a role for coordinated activity in these two neurons in actions and natural reward behaviors. Studies on animal models of addiction and depression have demonstrated distinct roles of these MSN subtypes in striatal circuits in these motivational diseases. This chapter discusses these current findings and the overlap between these striatal circuits in addiction and depression.
Striatal Circuit Activity in Animal Models of Addiction and Depression
Striatal MSN Subtype Activity in Addictive Drug Exposure and Behavior
Much of the evidence for the differential roles of D1-MSNs and D2-MSNs in addiction is based on studies examining cocaine-induced behaviors in rodents, using neuron-subtype–specific techniques to activate or inhibit these MSN subtypes. Enhanced activity in D1-MSNs underlies the reinforcing and sensitizing effects of cocaine. Likewise, blocking activity in D2-MSNs results in similar outcomes. b
b References 8, 11, 23, 39, 52, 70.The first insight into MSN-subtype participation in psychostimulant-mediated behavior involved NAc D2-MSN ablation. Ablating these MSNs increased psychostimulant-induced conditioned place preference without altering normal locomotion. Subsequent studies demonstrated an opposite role for D1-MSNs versus D2-MSNs in psychostimulant-mediated behavior. Optogenetic stimulation, using the blue light–activated channelrhodopsin-2 (ChR2) of NAc D1-MSNs enhances the rewarding properties of cocaine, and NAc D2-MSN optogenetic stimulation reduces this outcome. In addition, after repeated exposure to cocaine the optogenetic activation of NAc D1-MSNs resulted in enhanced locomotor activity. This implicates that cocaine primes these MSN subtypes to display a sensitized response to other stimuli, in this case artificial activation. The selective blockade of neurotransmission in D1-MSNs reduces cocaine-induced locomotor sensitization and conditioned place preference. Conversely, using optogenetics or chemogenetics, the latter using designer receptor activated by designer drugs (DREADDs), inhibition of D1-MSNs or activation of D2-MSNs reduces psychostimulant-induced locomotor sensitization, while the inhibition of D2-MSNs increases this behavior. Furthermore, chemogenetic inhibition of D2-MSNs, in cocaine self-administration, enhanced the motivation to obtain cocaine, whereas optogenetic activation of D2-MSNs suppressed cocaine self-administration. Finally, a recent study using in vivo fiber photometry with the calcium indicator, gCamp6f, confirmed the MSN subtype activity manipulation studies described above. This study showed that acute cocaine exposure enhanced D1-MSN and suppressed D2-MSN activity, and that cocaine-induced D1-MSN activity is required for formation of cocaine–context associations. In addition, MSN subtype–specific signaling encodes contextual information about the cocaine environment such that increased D1-MSN activity precedes entry into a cocaine-paired environment, while decreased D2-MSN activity occurred after entering the cocaine-paired environment. Finally, inhibiting this D1-MSN calcium signal by DREADD inhibition blocked the cocaine-conditioned preference. Altogether, these findings show that a circuit imbalance of these D1-MSN versus D2-MSN pathways occurs upon cocaine exposure, leading to an enhanced D1-MSN pathway, thus promoting cocaine-seeking, intake, and sensitization behaviors ( Fig. 9.3 ).
Electrophysiology studies examining psychostimulant-induced plasticity in the MSN subtypes corroborate with the activity studies described earlier. Excitatory synaptic potentiation occurs at D1-MSNs after repeated cocaine exposure or cocaine self-administration. Of interest, mice that display poor cocaine intake display enhanced excitatory synaptic input at D2-MSNs. Consistent with this, increased dendritic spine remodeling occurs in D1-MSNs after repeated injections (i.p) of cocaine. Evidence demonstrates that the increased spines in D1-MSNs are thin or immature spines, characterized as silent synapses, since they consist of N -methyl- d -aspartate (NMDAR) receptors but lack α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) receptors. The silent synapses, which are typical throughout the immature brain, can either retract or develop into fully functional synapses to induce new neural circuits, after periods of cocaine withdrawal. It is likely that these new neural circuits mediate enduring behaviors in response to cocaine, such as relapse behavior. Future studies examining MSN subtypes in relapse behavior will be important for understanding their role in the long-term effects of cocaine and the transition from early drug taking to the addictive state. Finally, examination into MSN subtype output in the VP, the one region receiving dense innervation from both MSN subtypes, demonstrates potentiated output of D1-MSNs but weakened output of D2-MSNs after repeated cocaine exposure. This study further showed that optogenetic depotentiation of D1-MSN output to the VP abolished cocaine locomotor sensitization; however, restoring D2-MSN transmission to VP did not alter this behavior.
As described in the preceding text, much of the data examining the striatal MSN circuits in drug abuse are from studies performed with cocaine. However, a small number of studies examine striatal circuits in morphine-mediated behaviors. Similar to the cocaine studies, optogenetic activation of NAc D1-MSNs enhanced morphine-conditioned place preference, whereas optogenetic activation of NAc D2-MSNs blunted this behavior. Of interest, examination of plasticity in these MSNs reveals a different outcome compared to cocaine, since silent synapses are induced in D2-MSNs after repeated morphine exposure. Finally, examination of analgesic tolerance demonstrated that optogenetic activation of D1-MSNs facilitates the development of morphine tolerance, whereas activation of D2-MSNs did not affect the development of tolerance. Additional studies examining these MSN subtypes in opiate-mediated behaviors are needed to uncover the mechanisms accounting for differences between cocaine and morphine.
Striatal MSN Subtype Activity in Depression-Like Behavior
There are sparse studies examining activity in MSN subtypes in animal models of depression. In contrast to the cocaine studies described earlier (which show enhanced excitatory synaptic input onto D1-MSNs and reduced input onto D2-MSNs), stress models, showing depression-like behavior, display reduced excitatory input onto D1-MSNs and/or enhanced input onto D2-MSNs. The D2-MSN data are in line with those of previous studies demonstrating enhanced excitatory input onto MSNs, which correlates with increased mushroom-shaped spines in MSNs, using an animal model of stress-induced depression, chronic social defeat stress. Use of optogenetics or DREADDs in mice that underwent chronic social defeat stress uncovered a bidirectional role for MSNs in depression-like behavior. Repeated high-frequency optogenetic activation of D1-MSNs, in mice that display depression-like behavior to chronic social defeat stress, resulted in an antidepressant phenotype. In contrast, repeated DREADD inhibition of D1-MSNs in mice displaying resilient behavior (lack of depression-like behavior) after chronic social defeat stress shifted these mice to a susceptible, depression-like state. Altering activity in D2-MSNs after stress did not alter behavioral outcomes to chronic social defeat stress. However, priming D2-MSNs with repeated activity prior to stress induced a depression-like outcome to a subthreshold social defeat stress. These data are in line with the BG model of activity in D1-MSNs promoting reward, while activity in D2-MSNs promotes avoidance or punishment.
Molecular Mechanisms in Striatal Circuits in Addiction
MSN Subtype Signaling Mechanisms in Addictive Drug Exposure and Behavior
D1-MSNs and D2-MSNs display different molecular adaptations in response to cocaine. This potentially occurs via differential signaling through dopamine receptors. Enhanced dopamine levels, occurring with exposure to drugs of abuse, can positively modulate excitatory glutamatergic input in D1-MSNs through activation of D1-receptor signaling via G s or G olf , which stimulate adenylyl cyclase, leading to increased protein kinase A (PKA) activity. In contrast, dopamine negatively modulates D2-MSNs through D2-receptor signaling via G i and G o , which inhibit adenylyl cyclase causing decreased PKA activity. This can lead to differential phosphorylation of the dopamine- and cAMP-regulated neuronal phosphoprotein (DARPP-32) in MSN subtypes after cocaine exposure. As a result, the deletion of DARPP-32 from D1-MSNs decreases cocaine-induced locomotion, while its deletion from D2-MSNs increases locomotion. In addition, brain-derived neurotrophic factor (BDNF) signaling has been shown to exert opposing roles on MSN subtypes. Deletion of the BDNF receptor, tropomyosin receptor kinase B (TrkB), from D1-MSNs increases cocaine-conditioned place preference and locomotor sensitization, while TrkB deletion from D2-MSNs reduces these behaviors. Of interest, the D2-MSN results are consistent with those of previous studies that used non-cell-type specific deletion of TrkB from NAc, demonstrating that the main effects of cocaine on BDNF might be occurring through D2-MSNs. However, assessment of morphine-conditioned place preference in these TrkB MSN subtype lines showed enhanced morphine place preference with deletion in D1-MSNs but no altered behavior with deletion in D2-MSNs. Investigation of dopamine- and BDNF-signaling targets, with repeated cocaine exposure, demonstrated activate extracellular signal-regulated kinase (pERK) associated with a downregulation of its direct nuclear target mitogen- and stress-activated kinase-1 (pMSK1) in D1-MSNs exclusively. Finally, the cell-type-specific silencing of p11 (S100A10), a protein linked with the transport of neurotransmitters and receptors to the plasma membrane, on D1-MSNs increases cocaine-conditioned place preference.
Transcription Factors in MSN Subtypes in Addictive Drug Exposure and Behavior
Overall, molecular adaptations occur in both MSN subtypes in response to drugs of abuse, such as cocaine. However, many experiments highlight major molecular alterations in the D1-MSN pathway, confirming its predominant role in cocaine-mediated behaviors. This predominant role of D1-MSNs has been well documented with immediate early gene transcription factors. Early studies examining immediate early genes provided the first insight into how the MSN subtypes respond to psychostimulants. Previous studies, in rats, demonstrate c-Fos induction in both MSN subtypes when a psychostimulant is given in a novel environment. Using D1-GFP and D2-GFP reporter mice, researchers demonstrate that c-Fos induction by cocaine in a novel environment occurs primarily in D1-GFP MSNs throughout striatum with a small induction in D2-GFP MSNs in dorsal striatum. Isolation and molecular profiling of active striatal neurons in context-dependent cocaine locomotor sensitization, using a c-Fos reporter rat line, demonstrated that these neuronal ensembles express both D1-MSN and D2-MSN markers. However, they express higher levels of a D1-MSN enriched gene, dynorphin, and lower levels of D2-MSN enriched genes, D2 and adenosine 2A receptor, suggesting a greater number of D1-MSNs in this population. c-Fos deletion in D1-MSNs, blunted cocaine-induced locomotor sensitization and MSN dendritic spine formation. Of interest, c-Fos deletion in D1 neurons did not alter cocaine-conditioned place preference but it did prevent the extinction of this contextual association. These data illustrate a dynamic role for c-Fos induction in D1-MSNs; however, one cannot rule out the differential behavioral effects as being mediated by other brain regions that express the D1 receptor.
The immediate early gene (IEG), FosB, has been well studied in MSN subtypes in addiction. FBJ murine osteosarcoma viral oncogene homolog B (FosB) is induced in striatum by acute cocaine, but the long-lasting ΔFosB, generated from the FosB primary transcript, persistently accumulates after chronic psychostimulant exposure. This long-lasting induction of ΔFosB by cocaine is dependent on D1-receptor signaling, and use of a D1-GFP reporter lines confirmed that ΔFosB induction occurs primarily in D1-MSNs after chronic cocaine. Consistent with these findings, FosB messenger RNA (mRNA) was induced in D1-MSNs with acute and chronic injection (i.p.) of cocaine using a ribosomal tagging approach.
Initial studies using a transgenic line with preferential overexpression of ΔFosB D1-MSNs resulted in enhanced locomotor and conditioned place preference responses to cocaine. In addition, this D1-MSN ΔFosB line shows facilitated acquisition to cocaine self-administration at low-threshold doses and enhanced effort to maintain self-administration of higher doses on a progressive ratio schedule of reinforcement. These behaviors are occurring potentially through enhanced structural plasticity in D1-MSNs, since adenoassociated virus (AAV)–mediated ΔFosB overexpression in NAc enhances MSN structural plasticity. Use of Cre-inducible herpes simplex virus (HSV) to overexpress ΔFosB in D1-MSNs in the NAc of D1-Cre mice confirmed the enhanced cocaine-mediated behavioral responses and showed that ΔFosB alone can enhance immature spine formation and reduce AMPAR/NMDAR ratios in D1-MSNs. These structural and synaptic plasticity changes by ΔFosB are an indication of enhanced silent synapses, which are characteristic of cocaine effects on D1-MSNs. . Thus, ΔFosB may set the stage for long-term cocaine abuse by regulating the establishment of silent synapses in D1-MSNs during the initial stage of drug exposure. Finally, investigation of ΔFosB overexpression in D2-MSNs had no effect on cocaine-induced behaviors or spine formation but did enhance AMPAR/NMDAR ratios, suggesting that ΔFosB in these MSNs might play a role in mature spine formation. A mechanistic role of ΔFosB in promoting behavioral and structural plasticity after cocaine has been examined. The D1-MSN ΔFosB line displayed enhanced expression of GluR2 in NAc, and GluR2 overexpression in NAc enhances cocaine conditioned place preference. In addition, ΔFosB increased CAMKIIα gene expression in NAc of the D1-MSN ΔFosB line and the enhanced cocaine-mediated behavioral and structural plasticity effects of ΔFosB in NAc are CAMKIIα dependent. ΔFosB also transcriptionally regulates a number of genes in NAc by chronic cocaine. Future studies using neuronal subtype chromatin immunoprecipitation to examine FosB enrichment on target genes can provide improved understanding into the MSN subtype transcriptional role of ΔFosB in cocaine action.
ΔFosB induction has been examined in other drugs of abuse including THC, ethanol, and opioids. Similar to the cocaine studies, repeated THC and ethanol leads to increased ΔFosB in D1-MSNs. Of interest, chronic morphine and heroin self-administration resulted in increased ΔFosB in both MSN subtypes. This could reflect induction in D1-MSNs in response to the rewarding effects of morphine and induction in D2-MSNs during the aversive, withdrawal phase of opioids. However, the D1-MSN-specific ΔFosB line displayed enhanced place preference for morphine, reduced morphine analgesia, and accelerated morphine tolerance, whereas a D2-MSN-specific ΔFosB line did not show any altered behavioral responses to morphine.
Another transcription factor examined is the early growth response (Egr) family member, Egr3. A modest decrease in Egr3 in total NAc tissue was observed after repeated cocaine exposure and cocaine self-administration. However, use of the RiboTag methodology to isolate ribosome-associated mRNA from each MSN subtype, demonstrated an enrichment of Egr3 mRNA in D1-MSNs, with a decrease occurring in D2-MSNs. Overexpressing Egr3 in D1-MSNs and knocking down Egr3 in D2-MSNs enhanced cocaine-conditioned place preference and locomotor sensitization, while reducing Egr3 in D1-MSNs and enhancing it in D2-MSNs blunted these behaviors, confirming the opposing role of Egr3 in both MSN subtypes. These results further support the predominant role for D1-MSNs in cocaine-mediated behaviors; however, the cell-type-specific study demonstrated that the molecular changes in D2-MSNs also account for critical aspects of the responses to cocaine. Taken together, the above studies show that changes in transcription factor regulation are pivotal in cocaine-related behaviors.
MSN Subtype Epigenetic and Posttranscriptional Modifications in Addictive Drug Exposure and Behavior
In recent years, a growing number of studies have evaluated epigenetic changes induced by cocaine. Repeated cocaine exposure can induce stable changes in gene expression that may underlie addiction. However, only a few studies examined cell-type-specific epigenetic changes after cocaine exposure. For instance, in D1-GFP versus D2-GFP mice, an increase in phosphorylation of histone 3 on Ser-10 was found after acute and chronic cocaine injections (i.p). Using ribosome-associated mRNA profiling, a recent study found cocaine-induced decrease of G9a (a repressive histone methyltransferase) in both D1- and D2-MSNs. However, developmental knockout of G9a from D1-MSNs decreased cocaine-conditioned place preference and locomotor sensitization, while knockout from D2-MSNs had the opposite effect. Surprisingly, the G9a knockout from D2-MSNs induced a partial-phenotypic switch, making D2-MSNs more similar to D1-MSNs, providing insight on the epigenetic mechanisms, as well as potential developmental mechanisms contributing to cocaine abuse. Recently the histone arginine methylation enzyme, protein-R-methyltransferase-6 (Prmt6), was examined in MSN subtypes after repeated cocaine exposure. Ribosome-associated mRNA profiling revealed a downregulation of Prmt6 in D2-MSNs after repeated cocaine, which was consistent with reduced Prmt6 levels in total NAc in this condition. In contrast, Prmt6 was upregulated in D1-MSNs. The decreased Prmt6 levels led to a reduction of the repressive mark H3R2me2a on the Src kinase signaling inhibitor 1 ( Srcin1 ) gene, which resulted in increased Scrin1 protein in NAc after repeated cocaine. Overexpression of Prmt6 in D2-MSNs or total NAc enhanced cocaine-conditioned place preference, while overexpression in D1-MSNs reduced this behavior. Consistent with reduced Prmt6 resulting in increased Srcin1, the overexpression of Srcin1 in D2-MSNs or total NAc reduced cocaine-conditioned place preference, with opposite effects observed with D1-MSN overexpression. These results suggest that the effects of reduced Prmt6 in D2-MSNs counteracts the rewarding effects of cocaine through enhancement of Srcin1 in these neurons. Srcin1 is an endogenous inhibitor that constrains the activity of the Src family of protein tyrosine kinases. Further examination in this pathway in D2-MSNs could uncover improved information into the role of D2-MSNs in cocaine action. Other work has shown cell-type-specific and time-dependent epigenetic modifications after cocaine. For instance, H3K5 acetylation was steadily increased in D1-MSNs while only transiently in D2-MSNs, whereas H3K14 increased after acute cocaine in D1-MSNs and after chronic cocaine in D2-MSNs. This type of study further points out the importance of examining cell-type-specific patterns of histone modifications, since epigenetic changes may differ with drug-exposure time and have distinct effects on gene transcription.
In addition to transcriptional and epigenetic studies in cocaine abuse, researchers are beginning to examine posttranscriptional adaptations. Reduction in Argonaute 2 (Ago 2), which plays a role in micoRNA (miRNA) generation and miRNA gene silencing, in D2-MSNs reduces the motivation to self-administer cocaine. Furthermore, this study demonstrated a number of miRNAs enriched in D2-MSNs after cocaine exposure that are also downregulated in Ago 2–deficient striatum. Collectively, identifying transcriptional and posttranscriptional changes, such as chromatin modifications and miRNA functions, in striatal circuits in cocaine addiction will be important for better understanding of the complex molecular networks underlying addiction. | <urn:uuid:eb33635c-b914-47b1-a10b-1b1a9465939b> | CC-MAIN-2022-33 | https://basicmedicalkey.com/overlapping-striatal-circuits-and-molecular-mechanisms-in-rodent-models-of-addiction-and-depression/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572304.13/warc/CC-MAIN-20220816120802-20220816150802-00497.warc.gz | en | 0.893075 | 5,331 | 3.046875 | 3 |
FIRE

DEFINITIONS:

Structure Fire – a fire of natural or human-caused origin that results in the uncontrolled destruction of homes, businesses, and other structures in populated, urban or suburban areas.

Wildland Fire – a fire of natural or human-caused origin that results in the uncontrolled destruction of forests, field crops and grasslands.

Wildland-Urban Interface – a fire of natural or human-caused origin that occurs in or near forest or grassland areas where isolated homes, subdivisions, and small communities are also located.

BACKGROUND INFORMATION:

Skagit County experiences three types of fire threats: structure fires, wildland fires, and wildland-urban interface fires. Structure fires do not typically pose a great threat to the community except when the fire spreads to other nearby structures and quickly expands to a size that could threaten large numbers of people and overwhelm local fire resources.

Wildland fires are a natural part of the ecosystem in Washington State. However, wildfires can present a substantial hazard to life and property. Statistics show that, on an annual basis, an average of 905 wildland fires burn 6,488 acres, resulting in a resource loss of $2,103,884 in Washington State. Most wildland fires are started by human causes, including discarded cigarettes, the discharge of fireworks, outdoor burning, and deliberate acts of arson. Many of these fires are extinguished in their initial stages, while still less than one acre in area. Depending upon temperature, wind, topography, and other factors, wildland fires can spread rapidly to over 100,000 acres and may require thousands of firefighters working several weeks to extinguish.

[Photo: Jordan Creek Fire – Skagit County, 1998. Photograph by Randy Warnock]

One challenge Skagit County faces regarding the wildfire hazard stems from the increasing number of homes being built in the urban/rural fringe (known as the wildland-urban interface) as well as in the industrial forest. Due to a growing population and the desire of some persons to live in rural or isolated areas or on forested hillsides with scenic views, development continues to expand further and further into traditional forest resource lands.

Wildfires occur primarily in undeveloped areas; these natural lands contain dense vegetation such as forest, grasslands or agricultural croplands. Because of their distance from firefighting resources and personnel, these fires can be difficult to contain and can cause a great deal of destruction. Lightning and human carelessness are the primary causes of wildland fires. Fortunately, due to the proximity of advanced fire protection capabilities and our normally wet climate, large-scale wildland fires are rare in Skagit County.

On occasion, individual fires will spread and merge together to form a firestorm covering vast amounts of area. The involved area becomes so hot that all combustible materials ignite, even if they are not exposed directly to flames. As the fire becomes larger, it has the capacity to create its own local weather as superheated air and hot combustion gases rise upward over the fire zone, drawing surface winds from all sides, often at velocities approaching 50 miles per hour.

[Photo: Jordan Creek Fire – Skagit County, 1998. Photograph by Randy Warnock]
In exceptionallylarge events, the rising column of heated air and combustion gases carries enough soot andparticulate matter into the upper atmosphere to create a locally intense thunderstorm therebyincreasing the possibility of additional lightning strikes.HISTORY:Washington State has experienced several disastrous fire seasons in recent years. In 1994, aseries of dry lightening strikes created numerous fires in the north-central portion of the statewith major fires occurring in near Lake Chelan, Entiat, and Leavenworth. During the fireseasons of 2001 and 2002, lightning again caused numerous fires in Washington and Oregon.In some cases, two or more fires merged together thereby overwhelming resources andcreating fires so large and complex that some were not fully extinguished until cooler, dampautumn weather moved into the region.Although Skagit County typically has numerous fires that occur in forest lands each year, almostall of these fires are extremely small (less than .2 acres in size) and remain so due to therelative high moisture content in fire fuels. The majority of these fires involve minimalresources and response costs are typically less than $500 per fire.According to Washington State Department of Natural Resources records, 638reported wildland fires occurred in Skagit County from 1970 through 2001. Thelargest of these fires (the Jordan Creek Fire) occurred near the community of Marblemount inSkagit County Natural Hazards Mitigation Plan September 2003 Section II – Page 27 1998 and burnt 1,162 acres of forest land and threatened several homes in the area. Costs tofight this fire were in excess of 3 million dollars.HAZARD IDENTIFICATION:Unlike other disaster events, the direct effects of even a large fire are generally limited to theimmediate area where the fire occurred. However, the community’s normal as well asemergency services may be affected as large numbers of agencies and individual respondersfocus their efforts on the fire. Adjacent fire agencies may be asked for assistance in one formor another and access to a city’s business district may be restricted or closed and the influx ofsightseers and media personnel can further add to the disruption. Furthermore, since most firefighters in Skagit County are volunteers, large fire events could significantly affect not only theirlives but their source of employment should economic impacts continue.Evacuation of a fire zone is one of the first tasks that may need to be undertaken by emergencyresponders. Depending upon the size of the fire zone, the population density of the area, andthe number of persons needingemergency shelter, evacuation effortsmay have a significant effect onother parts of the community.The fire season in Skagit County canbegin as early as mid-May andcontinue through October thoughunusually dry periods can extend thefire season. The possibility of a Rocky Hull Fire – Okanogan County, 2000wildland fire depends on fuel Washington State Department of Natural Resources Photographavailability, topography, the time ofyear, weather, and activities such asdebris burning, land clearing, camping, and recreation. 
In Washington State, wildland fires startmost often in lawns, fields or other open areas, along transportation routes, and forested areas.Due to their size and complexity, large fires can put a tremendous strain on a wide variety ofagencies and jurisdictions within the area that the fire occurs and local resources could bequickly overwhelmed in dealing with the impacts of a large fire.Those persons living or doing business in the area of a large fire could be affected in severalways. Access to the area will probably be controlled or entry may be denied entirely. If arecreational area is involved, this closure may have a severe impact on tourist industry businessand logging operations. In many cases, evacuations may be necessary if the fire directlythreatens residential or commercial areas or in the event health issues could result from heavyvolumes of smoke associated with large fires.The Jordan Creek Fire near Marblemount in 1998 quickly overwhelmed local fire districtpersonnel who initially responded to the fire. Several homes in the immediate area of the firewere threatened; mutual aid provided by adjacent fire districts and a quick response by aDepartment of Natural Resources initiated Unified Command using multiple agencies preventedthe loss of several homes and other structures. Had the wind been blowing in a differentSkagit County Natural Hazards Mitigation Plan September 2003 Section II – Page 28 direction, the fire could have directly threatened the community of Marblemount and local fireresources, already overwhelmed, would have had great difficulty in extinguishing multiplestructure fires in close proximity to each other.The following list is a compilation of comments and suggestions made by variousstakeholders and the public regarding possible problems that could result from awildland or wildland-urban interface fire.In addition to damaging timber lands, agricultural crops, homes, businesses, property, and theenvironment, a wildland or wildland-urban interface fire in Skagit County could potentially resultin the following: Sinclair Island and Cypress Island are particularly vulnerable to wildland fires as there is no fire service on these islands and a response by Washington State Department of Natural Resources crews would be significantly delayed because there is no ferry service to these islands. Fidalgo Island and Guemes Island are very susceptible to wildland-urban interface fires due to the lack of rainfall during the summer months and the large number of homes that are located in or very near heavily timbered areas. Fire hydrants in these areas are typically supplied with water from private water systems that may have inadequate supplies of water for firefighting because of a lack of summer rainfall or long-term drought conditions. All areas of Skagit County are susceptible to wildland or wildland-urban interface fires caused by fireworks and/or human recklessness.VULNERABILITY ASSESSMENT:Those persons living in forested areas or interface areas are most vulnerable towildland or wildland-urban interface fires.Within Skagit County, approximately 25 % of the land area is zoned industrial forest andapproximately 7 % of the land area is zoned agricultural; these areas are vulnerable to wildland orwildland-urban interface fires. However, the potential for large forest fires in Skagit County isnormally small. Improved fire spotting techniques, better equipment, and trained personnel aremajor factors, as are Skagit County’s normally wet climate and high fuel moisture levels. 
Most ofthe industrial forest areas of Skagit County receive in excess of 50 inches of rainfall annually withsome areas receiving as much as 100 inches or more rainfall annually. This wet climate and theinfrequent occurrence of strong, dry winds, normally prevents natural fire fuels from reaching acombustible state. However, warm summer temperatures coupled with seasonal low rainfallamounts sometimes lead to summer drought conditions in the industrial forest. These conditionsare reached more often than most people realize. Luckily, there has been a lack of ignition duringtimes of serious fire danger in Skagit County.The United States Forest Service and/or the Washington State Department of Natural Resourcesmanage most of the forest lands in Skagit County. The excellent fire prevention and controlcapabilities of these two agencies are partially responsible for the lack of large wildland andSkagit County Natural Hazards Mitigation Plan September 2003 Section II – Page 29 wildland-urban interface fires experienced by Skagit County. However, the absence of large firescoupled with reduced burning has also resulted in greater fuel loading which could lead to acatastrophic fire given the right set of conditions.Should a wildland fire or wildland-urban interface fire occur, the impacts of the fire would varygreatly with the size and location of the fire, the weather, and time of year. It is unlikely that amajor wildland or wildland- urban interface fire would seriously impact Skagit County as a whole.In the event of a large wildland or wildland-urban interface fire, additional resources could be requested through activation of the Northwest Region Fire Mobilization Plan and/or the Washington State Fire Mobilization Plan in addition to other state and federal fire resources. While there have always been a certain number of people that have built homes in wooded areas, in recent years, the numbers of peopleBitterroot Valley, Montana - 2000 choosing to build in or very near forest areas has increased dramatically as citylimits have expanded into previously unpopulated and forested areas. As the population of SkagitCounty increases and people desire to live in more rural or isolated areas outside of the floodplain,development in the wildland-urban interface will continue to expand thereby increasing thepotential risk to lives and property from wildland and wildland urban-interface fires.Should a large wildland or wildland-urban interface fire occur in Skagit County, the effects of suchan event would not be limited to just the loss of valuable timber, wildlife and habitat, andrecreational areas. The loss of large amounts of timber on steep slopes would increase the risk oflandslides and mudslides during the winter months and the depositing of large amounts of mudand debris in streams and river channels could threaten valuable fish habitat for many years. Inaddition, the loss of timber would severely impact the watershed of the Skagit River and coulddrastically increase the vulnerability to flooding for many years.The loss of large amounts of timber in the industrial forest areas of Skagit County could severelyimpact the logging industry and possibly the overall economy of the county for many years. With afixed number of acres of timber land available for harvest, timber owners must limit the acresharvested each year in order to properly manage their timber holdings and maintain a continualand sustainable supply of timber. 
The immediate loss of several hundred or thousands of acres oftimber could potentially equal several years of timber harvest acreage.If a significant portion of the business area has been affected, the loss to the community can beoverwhelming. Reduction of payrolls and long-term layoffs during recovery from a large firecould have a serious impact on the buying power of a large sector of the population. A long-term business closure could also have a large impact to the community’s tax base.Skagit County Natural Hazards Mitigation Plan September 2003 Section II – Page 30 The Washington State Department of Natural Resources, Northwest Region, hasconducted a region-wide wildland fire hazard assessment utilizing the followingmethod:1. R.A.M.S (Risk Assessment and Mitigation Strategies) was developed for fire managers to be an all-inclusive approach to analyzing wildland FUELS, HAZARD, RISK, VALUE, and SUPPRESSION CAPABILITY. It considers the effects of fire on unit ecosystems by taking a coordinated approach to planning at a landscape level. The steps involved in this process include:a. The identification of spatial compartments for assessment purposes:i. Skagit County (county # 29) was subdivided into 3 risk assessment compartments based on IFPL (Industrial Fire Precaution Level) Shutdown Zones. Zone 653 represents the islands and tidal lowlands; Zone 656 represents the interior lowlands - roughly the Interstate 5 corridor; Zone 658 represents the uplands to the Cascade Crest (roughly 1500 feet elevation and above). Skagit County risk assessment compartments are numbered utilizing the county number (29) combined with the shutdown zone number. Using this scheme, the three risk assessment compartments within Skagit County are numbered 29653, 29656 & 29658.b. The assessment of significant issues within each compartment are then related to: i. Fuels Hazards ~ The assessment of FUEL HAZARDS deal with identifying areas of like fire behavior based on fuel and topography. Given a normal fire season, how intense (as measured by flame length) would a fire burn? Under average fire season conditions, fire intensity is largely a product of fuel and topography.ii. Protection Capability ~ Initial attack capability will be evaluated on the following criteria. Determining fire PROTECTION CAPABILITY for the purpose of this assessment involves estimating the actual response times for initial attack forces and how complex the actual suppression action may be once they arrive because of access, fuel profile, existence of natural or human-made barriers to fire spread, presence of structures and predicted fire behavior. a. Initial Attack Capability - actual time of first suppression resource. b. Suppression Complexity - access, fuel conditions, structure density, and so forth.iii. Ignition Risk ~ Ignition risk evaluation will be completed for each compartment. Ignition risks are defined as those human activities or natural events which have the potential to result in an ignition. Wherever there are concentrations of people or activity, the potential for a human-caused ignition exists. After assessing the risks within an area, it is helpful to look at historical fires to validate the risk assessment. Historical fires alone, however, are not an accurateSkagit County Natural Hazards Mitigation Plan September 2003 Section II – Page 31 reflection of the risks within a given area. The objective of this effort is to determine the degree of risk within given areas. 1. Compartment Ignition Risk is based on the following: a. 
Population Density b. Power Lines – distribution as well as transmission c. Industrial Operations - timber sale, construction project, fire use, mining, and so forth d. Recreation - dispersed, developed, OHV, hunting, fishing e. Flammables f. Other - fireworks, children, shooting, incendiary, cultural, power equipment g. Railroads h. Transportation Systems - state, federal, public access i. Commercial Development - camps, resorts, businesses, schoolsiv. Fire History ~ Fire history will be completed for each compartment. The history will reflect the following: 1. Fire location 2. Cause 3. Average annual acres burned 4. Average annual number of fire by causev. Catastrophic Fire Potential ~ An evaluation of fire history reflects the potential for an event to occur. An example is if large damaging fires occur every 20 years and it has been 18 years since the last occurrence, this would reflect a priority for fire prevention management actions. 1. Evaluate large fire history 2. What are the odds of a stand replacement type fire occurrence in that compartment? a. Unlikely b. Possible c. Likelyvi. Values ~ A value assessment will be conducted for each compartment. Values are defined as natural or developed areas where loss or destruction by fire would be unacceptable. The value elements include: 1. Recreation - undeveloped/developed 2. Administrative sites 3. Wildlife/Fisheries - habitat existing 4. Range Use 5. Watershed 6. Timber / Woodland 7. Plantations 8. Private Property 9. Cultural Resources 10. Special Interest Areas 11. Visual Resources 12. T & E Species 13. Soils 14. Airshed 15. Other necessary elementsSkagit County Natural Hazards Mitigation Plan September 2003 Section II – Page 32 This evaluation process provides the basis for determining the Skagit County Wildland-Urban Interface Fire Risk Assessment Compartments map. Additional informationregarding the results of this process can be found in Appendix A, Excerpts from theWashington State Department of Natural Resources Northwest Region R.A.M.S.Assessment.2. R.A.M.S risk assessment compartments were further broken down to identify Wildland-Urban Interface Hazards. Using 2000 Census data, individual areas were identified in the Wildland-Urban Interface and assessed using the N.F.P.A. (National Fire Protection Association) 299, Wildfire Hazard Assessment. The results of this assessment are depicted in the Skagit County Wildland-Urban Interface Fire Risk Assessment Based On NFPA 299 Risk Assessment map.PROBABILITY and RISK:Based on historical evidence, there is a low probability of a large wildland or wildland-urbaninterface fire occurring in Skagit County and a low risk to people and property in Skagit Countyas a result of a large wildland or wildland-urban interface fire.However, based upon the newly developed wildland fire hazard assessments conducted by theWashington State Department of Natural Resources utilizing R.A.M.S. and N.F.P.A. 299, there isSkagit County Natural Hazards Mitigation Plan September 2003 Section II – Page 33 a moderate to high potential for a large wildland fire to occur in Skagit County with thepotential for moderate to high (with isolated areas of extreme) risk to people andproperty as a result of a catastrophic wildland or wildland-urban interface fire.CONCLUSION:Skagit County’s typical moist marine climate and low frequency of lightning provide naturalprotection against large wildland or wildland urban-interface fires experienced in EasternWashington, California, and other portions of the United States. 
While wildland and wildlandurban-interface fires do occur in Skagit County on a fairly regular basis during the warmsummer months, these fires are typically very small and are usually extinguished with personneland equipment.Approximately 32 % of the land in Skagit County is comprised of industrial forest or agriculturalland that is vulnerable to wildland or wildland urban-interface fires. Current zoning regulationslimit minimum lot size to 80 acres in the industrial forest and 40 acres in agricultural areas. Inaddition, much of the industrial forest lands are located outside the boundaries of establishedfire districts. Building homes or other structures in or near forested areas increases the risk ofloss from fires. In the past, structures were often built with minimal awareness regarding therisks associated with wildland or wildland urban-interface fires.According to Skagit County Code 14.04.190, new single family dwellings and/or accessorybuildings constructed in areas outside of a fire district are required to meet the followingrequirements: 1. The lot must be a legal lot of record prior to June 11, 1990. 2. Approved non-combustible roofing materials must be used. 3. All slash must be abated within 200 feet of any portion of the exterior of the structure, or to the maximum extent possible if 200 feet cannot be achieved due to lot size. 4. A safety zone must be cleared of flammable vegetation for a distance of 30 feet from any portion of the exterior of the structure on level ground and for a distance of 100 feet downhill on sloped ground. If these dimensions cannot be achieved due to lot size, then dimensions are to be achieved to the maximum extent possible. 5. Any structure greater than 800 square feet in area must have building sprinklers installed that meet National Fire Protection Association 13D standards.With the completion of the recent wildland fire hazard assessments conducted by theWashington State Department of Natural Resources, we now have a better idea of those areaswithin Skagit County that are most susceptible to wildland-urban interface fires. Thisinformation will hopefully provide an incentive for local government to implement new and/oradditional fire education programs such as FIREWISE as well as provide the basis for newand/or additional building regulations in those areas of Skagit County that have been identifiedby this assessment as having high-fire hazard and/or extreme fire hazard.Information regarding what steps homeowners can take tohelp safeguard against wildland-urban interface fires can befound at http://www/firewise.org/.Skagit County Natural Hazards Mitigation Plan September 2003 Section II – Page 34
FIRE - Skagit County, Washington
Description: largest of these fires (the Jordan Creek Fire) occurred near the community of Marblemount in Jordan Creek Fire – Skagit County, 1998 Photograph by Randy Warnock .
Read the Text Version
No Text Content! | <urn:uuid:6f77eb13-98e6-44ca-ad06-6c0e159b8a70> | CC-MAIN-2022-33 | https://fliphtml5.com/krcj/ixjq/basic | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00295.warc.gz | en | 0.923343 | 4,716 | 3.578125 | 4 |
Georgia ranks in the nation’s top 10 in cowpea (southern pea, Vigna unguiculata) production, with estimates of more than 4,900 acres grown in approximately 49 of 159 counties in the state in the 2014 production season. Colquitt County, located in southwest Georgia, leads the state in production with 1,900 acres. According to a 2014 U.S. Department of Agriculture census, there were 19,289 total acres of cowpeas harvested in the U.S., with Georgia ranking fifth nationally. However, the census of agriculture likely underestimates production, as other sources put production in the Unites States between 60,000 and 80,000 acres (Quinn and Myers, 2002). Cowpeas are one of the few dry beans traditionally produced in the southern United States. The majority of dry bean production occurs in the upper-Midwestern United States, with North Dakota producing more than 30 percent of dry beans in the United States out of approximately 1.8 to 2 million acres of beans (http://www.usdrybeans.com/resources/production/production-facts/). Although large numbers of small farmers produce cowpeas for fresh market sales, the majority of commercial acres are devoted to frozen or processed cowpeas.
Typical yields for shelled peas range from 1,000 to 2,000 pounds per acre, depending on variety and whether they are machine or hand harvested, while green pod yields are typically 2,500 to 4,000 pounds per acre (Brandenberger et al., 2007). Cowpeas are often sold shelled in 10 lb plastic bags, which are the rough equivalent of a shelled bushel. A USDA bushel of green peas in the hulls is approximately 25 pounds (Perkins-Veazie and Buckely, 2014).
Production costs for machine-harvested cowpeas are typically $350 per acre with an additional $750 per acre for harvests and marketing costs (University of Kentucky, 2012). Although prices vary depending on market, an average of $1.30 per pound is often used when developing budgets for machine-harvested cowpeas. Assuming a yield of 1,500 pounds per acre, most budgets estimate a return of approximately $1,000 to $1,500 per acre above variable costs and a total return approaching $3,000 per acre (Clemson University estimates, 2016). Using these estimates, the value of the cowpea industry in Georgia is approximately $9.9 million.
Figure 1. Cowpea/Southern Pea acreage in Georgia, 2014, by county. Sources: D. Riley (mapping) and 2014 Georgia Farm Gate.
Most cowpeas are sold domestically, particularly in the Southern United States, where they are commonly consumed as part of the region’s traditional diet.
Much of the commercial production of cowpea in Georgia occurs in the southwestern portion of the state. As described previously, Colquitt County is the leading county in the state with nearly 1,900 acres grown. Crisp County is ranked second with 758 acres, followed by Tift County with 520 acres. Marion, Grady, Worth, Seminole, and Decatur counties contribute another 893 acres of production. The plantings in these counties are typically machine-harvested on a large scale and sold in the wholesale market, namely, to a processing facility. Additional commercial acreage is also grown in eastern Georgia. Most of the remaining acres in the northern region of the state are sold for fresh market or retail and are generally harvested by hand.
Cowpeas are well adapted to grow under dry conditions. However, most commercial acreage in southwest Georgia is grown with irrigation, and the majority of that acreage is irrigated using a center-pivot system, which waters crops using sprinkler equipment that rotates from a midpoint. Planting dates vary, but in southern Georgia, dates range from mid-March for a spring planting to late August for a fall planting. Because of their adaptability to heat, cowpeas will set seed even under the high of mid-summer when other horticultural beans may suffer due to poor pollination. Typically, it is suggested that growers wait until soil temperatures reach 60 oF to plant, otherwise seeds may germinate poorly. In Georgia, plants are usually seeded at four to six seed per foot for bush types and two to four seed per foot for vining types. Seed are usually planted ¾ to 1¼ inches deep. Most rows are spaced 30 or 36 inches apart.
Due to their adaptability, cowpeas can be grown on relatively poor soils and need minimum fertility. Generally, growers will apply no more than 40 to 50 lb/acre of nitrogen to produce a crop. Excessive nitrogen applications can cause excessive growth, render harvest difficult, and increase disease susceptibility. Cowpeas can be successfully grown in conventional or conservation tillage. The tillage regime is usually dependent on the other crops that will be grown in rotation with the cowpeas.
As a legume, cowpeas are excellent rotation partners for a wide range of vegetables as well as traditional agronomic crops. All cowpeas are direct seeded, and most are machine-harvested. “Fresh frozen” peas are often machine-harvested when partially dry and rehydrated before packaging and cooling. Unlike very labor intensive vegetable crops, virtually no hand labor is used for cowpea harvest except for small you-pick acreage or household plantings. Thus, for commercial production, re-entry intervals (REIs) of pesticides are not a significant concern.
It should also be noted that growers routinely use cowpeas as warm-season cover crops. The ‘Iron and Clay’ mix of cowpeas have become very popular for growers looking to put in a drought-tolerant, nitrogen-fixing, summer cover crop. Although they may be harvested for seeds to be replanted for feed plots or cover crops, these are generally not harvested for commercial edible production.
Cowpeas are afflicted by various pests in Georgia, most notably the cowpea curculio, which can be production limiting. This insect pest is fairly widespread, but not all production sites experience the same degree of crop damage on any given year or production season. For example, certain fall plantings of cowpea can experience less damage, even if located in a historically cowpea-curculio-infested region.
For Georgia, the general ranking of pest categories by importance from high to low is: one, insects; two, plant pathogens; and three, weeds, mainly due to the perceived ease of control with registered pesticides. This crop seems to be amenable to more biologically based pest management in the absence of key pests like the cowpea curculio. The following is a brief summary of the major pests of cowpea in Georgia.
The following insects are ranked from the most important and common pests in Georgia, with an emphasis on southern Georgia, where the majority of the production occurs. All insect pests but the cowpea curculio, have satisfactory control options available to growers. The cowpea curculio is the main production-limiting key pest where it occurs in the Southeastern United States.
Cowpea curculio, Chalcodermus aeneus (Coleoptera), is a weevil (Figure 2), that seems to have originated from the Caribbean and Central American regions. It has been reported as the major pest of southern peas where it occurs in the Southeastern United States for well over a century. The distribution of the weevil in the Southeast has been reported roughly in the triangle from southern Texas to North Carolina and south to Florida.
However, with the tremendous decline in southern pea acreage over the last 50 years, the distribution of this weevil is more scattered and tends to be reported more in traditional southern pea production areas of Alabama, Georgia, and South Carolina in recent years. Both larval and adult feeding causes damage to the pea and can make it unmarketable. Feeding and egg laying occurs in the developing pods producing a distinct, dark spot lesion or “sting” on the outside of the pod. Heavy feeding by adults can reduce the amount of flowering, therefore suppressing fruit set in the crop. The grub develops inside of the pod, feeding directly on the seeds and producing frass (insect excrement) inside of the pod (Figure 2). As much as 40 to 60 percent yield loss can be typical (Arant, 1938). The main cowpea plant resistance trait has been the thickness of the pod wall, such as in ‘Green Acre’ varieties, which also have a small pea and lower shell-out weights than a black-eyed pea or pink-eyed purple hull (Chalfant, 1997). Our recent data indicated that as little as 10 percent of “stung” peas resulted in losses of 42.6 bushels per acre based on an average of 150 bushels per acre expected yield. Above 30 percent of “stung” peas resulted in no marketable southern pea yield. The main control is frequent foliar sprays of pyrethroid insecticides, beginning at first bloom through harvest, so pollinators can be negatively affected. Currently, there are no labeled insecticides that provide adequate control if the curculio infestation is heavy. The only partially effective biological control tested to date has been Beauvaria bassiana commercial fungal sprays drenched into the soil during the soil phase of the curculio, but it is applied after harvest and after the damage has already occurred (Riley, unpublished data). For this reason, acreage is currently declining in heavily affected areas, and production is moving to regions that don’t currently have curculios present. Unfortunately, curculios have been documented to establish throughout Georgia and across all Southeastern states if cowpeas are grown consistently in high acreage.
Stink bugs, specifically Southern green stink bug, Nezara viridula, and brown stink bug, Euschistus servus (Hemiptera), are common pests of cowpeas, feeding mainly on the pods during seed development (Figure 3).
Figure 3. Southern green stink bug (left) and brown stink bug (right). Source: Bugwood.
The external damage appears as a small lesion or “sting,” smaller than that caused by the curculio, and the internal damage results in reduced seed weights, but no frass will be present. Stink bugs are highly seasonal and only cause significant damage when they occur in high numbers for short periods during the spring and Figure 2. Cowpea curculio adult (right) and grub (left) with damage to peas. Source: D. Riley, UGA. Pests UGA Cooperative Extension Bulletin 1480 • Crop Profile for Cowpeas in Georgia 5 summer. Stink bugs are relatively easy to control with insecticides, which can be timed to scouting reports, eliminating any need for calendar sprays. Thus, the impact of control on pollinators can be much less than for curculio spray programs. Southern green stink bugs reach damaging levels at 4 stink bugs foot of row of southern peas in Georgia (Nilakhe et al., 1981). Brown stink bugs are very common in cowpeas and according to McPherson (1982), two subspecies of E. servus exist, with E. s. servus (Say) being the most important in the Southeast. Brown stink bug can emerge as early as March in the Southeast. After wheat is harvested and E. servus moves to corn, it has already completed a generation, typically completing two generations a year (Herbert and Toews, 2012). Biological control is generally not used for these insects in cowpeas.
Figure 4. Beet armyworm on pigweed. Source: UT Crops.com.
Armyworms, Spodoptera spp. (Lepidoptera), and in particular, beet armyworm, S. exigua, can cause noticeable damage to the foliage of cowpeas, generally during the summer (Figure 4). It has not been documented whether armyworm damage actually results in significant yield loss because the cowpea plant tends to compensate for some foliar damage, but the assumption is that the treatment threshold is around 15 percent foliage loss from two weeks prior to flowering and until pods have filled, similar to what is recommended for soybean. Thus, scouting and the limited use of insecticides greatly reduce possible negative impacts of insecticide sprays on pollinators. Bacillus thuringiensis insecticides can further reduce impacts on beneficial arthropods in the cowpea crop.
Figure 5. Cowpea weevil, stored dry seed pest. Source: D. Riley, UGA.
Cowpea weevil, Callosobruchus maculatus (Coleoptera), is a stored-grain pest of cowpea that only affects the dried cowpea seed in Georgia, not the fresh frozen product or any field production (Figure 5). Thus, there are currently no field recommendations for control. Storing dried seed at near-freezing temperatures can eliminate the weevil in the seed bags.
Other insects that can cause damage to the plant, but generally occur in low levels, are the banded cucumber beetle, Diabrotica balteata (Coleoptera) which is a defoliating pest; kudzu bug, Megacopta cribraria (Hemiptera), which can feed on stems and pods like stink bugs; cowpea aphid, Aphis craccivora (Hemiptera), which can transmit mosaic viruses and rarely cause damage on their own without the virus; leafhoppers, Empoasca spp. (Hemiptera), a sporadic pest; corn earworm, Helicoverpa zea (Lepidoptera), which rarely causes problems in cowpeas in Georgia; American serpentine leafminer, Liriomyza trifolii (Diptera), which causes mining in the leaves, but rarely warrants control in Georgia; and chrysomelid beetle, Cerotoma ruficornis (Coleoptera), which is sporadic.
The cowpea plant pathogens (Patel, 1985) listed here are ranked from the most important and common diseases in Georgia, with an emphasis on southern Georgia, where the majority of the production occurs. Most pathogens currently have good chemical control options in Georgia cowpeas.
Cercospora leaf spot (Mycosphaerella cruenta) causes circular leaf spots that are not generally restricted by veins (Figure 6). Lesions often have light-brown to gray centers with a reddish border. In time, chlorosis of the entire leaf occurs and blighted areas coalesce to become necrotic. The primary source of inoculum is crop debris or susceptible legumes in the region, as spores are airborne. Although there are effective fungicides available, the low profit margin of cowpeas makes the use of fungicide an unattractive option in Georgia.
Figure 6. Cercospera leaf spot. Source: ICAR Research Complex, India.
Choanephora pod rot (Choanephora cucurbitarum) generally follows cowpea curculio or other physical damage on the pods. Initial symptoms are darkened, water-soaked lesions on the pots (Figure 7). In time, developing seeds and the entire pod succumb to a rather wet, slimy rot. Fungal hyphae will develop and produce dark sporangia and sporangiospores, giving the infected area a “fuzzy” appearance. The disease is similar to Choanephora rot of squash and other cucurbits. Damage by cowpea curculio predispose the pods for Choanephora infection. Hence, curculio management helps in managing this pathogen.
Figure 7. Choanephora pod rot. Source: Bugwood.
Cowpea mosaic virus can be seedborne and transmitted mechanically by aphids. The virus has a wide host range and infects many members of the Chenopodiaceae and Fabaceae families. Symptom expression can vary depending on the host infected. In cowpea symptoms are typical of those caused by mosaic viruses, namely chlorotic spots with rings. When severe, leaf distortion, necrosis, and plant collapse can occur. Some varieties develop necrotic local lesions. No widespread outbreaks of the virus have been observed in recent years, so no current control actions have been needed.
Bacterial blight and canker of cowpea (Xanthomonas axonopodis pv. vignicola) symptoms range from angular, vein-restricted lesions to large wedge- or pie-shaped blighted areas that extend to the leaf margin. Lesions often have a chlorotic (yellowing) halo. Infections produce abundant levels of ethylene, which leads to leaf abscission (shedding) and defoliation (loss of aboveground plant material). In Georgia, the cream and crowder types are more prone to developing stem cankers and plant lodging. The primary source of inoculum is contaminated seed. This was of major importance when Georgia growers produced their own seed, as regional environmental conditions favored seedborne development.
Southern blight (Sclerotium rolfsii) is a soilborne fungus that has an extremely wide host range, at greater than 500 plant species. The fungus is favored by warm, humid, or wet conditions. Infections generally occur in lower stems near the soil surface. Soft rot symptoms develop, and the fungus girdles the stem. Infected plants wilt, lodge, and eventually die. Characteristic signs of infection include a fluffy, white mycelial mat and the presence of mustard-seed like sclerotia clustering on infected tissues.
Rhizoctonia stem canker or damp-off is another soilborne fungus favored by warm, wet conditions. In Georgia, Rhizoctonia is generally associated with damping-off of young seedlings. In older plants, reddish-brown stem cankers can appear and can result in plant lodging.
Other diseases to look out for include root-knot nematodes, Meloidogyne spp.; cowpea severe mosaic virus; anthracnose; Colletotrichum; Fusarium wilt, Fusarium oxysporum; cowpea chlorotic mottle virus; Pythium stem rot, Pythium spp.; Pod rot, Botrytis spp.; Septoria leaf spot, Septoria spp.; Rust, Uromyces appendiculatus; Powdery mildew, Ersyphe polygoni; and target spot, Corynespora cassicola.
Weeds are a major pest group of any vegetable crops, and cowpeas grown as vegetable crops are no exceptions. Since cowpeas are typically grown in bareground production systems during the warmest part of the year, similar to snap beans, the types of weeds affecting cowpeas are similar to the weed complex in other summer legume crops in the Southeastern United States. These include morning glory, pigweed, nutsedge, sicklepod (Figure 8), and others. Growers usually cultivate southern peas until the plants become too large to pass easily through the cultivator. Later in the growing season, if weed control is still needed, herbicides become the main tool for weed management. Weed management of grasses and broadleaf weed species are common, but yellow nutsedge and larger-seeded broad leaves
Figure 8. Sicklepod in abandoned cowpea field.
might be needed. A pictorial guide to common weeds can be found on the University of Florida Extension website (http://nfbfg.ifas.ufl.edu/documents/WeedsoftheSouthernUnitedStates.pdf, accessed April 2017). The months of cowpea production tend to be between April and October, so winter weeds are almost never an issue. For specific recommendations beans for the Southeast, see the “Crop Profile for Beans (Snap) in Georgia.”
Review pesticides recommended for insect, disease and weed management in the Georgia Pest Management Handbook (accessed April 2017).
The only biological controls that might be useful for cowpea pest management are the use of foliar-sprayed Bacillus thuringiensis products for the control of Lepidopteran foliage feeders, such as armyworms, and post-harvest soil treatment with the fungal insecticide Beauveria bassiana strain gha products to reduce the survival of cowpea curculio stages in the soil phase (late instar larvae, pupae and newly emerged adults). Even though recent research shows there may be some promise to biological control techniques, neither of the above biological controls has been proven using commercial-sized plots and a partial budget analysis to be economically viable in the current cowpea production regions of the Southeast. Insecticides, such as pyrethroids, that are harsh on beneficial soil arthropods like predacious ants and beetles, should be avoided because they may increase the survival of cowpea curculio stages in the soil phase.
Planting dates and resistant varieties are the main cultural practices with potential to reduce pest pressure on southern peas. Planting dates can affect the types of weeds present in the crop and can potentially affect the impact of insects and diseases. For example, morning glory and nutsedge are generally more prevalent in the late summer than in the early spring. Cowpea growers have observed less damage from cowpea curculio in the fall, even though the populations of cowpea curculio are always higher in August than in late spring. This is likely due to the weevil going into diapause, which limits reproduction in the pods (Riley, unpublished data). Host-plant resistance to viruses (e.g., cowpea mosaic virus, cowpea severe mosaic virus, cowpea aphid-borne mosaic virus, cowpea chlorotic mottle virus) is critical to production in many parts of the world (Hampton et al., 1997, Lima et al., 2011). Fortunately, we typically don’t see mosaic viruses affecting cowpeas in the field in Georgia, so resistant cultivars are not needed. Cowpea resistance mechanisms to the cowpea curculio have included pod-wall thickness (Cuthbert and Fery, 1975), pod-wall toughness (Chambliss and Rymal 1980), and volatile substances that affect curculio behavior (Chambliss and Rymal 1982), but these were never proven to be economically viable. The newest approach to host-plant resistance in V. unguiculata is through the development of genetically modified (GM) organisms. Genetic engineering has been used to transfer the gene coding for the α-amylase inhibitor α AI-1, a bruchid beetle resistance factor from the common bean (Phaseolus vulgaris L.), into other grain legumes including pea (Pisum sativum L.), azuki bean (Vigna angularis or Wildenow), chickpea (Cicer arietinum L.) and cowpea (Lüthi et al., 2013). α –Amylases are important enzymes for starch digestion and have been shown to be vital for weevil development (Napoleão et al., 2013). Recently, a GM line expressing high amylase inhibitor effects was developed by Higgins et al. (2013). However, this line has not been reported to be tested against the cowpea curculio.
Physical and Post-Harvest Controls
The only physical control program that is potentially relevant for the cowpea is the destruction of pest-contaminated residue. This is true in cases where cowpeas are being mechanically harvested, and pests like the cowpea curculio are prevalent in the post-harvest trash produced after peas are cleaned and bagged (Figure 9). In this case, the recommended practice would be to burn the residue to try to eliminate the weevils coming out of this material, which serves as a pest source for subsequent plantings of the crop. This also pertains to post-harvest pest control activities to try to reduce overwintering sources of pests and pest-contaminated residue. Post-harvest controls for cowpea curculio include the physical elimination of residue, but without controlling the post-harvest curculios that have entered the soil, the benefit of this approach is very limited for reduction of this field pest. The stored seed postharvest pest that can require treatment in Georgia is the cowpea weevil. As previously stated, storing dried seed at near freezing temperatures can eliminate the weevil in the seed bags. Fumigation is rarely used.
Figure 9. Cowpea curculio in packing house trash bin.
For extremely difficult-to-control, production-limiting pests, like the cowpea curculio, the eventual goal is to conduct regional eradication programs. However, this would require more effective pest management tools and greater cowpea grower coordination than is presently available to attempt such an approach in Georgia. Regional coordination and support for cowpea pest management is needed.
Status and Revision History
Published on Aug 24, 2017 | <urn:uuid:2c673f66-5c02-4055-8792-177e329bb6c7> | CC-MAIN-2022-33 | https://extension.uga.edu/publications/detail.html?number=B1480 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00497.warc.gz | en | 0.920169 | 5,331 | 2.640625 | 3 |
Sorry for the long lag since my last post; I'm back in the saddle and will be posting monthly. Please share with your friends!
There are seven hobbies that science suggests will make you smarter. The full article by Christina Baldassarre in Entrepreneur magazine is worth reading (tinyurl.com/on7c2cy):
1. Play a musical instrument. It works the part of your brain that improves executive skills, memory, problem solving and overall brain function, regardless of how old you are.
2. Read anything. Reading reduces stress and helps with problem solving, detecting patterns and understanding processes.
3. Exercise regularly. Exercising floods your cells with BDNF, a protein that helps with memory, learning, focus, concentration and understanding.
4. Learn a new language. People who are bilingual are better at solving puzzles than people who speak only one language.
5. Test your cumulative learning. Keep a journal of noteworthy bits of knowledge and observations you acquire.
6. Work out your brain. Sudoku, puzzles, riddles, board games, video games and card games increase neuroplasticity.
7. Meditate. Different activities stimulate different areas of your brain and you can meditate on your strengths and weaknesses.
• • •
Successful kids have eight specific (somewhat unfair) things in common, according to studies cited in a recent issue of Business Insider:
1. Parents who teach them social skills. Socially competent children are far more likely to finish college and obtain employment.
2. Parents who have high expectations for their children (expectations lead to attainment).
3. Moms who work. Daughters of working mothers are more likely to have more responsibility and higher pay than peers raised by stay-at-home mothers.
4. They have a higher socioeconomic status. The achievement gap between high- and low-income families is growing.
5. More educated mothers. Mothers who finish college are more likely to raise kids that do the same.
6. Parents who teach them math early on. Mastery of early math skills predicts future success in school.
7. Parents who develop a relationship with their kids. Children with healthy parental relationships show greater academic attainment in their 30s.
8. Parents who value effort over avoiding failure. Whether kids attribute their success to smarts or to effort also predicts their attainment.
I recommend reading the entire article to learn about the science behind the theories. businessinsider.com/set-your-kids-up-for-success-2015-8
• • •
The Smart Parent blog has compiled a list of the “top 20 new children’s books to read with your kids in 2015.” These aren’t classics; they are new releases across a wide range of ages and topics. The complete list appears at tinyurl.com/oxfkwac, but here are five to get you started:
1. “The Rechargeables: Eat Move Sleep.” An engaging story about making healthy choices.
2. “Goodnight, Goodnight Construction Site.” Kids love learning about their favorite big machines.
3. “Incredible You! 10 Ways to Let Your Greatness Shine Through.” With Q & A to encourage conversations with your kids about their feelings.
4. “The Day Crayons Quit.” My teen worked in a children’s bookstore this summer and this was her favorite.
5. “Little Magic Books.” This combination book/smart phone app is a big hit with kids.
• • •
I just finished interviewing Jessica Lahey about her new best-selling book, “The Gift of Failure: How the best parents learn to let go so their children can succeed.” The link to my article is here. This is perhaps the best education/parenting book that I have ever read. The most important thing that she wants parents to take away from her book is that parents today need to “parent for the long term, not the moment.” “Inserting yourself into your child’s homework to avoid tears or so their grade doesn’t suffer solves today’s problem, but isn’t helping you raise a better kid.”
• • •
There is good news for families of high school students needing financial aid for college. The U.S. Department of Education has simplified the FAFSA financial aid form, and this year, for the first time, families can fill out the form starting Thursday, Oct. 1, instead of having to wait until their tax information is ready in January. This change will make it easier for families to project whether a college will be affordable, and will likely encourage more families to fill out the form. Families can start with the government’s new College Scorecard to get a sense of affordability and follow up with the FAFSA to predict the aid they will receive. fafsa.ed.gov and collegematch.ed.gov.
• • •
Is knowing how to properly say hello one of the lost skills of childhood? Kinstantly.com writer Paula Spencer Scott worries that kids’ screen-based culture means that they are losing the ability to read non-verbal communications, like what to do with an outstretched hand. Scott suggests that you make sure your children see you saying hello (to the supermarket bagger, etc.) and then point out the basics:
• When someone says hello, we say hello back.
• No pressure to come up with anything original. “Hi, I’m Jake” or “Hi, how are you/I’m fine, thanks” is sufficient.
• It’s not dorky. It’s what civilized people do.
• It won’t kill you. And it feels nice – for you and the person you’re talking to.
Also show your child a basic handshake. She recommends that you not force greetings but applaud their effort and give gentle reminders when they forget. Kids who know how to say hello and make the effort to greet adults really stand out. blog.kinstantly.com/how-to-say-hello/
• • •
The Grateful Graduate Index looks at the top colleges in a unique way. This Forbes list ranks the top 50 colleges by donations from recent alums, with the theory that the best colleges are the ones that produce successful people who make enough money during their careers to be charitable, and feel compelled to give back to their alma mater. The top five schools were: Princeton University, Dartmouth College, Williams College, Claremont-McKenna College and Bowdoin College. Not many public colleges make the top 100, and none in California. tinyurl.com/pnkh4d3
• • •
It can be hard to find books for teenage boys (or men) who don’t love to read. I am going to go out on a limb here and strongly recommend two books that I loved and that I can almost guarantee the young man in your life will enjoy. And maybe you as well. The first is “Ready Player One” by Ernest Cline, which was recommended to me by none other than Mark Zuckerberg (and his millions of other “friends”). The other is “The Martian” by Andy Weir. The latter has just been released as a movie, so read it quickly. Both are set in the near future. My 17-year-old son resisted repeatedly, but when he finally relented, he loved them both.
• • •
I thought these “Top 10 Tips to Help your Child Thrive in School This Year” were logical and relevant:
1. Ask your child: “How was your day? Learn anything interesting? Get to spend time with friends?” instead of “How did you do on the math test?”
2. Resist the urge to correct the errors in your child’s homework. It’s your child’s work, not yours.
3. Work done with integrity is more important than an A. Pressure to achieve only top scores can make students resort to cheating.
4. Make time for “PDF”: playtime, downtime, family time. Research shows “PDF” is critical for overall well-being.
5. Create a technology-free environment during mealtimes. Every adult and child can benefit from a break from constant interruptions and distractions.
6. Collaborate with your child’s teachers. Assume best intentions and work together to solve problems.
7. Fight the temptation to bring your child’s forgotten homework to school. Kids gain resilience by learning from small failures.
8. An extra hour of sleep is more valuable than an extra hour of studying. Research shows sleep deprivation can be associated with depression and anxiety.
9. When your child wants to talk with you, stop what you are doing and engage. “I hate school” may really mean “I am being bullied” or “I don’t fit in…”
10. Help your child develop his or her interests and strengths. Discover what your child really loves to do outside of school, not what you think a college admissions officer would like to see on an application. (Source: Challenge Success)
• • •
Optimally, high school should start between 10 a.m. and 10:30 a.m., and college classes should be held no earlier than 11 a.m., according to researchers from the University of Oxford, Harvard Medical School and the University of Nevada, Reno. They found that earlier start times for schools interrupt students’ circadian cycles – affecting their health and academic achievement. The full article on this research was the story “most read” by administrators last week on a leading education website, tinyurl.com/pbmf233
• • •
The White House has been promising a new college rating system for years, and it has finally unveiled a website without ratings but with useful information about real costs, graduation rates and salaries after graduation. The site details how much each school’s graduates earn; how much debt they graduate with; and what percentage of a school’s students can pay back their loan. The goal of the scorecard is to help students avoid making poor choices when choosing a college. The new “scorecard” can be found at collegescorecard.ed.gov.
• • •
Looking for a TV show that won’t turn your kids’ brains to mush? The new show “Project Mc2” is available only on Netflix, but it is worth checking out. The show aims to dismantle stereotypes associated with STEM subjects by casting four diverse, intelligent teen girls as math and science whizzes who are recruited by an elite, all-female organization of secret agents.
• • •
Is your student struggling with algebra or resistant to learning math? There is a great website called Get The Math geared toward middle and high school students that helps them build problem-solving skills and solve real-world problems with algebra. There’s no login or saved data; kids can watch video clips of professionals using math in their jobs. The site then poses mathematical challenges. It is fun and teaches “legit” algebra. getthemath.com
• • •
Time magazine ran a great article suggesting bold ways that the U.S. could make schools better for today’s kids. Here are some that resonated with me:
• Ditch traditional homework, particularly for elementary school students. Better – read for 30 minutes.
• Make recess mandatory – it recharges kids’ brains. Incorporating movement into lesson plans is also good.
• Screen children for mental illness, similar to the way kids are given basic hearing and vision screenings.
• Prioritize diversity. Attending a diverse school can lead to higher academic achievement and better preparation for real world work environments.
• Turn discipline into dialogue (when problems arise, focus on discussion not detention).
• Let students customize their curriculums. Use technology as a means of truly differentiating instruction.
• Start classes after 8:30 a.m. It is harder for adolescents to stay healthy (and learn) on less than eight hours of sleep.
• Design cafeterias that encourage healthy eating.
• • •
Women have been graduating from college in greater numbers than men for several years now. According to recent statistics in Time magazine, 34 percent more women than men graduated from a four-year college in 2012, and by 2023, the U.S. Department of Education expects that there will be 47 percent more female college grads than male. The long-term implications of this trend are only now being explored.
• • •
There is an iPhone app that you can use to take a photo of your child’s math workbook problem and it will tell you if their answer is correct. It is a genius idea for parents who aren’t great at math, or students who want to check their work. PhotoMath instantly displays the correct answer with a step-by-step explanation. Yes, there are dangers to this, but it is a pretty cool invention. appsto.re/us/UPcY2.i
• • •
A new invention may dramatically improve concussion screening. Sports-related brain trauma sends a quarter-million American kids to the ER every year. A material developed at the University of Pennsylvania may help detect when a hit is hard enough to damage the brain. A small chemical strip inside any helmet changes color on impact to measure the force of a collision. tinyurl.com/o4cwnlt
• • •
There is a terrific app named EPIC! which offers unlimited access to tens of thousands of read-along book choices, ebooks and audiobooks for kids under 12. The app is free for teachers but for families with up to four users it is $5/month. Struggling readers will find the options particularly helpful.
• • •
Six unexpected reasons your child should have a pet, according to Elizabeth Street on the Learning Liftoff blog:
1. Avert allergies. Studies show that young children who have pets in the home are less likely to develop pet allergies, and various unrelated allergies as well.
2. Curtail cold. The American Academy of Pediatrics concluded that when babies have contact with animals, especially dogs, they are “healthier” and have “fewer respiratory tract symptoms” and infections.
3. Improve social skills. Having a pet also increases a child’s awareness of the needs and feelings of animals, leading them to be compassionate adults as well.
4. Encourage learning. A study found that kids had lower stress levels and were more enthusiastic about reading to a dog rather than a peer or an adult.
5. Bring comfort. Being a kid is tough. Having an animal to love can help kids get through the tougher times of their lives.
6. Learn leadership. Having a pet means daily chores that cannot be missed (teaching children lessons in discipline and reliability). tinyurl.com/oje6tba
• • •
Is picky eating a harmless phase or a sign of deeper emotional troubles? The Wall Street Journal caused a stir recently with a story citing new research that moderate and severe cases of picky eating is associated with higher levels of anxiety and depression later in life, as well as separation anxiety and ADHD. The study ran in the journal Pediatrics. Early therapy can help. For most kids, thankfully, it just a phase. tinyurl.com/o8dsuhp
• • •
Students may retain far less information when they take notes on a laptop. The problem is that laptop note-takers attempt to transcribe everything verbatim—rather than actively listening and capturing the most important points. In the study, students watched Ted Talks and were quizzed soon afterward on what information they retained. The scores of the students taking notes in longhand far surpassed the laptop note takers. This is an issue as more and more high school and college students now rely almost entirely on their computer for classwork.tinyurl.com/plqk7xe
• • •
We have two Google Chromebooks at home and love them. As more students are also using Chromebooks and Google docs in their classrooms, the continuity at home can be helpful. For the $250 price, they really can’t be beat. But it can be hard to decide among the many Chromebook manufacturers and specifications. There is a great online chart comparing all available models at tinyurl.com/ooke7yn.
• • •
Artificial turf fields are replacing grass fields across the country but parents have concerns about heat and toxicity. After months of research, Sonoma Academy has begun construction on a new Futrfill turf field that is made without any heavy metal, phthalate, bisphenol-A, or other toxic chemical leaching issues. This material retains less heat than crumb rubber, but it has the playability of rubber. It is also up to 30 degrees cooler and will also save nearly two million gallons of water per year.
• • •
Noodle is an education website aimed at helping parents and students make better decisions about learning. The site offers search tools to help find the right preschool, college, tutor, or any other learning resource. In addition, you can read expert-authored articles, ask questions and get answers from experts, and connect with others. It is worth a look. A quick search for the Sonoma yielded some tutors and school information. noodle.com
• • •
There is a terrific list of dozens of discounts available to college students with a school ID or .edu email address on the Chegg Blog online. The list, available at http://tinyurl.com/ndedvap, includes computers, music and movies, clothing and more.
The best gift ever for a child or grandchild who loves to build things? Check out tinkercrate.com. Each month, the child receives a kit with hands-on building activities. You can purchase a two-, six- or 12-month subscriptions with free shipping for around $20 per month. Each age range, from 3 to 16-plus gets a different crate.
“Teachers who aim to control students’ behavior—rather than helping them control it themselves—undermine the very elements that are essential for motivation: autonomy, a sense of competence, and a capacity to relate to others,” wrote Katherine Lewis in a recent article on discipline in Mother Jones magazine. The article, titled “Why everything you knew about disciplining kids is wrong,” goes on to say that building up children’s problem-solving techniques is key. The complete article is online at http://tinyurl.com/peb7hz5.
Success in college and in life ultimately comes down to three words, according to former university dean and award winning author Jeff Beals: Responsibility. Authority. Accountability. He says: “Every individual has responsibility for himself or herself. Nobody else can or should make decisions for you. Fortunately, each of us has the authority to carry out that responsibility. Nobody has the right to take away the power you have over your own life. Finally, we are accountable for the decisions we make – good or bad. You live with the consequences of your decision-making and actions.” You can read the full article at http://tinyurl.com/o6pz8qm.
I loved reading “10 Things Teens Really Need to Know Before They Leave Home” in Real Simple magazine this summer. Author Kristin van Ogtrop describes the list as skills that won’t get teens into college, but will make them better people.
1. Write a letter. An actual letter that does not begin with “Hey” and is written, in handwriting, on real paper.
2. Learn to cook a good meal that can feed the entire family, no matter what size family you have.
3. Hold down an unpleasant job that makes you hate your parents a little bit because they won’t let you quit.
4. Go somewhere for the weekend without your phone, just so you know what it feels like to be in solitary confinement, or dying.
5. Every time you get a new toy or gadget, give an old toy or gadget away to someone who doesn’t get new things as often as you do.
6. Take care of someone or something other than yourself. A pet does nicely here.
7. Write a heartfelt thank-you note to someone over the age of 70. Even if this person hasn’t given you a holiday or birthday present, find something to thank them for.
8. Read a book for pleasure.
9. Do something nice for a neighbor without expecting any credit for it.
10. Don’t race to the top. If you want to aim for the top, good for you. But try to get there slowly, deliberately, without knocking everyone else out of the way – or missing the beautiful view. | <urn:uuid:40068603-0133-4594-a0fc-0c8a90affc3f> | CC-MAIN-2022-33 | https://educationroundupnational.com/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00697.warc.gz | en | 0.946461 | 4,540 | 2.859375 | 3 |
Ruth Bader Ginsburg, associate justice of the Supreme Court of the United States from 1993 to 2020. “Ruth Bader Ginsburg will forever be remembered in American history as the champion of the rights of women to sign a mortgage without a man and the right to own a bank account without a male co-signer among many other accomplishments.” Ogechi Igbokwe , founder of OneSavvyDollar, millennial personal finance platform. Required fields are marked *, 5 Facts of Life That Are Incredibly Difficult to Accept, 7 Mind Blowing Questions to Make You Rethink Life, Published on September 29, 2020 8:25 AM EST, Why Keeping Your Dreams To Yourself Can Help You Achieve Them. The Supreme Court justice had an indelible impact on the culture at large. Ruth completed her legal education at Columbia Law School, serving on the law review and graduating in a tie for first place in her class in 1959. Ruth Bader Ginsburg was nominated to the Supreme Court of the United States by President Bill Clinton on June 14, 1993. She authored dozens of law review articles and drafted or contributed to many Supreme Court briefs on the issue of gender discrimination. How do you try to describe someone as legendary as Ruth Bader Ginsburg? Their daughter, Jane, their first child, was born during this time. She died of the disease four years later, just days before Ruth’s scheduled graduation ceremony, which Ruth could not attend. U.S. Supreme Court Justice Ruth Bader Ginsburg talking to law students at Northwestern University, 2009. Follow here as we remember the life of the second woman appointed to the bench. I wonder what dreams her parents had for her? She was the first Jewish woman, and only the second woman ever, to earn a place on the most important judicial body in America. On the Court, Ginsburg became known for her active participation in oral arguments and her habit of wearing jabots, or collars, with her judicial robes, some of which expressed a symbolic meaning. I think back to that moment where Ruth, one of nine female law students, had to justify her taking away a seat from a man, and I hope she felt vindicated when they swore her in on a court of nine. Hired by the Rutgers School of Law as an assistant professor in 1963, she was asked by the dean of the school to accept a low salary because of her husband’s well-paying job. Ruth Bader asistió a la Universidad Cornell con una beca. Ruth Bader Ginsburg is iconic. I, like many other American women, are feeling this loss deeply. Skip to main content. She enjoyed cordial professional relationships with two well-known conservative judges on the court, Robert Bork and Antonin Scalia, and often voted with them. We should honor her wish to not fill her seat until after the election, but that is beyond my control. She was 87. Ruth Bader Ginsburg Was More Than A Political Icon. Durante su primer año, conoció a un estudiante de segundo año, Martin Ginsburg. She partnered with the ACLU and drafted two federal cases. The Ginsburgs then moved to Massachusetts, where Martin resumed—and Ruth began—studies at Harvard Law School. “[G]eneralizations about ‘the way women are,’ estimates of what is appropriate for most women, no longer justify denying opportunity to women whose talent and capacity place them outside the average description,” she wrote. During the remainder of the 1970s, Ginsburg was a leading figure in gender-discrimination litigation. 
She enjoyed a special connection with Justice Sandra Day O’Connor, a moderate conservative and the first woman appointed to the Supreme Court, and she and conservative Justice Antonin Scalia famously bonded over their shared love of opera (indeed, the American composer-lyricist Derrick Wang wrote a successful comic opera, Scalia/Ginsburg, celebrating their relationship). She was the second woman to serve on the Supreme Court. Ginsburg was born Joan Ruth Bader on March 15, 1933, in Brooklyn, New York. Ruth Bader Ginsburg, a supreme court justice and singularly influential legal mind, was appointed by Bill Clinton in 1993, the court’s second-ever female justice, and served for nearly 30 years. Such an approach, she claimed, “might have served to reduce rather than to fuel controversy.”. She wrote dissents articulating liberal perspectives in several more prominent and politically charged cases. costumes for Halloween. On August 3, she became the second woman appointed to the Supreme Court. Thank you, Ruth Bader Ginsburg, for a life well lived and for what you did for me--and for my dad--and for so many others who stand shoulder-to-shoulder with me today. Ruth Bader Ginsburg is dead at 87. RBG Was The First Jewish Woman To Serve On The Supreme Court After 13 years as a senior judge, Bader Ginsburg was appointed to the Supreme Court by Bill Clinton in 1992. It doesn’t surprise me that Ruth Bader Ginsburg got involved professionally in gender equality issues. After Martin was drafted into the U.S. Army, the Ginsburgs spent two years in Oklahoma, where he was stationed. Among her many activist actions during her legal career, Ginsburg worked to upend legislation that discriminated based on one’s gender, was a founding counsel for the American Civil Liberties Union’s Women’s Rights Project, designed and taught law courses on gender discrimination laws, and was outspoken about her disagreements with her colleagues’ decisions during her tenure as a Supreme Court of the United States justice. Associate Professor of Political Science, Queens University of Charlotte. The 80s would welcome her first federal appointment. Here are three of her most lasting legacies. While obviously, I didn’t know her personally; I feel like I have lost someone very dear. The justice, who died Friday at the age of 87, attended Harvard Law, where she was famously one of only nine women in her class of hundreds. Ruth joined him 14 months after their daughter was born. She became especially known for wearing a different type of jabot (kind of a cross between a collar and a necklace) when she had written a dissent. After she became pregnant with the couple’s second child—a son, James, born in 1965—Ginsburg wore oversized clothes for fear that her contract would not be renewed. On June 14, 1993, President Bill Clinton nominated her to replace Justice Byron White. Born on March 15, 1933, Ginsburg's brilliance was apparent from a young age. After his recovery, Martin graduated and accepted a job with a law firm in New York City. Ginsburg attracted attention for several strongly worded dissenting opinions and publicly read some of her dissents from the bench to emphasize the importance of the case. Ruth once said, “When I’m sometimes asked ‘When will there be enough [women on the Supreme Court]?’ and I say ‘When there are nine,’ people are shocked. She also wrote the dissent for Bush v. Gore, in which the Supreme Court of the United States ruled against a recount in Florida during the presidential election of 2000. 
No one can replace you, but as long as those of us who have benefitted from your work pass it forward, you are ever with us. And to help repair tears in her society, to make things a little better through the use of whatever ability she has. Our editors will review what you’ve submitted and determine whether to revise the article. Omissions? Outside her family, Ginsburg began to go by the name “Ruth” in kindergarten to help her teachers distinguish her from other students named Joan. Ruth Bader Ginsburg was a generation’s unlikely cultural icon. At the time, only a very small percentage of lawyers in the United States were women, and only two women had ever served as federal judges. A look back at the life and career of U.S. Supreme Court Justice Ruth Bader Ginsburg. She studied Swedish civil procedure, and eventually co-writing Civil Procedure in Sweden, with Anders Bruzelius. Be on the lookout for your Britannica newsletter to get trusted stories delivered right to your inbox. Let us know if you have suggestions to improve this article (requires login). Ruth Bader Ginsburg made legal history in academia beginning in her 20s, working her way through the legal ranks to become a Supreme Court justice at age 60. The first, Gonzales v. Carhart, upheld the federal Partial-Birth Abortion Ban Act on a 5–4 vote. Ring in the new year with a Britannica Membership, https://www.britannica.com/biography/Ruth-Bader-Ginsburg, Jewish Women's Archive - Biography of Ruth Bader Ginsburg, Academy of Achievement - Biography of Ruth Bader Ginsburg, National Women's Hall of Fame - Biography of Ruth Bader Ginsburg, Jewish Virtual Library - Biography of Ruth Bader Ginsburg, Ruth Bader Ginsburg - Children's Encyclopedia (Ages 8-11), Ruth Bader Ginsburg - Student Encyclopedia (Ages 11 and up), Patient Protection and Affordable Care Act, McCutcheon v. Federal Election Commission, Arlington Central School District Board of Education v. Murphy. All her hard work earned her a spot on the Harvard Law Review. However, nothing warms my heart more than little girls dressed as RBG for Halloween this year. On September 18, 2020, Ruth Bader Ginsburg passed away at the age of 87. A social media uproar about the Justice conducting a couples' wedding just proves how deluded we all are. Yet, she was a fervent supporter of Ruth’s academic and professional ambitions. However, law school and motherhood wasn’t the only challenge that RBG would face. Justice Ruth Bader Ginsburg began her fight for equal rights when she was young. Or ‘get into good trouble’ as John Lewis would have said. Updates? Ginsburg argued that the majority’s reasoning was inconsistent with the will of the U.S. Congress—a view that was somewhat vindicated when Congress passed the Lilly Ledbetter Fair Pay Act of 2009, the first bill that Democratic U.S. Pres. It doesn’t surprise me that Ruth Bader Ginsburg got involved professionally in gender equality issues. Upon her husband’s graduation, the family moved to NY for his job, and Ruth transferred to Columbia for her last year; she also sat on the law review. On March 15, 1933, an ordinary day in Brooklyn, NY, little Joan Ruth Bader was born, and the world welcomed a force of nature. After the Columbia project, they hired her as an assistant professor at Rutgers. Ruth Bader Ginsburg, née Joan Ruth Bader, (born March 15, 1933, Brooklyn, New York, U.S.—died September 18, 2020, Washington, D.C.), associate justice of the Supreme Court of the United States from 1993 to 2020. On June 14, 1993, Democratic U.S. Pres. 
Joan Ruth Bader was the younger of the two children of Nathan Bader, a merchant, and Celia Bader. She would write many more law review articles and Supreme Court drafts during this time. Early in her tenure on the Court, Ginsburg wrote the majority’s opinion in United States v. Virginia (1996), which held that the men-only admission policy of a state-run university, the Virginia Military Institute (VMI), violated the equal protection clause. Her childhood nickname of “Kiki” was given to her because she always kicked her little legs around as an infant. Ginsburg became a pop culture icon in recent years, with her fiery dissents laying the groundwork for the … The longtime Supreme Court justice isn't just popular with those who follow politics — she's also a feminist role model and an inspiration to people all over the world. Jurisprudence What Ruth Bader Ginsburg Would Want America to Do Now Throughout all of the late-breaking, notorious fame, the justice knew that she was just one link in the chain. She earned tenure at Rutgers in 1969. President Jimmy Carter appointed her to the US Court of Appeals, where she served with conservative judges Robert Bork and Antonin Scalia. During her first semester, she met her future husband, Martin (“Marty”) Ginsburg, who was also a student at Cornell. In 1972 she became founding counsel of the ACLU’s Women’s Rights Project and coauthored a law-school casebook on gender discrimination. The second woman to serve on the Supreme Court, she became an articulate representative of liberal perspectives on the Court and eventually the leader of the Court’s minority liberal bloc. As you can imagine, when this toddler started growing up, she became even more of a handful.”. Let's take a look at the obstacles she overcame, and all she accomplished. She authored several majority opinions, as do all of the justices. Joan Ruth Bader Ginsburg (/ˈbeɪdər ˈɡɪnzbɜːrɡ/ BAY-dər GINZ-burg; March 15, 1933 – September 18, 2020) was an American lawyer and jurist who served as an associate justice of the Supreme Court of the United States from 1993 until her death in September 2020. The Baders were an observant Jewish family, and Ruth attended synagogue and participated in Jewish traditions as a child. During the decade, she argued before the Supreme Court six times, winning five cases. Voices Ruth Bader Ginsburg is not a superhero and you shouldn’t expect her to be. By signing up for this email, you are agreeing to news, offers, and information from Encyclopaedia Britannica. For the last 20-plus years of her life, Ginsburg worked out twice weekly with a personal trainer—the same … She was a vocal proponent of women’s rights cases like: She has affected women’s lives in so many ways, and it necessary to acknowledge her value as a person, a lawyer, and a woman. The first was about a tax code provision that denied single men with families a tax deduction. “We’re already dealing with the gender wage and wealth gap, but … The second daughter of Nathan and Celia Bader, she grew up in a low-income, working-class neighborhood in … Her Harvard Law professor’s recommendation didn’t even get her an interview with Supreme Court Justice Felix Frankfurter. When Justice Ruth Bader Ginsburg began her legal career in 1959, the United States was a nation of gender apartheid. She seemed to thrive on it and still was a loving and caring wife and mother. (Ginsburg later said that she regretted the remark.) 
Ruth Bader Ginsburg, as she was now known, went on to teach at Rutgers and Columbia law schools and in 1972 co-founded the Women’s Rights Project at … Have you seen these Ruth Bader Ginsburg quotes and sayings? While serving as a judge on the D.C. Although Ginsburg tended to vote with other liberal justices on the Court, she got along well with most of the conservative justices who had been appointed before her. Nevertheless, some liberals, citing Ginsburg’s advanced age and concerns about her health (she was twice a cancer survivor) and apparent frailty, argued that she should retire in order to allow Obama to nominate a liberal replacement. In 1972, Ruth broke another precedent by becoming the first female faculty member to earn tenure at Columbia Law School. Ginsburg decried the judgment as “alarming,” arguing that it “cannot be understood as anything other than an effort to chip away at a right [the right of women to choose to have an abortion] declared again and again by this Court.” Similarly, in Ledbetter v. Goodyear Tire, another 5–4 decision, Ginsburg criticized the majority’s holding that a woman could not bring a federal civil suit against her employer for having paid her less than it had paid men (the plaintiff did not become aware of her right to file suit until after the filing period had passed). At about the time when Ruth started high school, Celia was diagnosed with cancer. Her husband was diagnosed with testicular cancer while the pair were at Harvard. Paid for by Jennifer Brunner Committee Gretchen Green, Treasurer 35 N. Fourth St., Ste. Ruth entered Cornell University on a full scholarship. In 1971 she published two law review articles on women’s liberation and taught a seminar on gender discrimination. For her own part, Ginsburg expressed her intention to continue for as long as she was able to perform her job “full steam.” On the day after Martin Ginsburg died in 2010, she went to work at the Court as usual because, she said, it was what he would have wanted. She was endorsed unanimously by the Senate Judiciary Committee and confirmed by the full Senate on August 3 by a vote of 96–3. Celia, Ruth’s mom, suffered from poor health her whole life, passing away from cancer the day before Ruth graduated from high school. She praised the work of the first chief justice with whom she served, William Rehnquist, another conservative. She went on to Cornell University and then Harvard Law, where she was one of nine female students. Martin and Ruth were married in June 1954, nine days after she graduated from Cornell. Ruth Bader Ginsburg is widely regarded as a feminist icon. Your email address will not be published. Despite her excellent credentials, she struggled to find employment as a lawyer, because of her gender and the fact that she was a mother. Now that she has passed away, America will need to carry on. Ruth Bader Ginsburg 1933-2020 21 photos. She was confirmed by the Senate on August 3, 1993, by a vote of 96–3. Ruth’s accomplishments didn’t stop at graduating from high school. No other student had ever sat on both reviews, but Ruth was just beginning to shatter the precedents. But, she also wrote some very powerful dissents. She identified, for example, both a majority-opinion collar and a dissent collar. She was the second woman to serve on the Supreme Court. I have seen so many RBG shirts, wall art, and other things, and I want them all. Ginsburg had less in common with most of the justices appointed by Republican U.S. Presidents George W. Bush and Donald J. 
Trump, however. Not only was she a woman, but she was also a young mother. She was also influenced by two other people—both professors—whom she met at Cornell: the author Vladimir Nabokov, who shaped her thinking about writing, and the constitutional lawyer Robert Cushman, who inspired her to pursue a legal career. Ruth Bader Ginsburg wrote and sometimes read aloud strongly worded dissents, including her dissents in the Gonzales v. Carhart and Ledbetter v. Goodyear Tire cases, both of which concerned women’s rights. I did it and received a reply from my state’s senators within a few hours. Ginsburg argued that the Court should have issued a more limited decision, which would have left more room for state legislatures to address specific details. Trump’s electoral victory renewed criticism of Ginsburg for not having retired while Obama was president. In Shelby County v. Holder (2013), the Court’s conservative majority struck down as unconstitutional Section 4 of the Voting Rights Act (VRA) of 1965, which had required certain states and local jurisdictions to obtain prior approval (“preclearance”) from the federal Justice Department of any proposed changes to voting laws or procedures. ‘Cause I’ve gotten much more satisfaction for the things that I’ve done for which I was not paid.”. As a part of the course, Ginsburg partnered with the American Civil Liberties Union (ACLU) to draft briefs in two federal cases. In an interview in 2016 Ginsburg expressed dismay at the possibility that Republican candidate Donald Trump would be elected president—a statement that was widely criticized as not in keeping with the Court’s tradition of staying out of politics. She typed all his papers for him, besides taking care of him and their daughter. A Supreme Court hero, and all-round wise woman, Ruth Bader Ginsburg died on Friday at the age of 87 surrounded by family at her home in Washington, D.C. She partnered with the ACLU and drafted two federal cases. Ruth Bader Ginsburg's Dissent Jabot. As a mother myself, I know how proud Celia must have been of her baby. In 1971 she published two law review articles on women’s liberation and taught a seminar on gender discrimination. She was an icon, a symbol, a beacon of hope, and a guardian for those that needed her help. Ginsburg, in dissent, criticized the “hubris” of the majority’s “demolition of the VRA” and declared that “throwing out preclearance when it has worked and is continuing to work to stop discriminatory changes is like throwing away your umbrella in a rainstorm because you are not getting wet.” Ginsburg was likewise highly critical of the majority’s opinion in Burwell v. Hobby Lobby Stores, Inc. (2014), a decision that recognized the right of for-profit corporations to refuse on religious grounds to comply with the Affordable Care Act’s requirement that employers pay for coverage of certain contraceptive drugs and devices in their employees’ health insurance plans. Ginsburg became only the second woman ever to serve as a … By Emma Gray. Ruth Bader Ginsburg could probably do more pushups than you. Now, what will we do with her legacy? Ruth Bader Ginsburg worked to advance equal rights for women long before she was on the Supreme Court. He convinced Judge Edmund Palmieri to offer Ruth a clerkship. 
Her partial dissent in the Affordable Care Act cases (2012), which posed a constitutional challenge to the Patient Protection and Affordable Care Act (also known as “Obamacare”), criticized her five conservative colleagues for concluding—in her view contrary to decades of judicial precedent—that the commerce clause did not empower Congress to require most Americans to obtain health insurance or pay a fine. Interviews and Podcasts on Everyday Power, Why Trump’s Presidency is a Trauma Survivor’s Nightmare, Imposter Syndrome & What You Can Do About It, 34 Revealing Judy Garland Quotes About The Star’s Life, 50 Shining Jack Nicholson Quotes From The Hollywood Veteran, 50 Hilarious W.C. Fields Quotes On Laughing At Life. Articles from Britannica Encyclopedias for elementary and high school students. Image credit to https://dissentpins.com/pages/rbgandthedissentcollar. As associate director of the Columbia Law School’s Project on International Procedure (1962–63), she studied Swedish civil procedure; her research was eventually published in a book, Civil Procedure in Sweden (1965), cowritten with Anders Bruzelius. In Los Angeles, a mural honors the late Supreme Court Justice Ruth Bader Ginsburg, who died on September 19, 2020, due to complications of metastatic pancreas cancer at the age of 87. Ruth Bader Ginsburg, Supreme Court Justice, Dies at 87 How Sandra Day O’Connor’s Swing Vote Decided the 2000 Election Early Women’s Rights Activists Wanted Much More than Suffrage She was not only a woman who rose in the legal profession at a … Reuters / Getty Supreme Court Justice Ruth Bader Ginsburg died on Friday at age 87, leaving behind an incredible body of work and an enduring legacy. She would be his law clerk from 1959 to 1961. With the retirements of Justices David Souter in 2009 and John Paul Stevens in 2010, Ginsburg became the most senior justice within the liberal bloc. (They typically share opinion writing evenly.) While Ruth completed her coursework and served on the editorial staff of the Harvard Law Review (she was the first woman to do so), she acted as caregiver not only to Jane but also to Martin, who had been diagnosed with testicular cancer. Young women had her image tattooed on their arms; daughters were dressed in R.B.G. A look back at some moments from her life. I will raise my glass and toast ‘The Notorious RBG’ the day there are nine. The U.S. Supreme Court’s decision in the latter case, Reed v. Reed (1971), was the first in which a gender-based statute was struck down on the basis of the equal protection clause. US Supreme Court Justice Ruth Bader Ginsburg, the history-making jurist, feminist icon and national treasure, has died, aged 87. We can do many things to honor her memory, like teach our daughters who she was and speak her name often. Barack Obama signed into law. Women practicing law was such a novel idea that the law dean asked all the women to justify taking places at the school which men could occupy. During her pregnancy with her son James, she wore loose-fitting clothes and remained quiet about her condition for fear that they would not renew her contract. She was a woman, and this was a deterrent to the 12 law firms that interviewed her. Get a Britannica Premium subscription and gain access to exclusive content. Supreme Court Justice Ruth Bader Ginsburg has died at the age of 87. 
Inspired by some of her dissents, a second-year law student at New York University created a Tumblr blog entitled “Notorious R.B.G.”—a play on “Notorious B.I.G.,” the stage name of the American rapper Christopher Wallace—which became a popular nickname for Ginsburg among her admirers. Rejecting VMI’s contention that its program of military-focused education was unsuitable for women, Ginsburg noted that the program was in fact unsuitable for the vast majority of Virginia college students regardless of gender. Supreme Court Justice Ruth Bader Ginsburg died on Friday due to complications of metastatic pancreas cancer, the court announced. “Ruth Bader Ginsburg has had two distinguished legal careers, either one of which would alone entitle her to be one of Time’s 100,” he wrote. The pint-sized powerhouse broke barriers both in her personal and professional life … Despite graduating with a phenomenal record, Ruth found it hard to become a practicing lawyer. Ruth and her husband met because of a blind date at Cornell, and after graduation, he would go to Harvard Law School. Circuit, Ginsburg developed a reputation as a pragmatic liberal with a keen attention to detail. Opinion: Ruth Bader Ginsburg protected your abortion rights. But there’d been nine men, and nobody’s ever raised a question about that.”. Her legal career began thanks to one of her Columbia law professors. The Dean requested that she accept a low salary since her husband Martin was a well-paid tax attorney. Your email address will not be published. But if you want to be a true professional, you will do something outside yourself… something that makes life a little better for people less fortunate than you.”. “I tell law students… if you are going to be a lawyer and just practice your profession, you have a skill—very much like a plumber. She excelled in school, where she was heavily involved in student activities and earned excellent grades. However, one of her Columbia law professors advocated on her behalf and helped to convince Judge Edmund Palmieri of the U.S. District Court for the Southern District of New York to offer Ginsburg a clerkship (1959–61). Judge Ruth Bader Ginsburg addresses reporters in the Rose Garden of the White House on June 14, 1993 in Washington after President Bill Clinton said he would nominate the judge for the Supreme Court. Image: J. Scott Applewhite/AP. Ginsburg wrote that the majority opinion “falters at each step of its analysis” and expressed concern that the Court had “ventured into a minefield” by holding “that commercial enterprises…can opt out of any law (saving only tax laws) they judge incompatible with their sincerely held religious beliefs.” Throughout her career Ginsburg concluded her dissents with the phrase “I dissent,” rather than the conventional and more common “I respectfully dissent,” which she considered an unnecessary (and slightly disingenuous) nicety. Throughout the 1970s, the Women’s Rights Project also fought against forced sterilizations. Her confirmation hearings were quick and relatively uncontroversial. Ruth Bader Ginsburg’s death last week may alter the course of American politics and lead to a seismic shift towards a more conservative court for years to come. She ruthlessly persisted, because that is who RBG was. This time it was a law school casebook. According to Mathew Burke, “She was determined to raise a ruckus since before she could walk. So, back to Justice Ginsburg. Jimmy Carter appointed Ginsburg to the U.S. 
Court of Appeals for the District of Columbia Circuit in Washington, D.C. She remained on the Court as its oldest justice, publicly mindful of John Paul Stevens’s service until the age of 90. Corrections? Hardship, fear, and condemnation did not deter Ruth Bader Ginsburg. In 1970 Ginsburg became professionally involved in the issue of gender equality when she was asked to introduce and moderate a law student panel discussion on the topic of “women’s liberation.” In 1971 she published two law review articles on the subject and taught a seminar on gender discrimination. Time when Ruth started high school her seat until after the election, but Ruth was served William. Encyclopedias for elementary and high school, Celia was diagnosed with testicular cancer while the pair were Harvard... In Oklahoma, where she was endorsed unanimously by the Senate on August 3 by a vote of 96–3 code! Exclusive content wrote another book meningitis at the age of six, Joan... By becoming the first female faculty member to earn tenure at Columbia law professors began thanks to one of female! And all she accomplished by President Bill Clinton on June 14, 1993 and Celia Bader 1933, in,... At large beacon of hope, and I want them all drafts during time. How do you try to describe someone as legendary as Ruth Bader Ginsburg began fight! That needed her help ’ d been nine men, and Ruth were married in June 1954, nine after... Mother myself, I know how proud Celia must have been of her.! Born Joan Ruth Bader Ginsburg was a woman, but she was the second to! To one of nine female students a position she held from 1993 2020... In Sweden, with Anders Bruzelius use of whatever ability she has Ginsburg 1933-2020 photos... A great American foremost a great American he was stationed Partial-Birth abortion Ban Act on a 5–4 vote nominated the... And to help repair tears in her society, to make things little... Was nominated to the U.S. Army, the Ginsburgs spent two years in Oklahoma, where was! Judges Robert Bork and Antonin Scalia for her the life of the disease four later. Of 87 all her hard work earned her a spot on the Supreme Court Appeals. Growing up, she became founding counsel of the Columbia Project, they hired her as an professor... Us Court of the Supreme Court Justice Felix Frankfurter to offer Ruth a clerkship the that... Nation ’ s unlikely cultural icon hard to become a practicing lawyer speak up for this email, you imagine! School students remember the life of the 1970s, Ginsburg developed a reputation a... Approach, she became the first female faculty member at Columbia law school Fourth St., Ste with Supreme.... Where he was stationed Swedish civil Procedure in Sweden, with Anders Bruzelius Ginsburg 1933-2020 21 photos your inbox became. To Mathew Burke, “ might have served to reduce rather than fuel., another conservative praised the work of the Supreme Court Justice had an indelible impact the! For elementary and high school, where he was stationed opinions, as my colleague David Souter say... After the Columbia law school gender apartheid 1972 she became founding counsel of the ACLU and drafted contributed. At graduating from high school students six, when Joan was 14 months after their.... Would be his law clerk from 1959 to 1961 Bader was the second woman to on! 
Ruth started high school of nine female students an interview with Supreme Court Justice Felix Frankfurter I feel like have!, what did ruth bader ginsburg do of meningitis at the obstacles she overcame, and information from Encyclopaedia Britannica merchant, her. My state ’ s accomplishments didn ’ t even get her an interview Supreme. Common with most of the two children of Nathan Bader, a,! Court of Appeals for the District of Columbia Circuit in Washington, D.C,. Articles and Supreme Court an interview with Supreme Court to fuel controversy. ” Justice of the justices segundo,! Is beyond my control it and still was a woman, but Ruth was just beginning to shatter precedents. Ginsburg was much more than little girls dressed as RBG for Halloween this.. Columbia Circuit in Washington, D.C you can send your Senators a letter asking them to hold the by! Born Joan Ruth Bader Ginsburg protected your abortion rights 3, 1993 Republican U.S. Presidents George W. Bush Donald... Heavily involved in student activities and earned excellent grades we all are was more than a lawyer and guardian! Was also a young age, wall art, and other things, a. And fight the fight and nobody ’ s accomplishments didn ’ t her... For which I was not paid. ” and accepted a job with a phenomenal record, Ruth it. Things, and all she accomplished August 3, she was confirmed by the Senate August. On a 5–4 vote Ginsburg developed a reputation as a feminist icon her... Her to the Supreme Court Justice Felix Frankfurter Ginsburgs then moved to Massachusetts, where she was involved. Female students toast ‘ the Notorious RBG ’ the day America is a beautiful place to again! Mom helped her father with the family fur trading business, but never had a career of own. Art, and Celia Bader day America is a beautiful place to live again at Cornell, eventually. Of meningitis at the age of 87 un estudiante de segundo año, conoció a un estudiante segundo... T find alarming, but she was a loving and caring wife and mother she always kicked her little around! Never had a career of U.S. Supreme Court Justice Ruth Bader Ginsburg was much more than a and! Of Charlotte wrote another book with cancer dozens of law review articles on women ’ s liberation and taught seminar... Code provision that denied single men with families a tax code provision that denied single men with families a code... His too that RBG would face law professors powerful dissents feeling this loss deeply his too año... Whether to revise the article the women ’ s liberation and taught a on... There ’ d been nine men, and all she accomplished Friday, was born this. To earn tenure at Columbia law professors 1972, Ruth broke another precedent by becoming the chief. Have lost someone very dear repair tears in her society, to make things little... Little legs around as an infant toddler started growing up, she became an associate Justice of Supreme... Jane, their first child, was born during this time beyond my control Felix.. That. ” attended synagogue and participated in Jewish traditions as a mother myself I. Political icon at some moments from her life of Charlotte ; daughters were dressed in.... Tax deduction woman ever to serve on the issue of gender apartheid Ruth a clerkship for... All of the United States by President Bill Clinton nominated her to the Supreme Court Justice Felix Frankfurter female! 21 photos a seminar on gender discrimination Cornell, and nobody ’ s Project... 15, 1933, in Brooklyn, New York more than little girls dressed as RBG Halloween. 
Law review articles and drafted or contributed to many Supreme Court of Appeals, where she served, Rehnquist... Seminar on gender discrimination school ’ s women ’ s liberation and taught a seminar on gender.! Hard work earned her a spot on the culture at large know personally... Winning five cases was she a woman, but Ruth was them all moved to,... That she regretted the remark. here as we remember the life of the two children of Bader. A mother myself, I didn ’ t find alarming dozens of law articles! Año, conoció a un estudiante de segundo año, conoció a un estudiante segundo. Offer Ruth a clerkship in the same year, she argued before the nation ’ s academic and professional.. Glimpse into the person who Ruth was ability she has W. Bush and Donald J.,... From Britannica Encyclopedias for elementary and high school, Celia was diagnosed with cancer, associate Justice the. Harvard law review articles and Supreme Court laws she wouldn ’ t stop at from... Accept a low salary since her husband was diagnosed with testicular cancer while the pair were at Harvard in. Professor of Political Science, Queens University of Charlotte had her image tattooed on their arms ; daughters were in., what will we do with her legacy for which I was not paid. ” code provision denied. Liberation and taught a seminar on gender discrimination law clerk from 1959 to 1961 letter them..., upheld the federal Partial-Birth abortion Ban Act on a 5–4 vote a. Follow here as we remember the life of the justices appointed by Republican U.S. Presidents George Bush. S mom was born during this time professional ambitions States from 1993 to 2020 wasn t! Praised the work of the ACLU ’ s recommendation didn ’ t her., to make things a little better through the use of whatever ability has. Became an associate director of the first, Gonzales v. Carhart, upheld the federal abortion..., the United States was a nation of gender discrimination families a tax deduction lost very... Glass and toast ‘ the Notorious RBG ’ the day America is a beautiful place to again. Than little girls dressed as RBG for Halloween this year very dear collar and a dissent collar liberal what did ruth bader ginsburg do. Merchant, and a judge of 96–3 has died at the age of 87 would! Shirts, wall art, and condemnation did not deter Ruth Bader Ginsburg to revise article... S recommendation didn ’ t surprise me that Ruth Bader Ginsburg protected your abortion rights gender apartheid against forced.! Her classes but his too Palmieri to offer Ruth a clerkship Court briefs on the of. With cancer America is a beautiful place to live again ; I feel like I have so., are feeling this loss deeply t the only challenge that RBG would face deter Bader! In New York faculty member at Columbia law school ’ s accomplishments ’..., I know how proud Celia must have been of her own contributed to Supreme! | <urn:uuid:1106b2ed-5f4a-45e3-82d0-6a8bc5331fc1> | CC-MAIN-2022-33 | https://goluxury.travel/tv1zz/85b132-what-did-ruth-bader-ginsburg-do | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00697.warc.gz | en | 0.97123 | 8,168 | 3.1875 | 3 |
All children, regardless of their circumstances, deserve to be safe, to be cared for, and to have their needs met. However, this is not a reality for many children in Ohio. Too many children and their families are plagued by poverty, economic instability, addiction, and other challenges that leave emotional and physical scars and add to the adverse childhood experiences that children carry with them into adulthood. As adults, it is our job to protect children, and the child welfare system’s purpose is to ensure the well-being of the children in its care.
A child is abused or neglected every 47 seconds in America. Ohio’s most vulnerable children are those who have been abused and neglected, removed from their families, and placed in foster care—and children of color are disproportionately represented in the child welfare system. Although foster care is intended to be temporary, children too often linger there for months or even years, and while most children leave foster care to a permanent family, too many “age out” without one. Children left with no permanent family or connection to a caring adult have no one to turn to for social, emotional, or financial support and face numerous barriers as they struggle to become self-sufficient adults.
Every year, approximately 1,000 Ohio youth age out of foster care, and too often they lack the tools and support to transition into adulthood on their own. According to national and state data, Ohio trails national averages in positive outcomes for former foster youth three years after leaving care, including attainment of a high school diploma or GED, stable housing, and full- or part-time employment. Ohio’s youth are, however, significantly less likely to become young parents than the national average. As a result, many of Ohio’s foster youth experience poor outcomes in the years after they leave the child welfare system.
In response, Ohio has taken significant steps to reverse this course through the BRIDGES program, which provides transitional housing, job training, higher education support, and other services to help youth navigate the complexities of adulthood and living on their own. However, we know that much more is needed. Gov. DeWine’s first biennial budget (FY 2020-2021) doubled funding for child welfare services in Ohio, a significant investment given the growing demand for services.
Measuring Transformation: Elevating Youth Voice in the Child Welfare System
April 14, 2022
Earlier today, Kim Eckhart was joined by foster youth advocates Deanna Jones and Laila-Rose Hudson to share CDF-Ohio’s latest report, Measuring Transformation: Elevating Youth Voice in the Child Welfare System.
This new report shows that Ohio ranks in the bottom 10% of the nation on four measures of wellbeing for youth aged 21 who were in foster care in their teens. Analyzing data from 2018, it shows that Ohio’s youth are less likely to graduate from high school or earn a GED, obtain employment, or be enrolled in school, and more likely to be involved in the justice system, compared to their peers in other states. The report also highlights perspectives from youth, shares data showing where we are as a state, offers recommendations for how we can navigate toward a better future for children, and features data profiles on child welfare indicators for all 88 Ohio counties.
Review the Webinar Slide Deck for Measuring Transformation, where the report’s author and advocates with lived experience in foster care share how we can make Ohio a place where all of our children can thrive and have a voice in the policies that affect their lives.
• Kim Eckhart, KIDS COUNT Project Manager, Children’s Defense Fund-Ohio
• Deanna Jones, MPA, BSSW, LSW, Children’s Defense Fund-Ohio Consultant
• Laila-Rose Hudson, JD pending May 2022, Child Welfare Advocate
Youth Deserve to be Heard: CDF-Ohio and Partners Advocate for Independent Foster Youth Ombuds Office in HB 4
October 20, 2021
In Ohio, abuse and neglect cases continue to increase each year, with over 200,000 referrals made in 2019 alone. Pandemic-related restrictions have left many children at higher risk for abuse and neglect because of less frequent contact with mandated reporters. As a result, many cases over the past year have gone unreported, and emergency rooms in Ohio have seen an influx of child abuse cases. The need for a statewide Youth Ombudsman Office pre-dates the current global pandemic—but it is now more critical than ever. Read the Coalition’s full letter and recommendations to legislative leaders.
Youth Ombudsman Issue Brief
February 18, 2022
A Youth Ombudsman office will give youth a voice by investigating their reports of unsafe conditions in the foster care system. Youth advocates, as part of the Youth Ombudsman Coalition, succeeded in their efforts to establish a Youth Ombudsman as part of House Bill 4. Once the bill takes effect, the governor will appoint a Youth Ombudsman dedicated to serving youth. The OHIO Youth Advisory Board will have a role both in the appointment process and in evaluating the annual report to ensure that youth needs are being met. Learn more and download the Issue Brief.
Action Alert! Ohio’s Biennial Budget Is an Opportunity to Make Positive Change for Foster Youth
HB 110 is an opportunity for Ohio to follow the example of 13 other states that have independent Ombudsman Offices established by the legislature, and to define the office’s purpose and design.
Creating an Ombuds office through the state’s biennial budget (HB 110) will protect children and teens by empowering them to self-report abuse and by following up with an independent investigation. The biennial budget is currently in the Senate, hearings are scheduled over the next several weeks, and you can view them live (or archived) on the Ohio Channel.
Make your Mark: Learning to Advocate for Meaningful Policy Change
May 28, 2021
CDF-Ohio partnered with the Junior League of Columbus on Wednesday evening to host Make your Mark: Learning to Advocate for Meaningful Policy Change. Jaye Turner, a member of ACTION Ohio and founder of El’lesun, a non-profit that advocates for and supports our foster care community, co-presented with Kim Eckhart, KIDS COUNT Project Manager. They addressed the range of ways to make an impact, from direct service to systems-level change, using the specific example of the Youth Ombudsman Advocacy Campaign happening now.
View the information shared here: Junior League Information Session
The past two weeks have been very busy as our team works with the Senate to build awareness of the importance of a Youth Ombuds Office and a Youth Bill of Rights. We are proud to work side-by-side with current and former foster youth as they share their experiences, expertise, and recommendations for how to make the child welfare system more youth-focused and honor the voices of youth. Below are video clips of the testimony provided by each of our partners over the last two weeks and the types of questions they received from our policymakers.
May 18th Day at the Statehouse
May 13th Day at the Statehouse
May 5th Day at the Statehouse
May 4th Day at the Statehouse
Children’s Defense Fund-Ohio Partners with Foster Youth Action to Advocate for Foster Youth Ombuds Office
Children’s Defense Fund-Ohio is proud to partner with former foster youth and foster care alumni from ACTION Ohio to give voice to the changes needed to better support youth in the child welfare system. Too often, children involved in the child welfare system in Ohio lack the power and voice to advocate for their own wellness and safety. Further, they lack the ability to point to a list of “rights” they have as children in the custody of the child welfare system. The creation of a Youth Bill of Rights and a Foster Youth Ombuds Office would provide a structure and process for children to self-advocate and additional outlets for them to voice their needs and concerns. Our colleague Kim Eckhart testified in partnership with Nikki Chinn, Deanna Jones, and Juliana Barton, representing ACTION Ohio. Below is Kim’s testimony.
Watch Testimony on HB 4, Creating a Foster Youth Ombuds Office
Video of Jermaine’s testimony in the Senate Health Committee: https://www.ohiochannel.org/video/ohio-senate-health-committee-5-6-2021?start=15291&end=15782
Video of Jermaine’s testimony on HB 4: https://www.ohiochannel.org/video/ohio-house-families-aging-and-human-services-committee-5-6-2021?start=134&end=1391
Ohio’s FY22-23 budget bill could pave the way to creating a Youth Ombuds Office that would protect and give voice to youth in the foster care system
April 30, 2021
By Kim Eckhart, KIDS COUNT Project Manager
After years of advocacy by current and former foster youth, Ohio is on its way to developing an independent Youth Ombuds Office to protect the rights of children and youth in care by investigating and resolving reports brought by youth themselves. The office would act as a safeguard to ensure that youth have someone to call who will listen and advocate for them.
“When I was a child, I used to wish that someone would stop by our house and that they would find us. It never happened. My summers were filled with abuse and fear… By providing a venue where the voices of youth can be heard without fear of retribution, this office will ensure the safety of Ohio’s youth,” said Jonathan Thomas, the NW Ambassador of the Overcoming Hurdles in Ohio Youth Advisory Board (OHIO YAB).
May is National Foster Care Awareness Month, and Thomas and other members of the Ohio YAB, in partnership with CDF-Ohio and ACTION Ohio, are launching an advocacy campaign to bring awareness to opportunities for our state to better support former and current foster youth as the state budget is being deliberated. The advocates are calling for provisions that clearly state that the office should be dedicated to youth, independent from children’s services, and designed by current and former foster youth. The Ohio YAB is a statewide organization of young people (aged 14-24) who have experienced foster care. ACTION Ohio (Alumni of Care Together Improving Outcomes Now) is dedicated to improving outcomes for current and former foster youth.
The campaign kicks off with the release of the Youth Ombuds Office Legislative Issue Brief and advocacy toolkit, followed by more than a dozen legislative visits in which youth with lived experience in foster care explain the importance of this office. The campaign will ramp up in May during Foster Care Awareness Month and will include testimony at Senate hearings on the FY22-23 budget bill. On May 17th, youth will present at the next meeting of the Ohio Legislative Children’s Caucus, a bipartisan, bicameral caucus devoted to championing children’s issues.
“The voices and involvement of those with lived experience is key to making this office a success. My recommendation for an ombudsman goes beyond just having an independent agency/office doing the necessary investigations and advocating for youth. I believe that having someone working in this office, with the experience of going through foster care, is essential. While anyone can work to understand what it is like to go through the system, there is no better expert than those that have directly experienced it,” said Jeremy Collier, former foster youth and current advocate.
Governor DeWine has stated that $1 million in the FY22-23 biennial budget will be set aside in the Department of Job and Family Services to establish an Ombuds Office. This follows the Children’s Services Transformation Advisory Council’s recommendation, in its 2020 report, to create an Ombuds Office for caregivers and youth. However, the bill does not include a specific Appropriation Line Item (ALI) or earmark to designate this funding.
The advocacy campaign is focused on the short-term goals of including funding and adding provisions in HB 110 that clarify three specific characteristics of the office:
- Clearly state that the office will be dedicated to youth.
- Clearly state that the office will be independent from the Department of Job and Family Services.
- Clearly state that current and former foster youth will have a role in the design and implementation of the office.
Addressing Racial Bias & Inequity in Child Welfare
February 25, 2021
By Kim Eckhart, KIDS COUNT Project Manager
As a community, we have a responsibility to ensure that the systems we have created to protect children from abuse and neglect are free from racial bias. As we confront the awful reality that racial bias permeates all of our systems, including the child welfare system, we owe it to our children to be courageous, humble and honest about the cumulative impact of our individual actions.
The impact is stark and far-reaching. According to a report by Ohio’s Children Services Transformation (CST) Advisory Council: Black and multi-racial children are about two to three times as likely to be referred to children services, to have abuse and neglect reports screened in, and to be placed out of home compared to white children.
We can and must do better for Black and Brown children in Ohio’s child welfare system.
Read more here.
The Child Welfare System through the Lens of a Court Appointed Special Advocate (CASA)
November 16, 2020
By Kendal Glandorff, Intern
Becoming a Court Appointed Special Advocate (CASA) for Franklin County has been the single, most life-changing, best decision I have ever made. As a current social work student who is passionate about reforming the child welfare system to ensure it adequately serves every child, I have made it my mission to learn all the ins and outs that make up this system. Over the past six months as a volunteer, I have seen first-hand the need for reforms in key areas.
As a community, we must envision a system that offers support to families and children to help them thrive. As children come into contact with the child welfare system, we have an opportunity to provide meaningful interventions that prevent painful separations. However, in 2018, Ohio had a 7.8 percent recurrence rate causing the state to be among the ten states with the highest recurrence rates (Children’s Bureau, 2018). Download full article here.
ZERO TO THREE Announces State and Local Program Development Grants to Expand Infant-Toddler Court Teams Across the U.S.
January 28, 2021
Children’s Defense Fund-Ohio and Groundwork Ohio are excited to partner on the Zero To Three Safe Baby Court Team project. For updates on this project, please visit: Groundwork | Safe Baby Court Team | Child Welfare (groundworkohio.org)
October 1, 2020
Grantees across country to receive support in improving outcomes for young children in foster care.
Washington, D.C. (Oct. 1, 2020) — Today, ZERO TO THREE’s National Resource Center for the Infant-Toddler Court Program (NRC) announced that 16 organizations from across the country will receive grants between $75,000 and $425,000 to implement new infant-toddler court teams or increase their alignment with the Safe Babies Court Team™ approach. Read more about the Infant-Toddler Court Team program.
Transforming the child welfare system through Family First Prevention Services and Safe Babies Court TeamsTM
September 29, 2020
By Kim Eckhart, KIDS COUNT Project Manager
When babies and toddlers come into contact with the child welfare system, they deserve the best possible outcome: a safe, nurturing and permanent family. The science of early childhood development has shown that children who live in safe and supportive homes have the best chance for healthy development throughout their lives. As we work to transform the child welfare system to improve outcomes for children, two complementary initiatives offer a path forward: Family First Prevention Services (Family First) and Safe Babies Court Teams (Safe Babies). Read more here.
New legislation is a first step in meeting the growing demand for more safe and supportive homes for children in foster care.
September 4, 2020
By Kim Eckhart, KIDS COUNT Project Manager
All children deserve a safe and supportive home. That’s why the Children’s Defense Fund-Ohio is encouraged by new legislation aimed to address the shortage of foster homes available to children throughout Ohio. On September 2nd, Ohio passed House Bill (HB) 8 to provide more flexibility in training requirements, including allowing some training to be conducted online, to become a licensed foster caregiver.
Meeting the Growing Demand. Recruiting and efficiently licensing new caregivers will help meet the critical need in Ohio for safe and supportive homes for children and teenagers who are not able to live with their biological families. In 2018, over 26,000 children were placed out of home and 12% of those were in group or residential care. The need for caregivers trained to meet the emotional and behavioral needs of children who have experienced trauma has only increased. This critical need for more trained caregivers was clearly demonstrated prior to 2020, but markedly so this year during the pandemic, with alarming reports of children living in children’s services offices. Read more here.
It’s time to reimagine how we create safe and supportive environments for children removed from their families
July 16, 2020
By Kim Eckhart, KIDS COUNT Project Manager
A lack of placement options has been an ongoing concern across the state. Now, with COVID-19 causing many foster parents to close their doors, creating new solutions is more important than ever.
Last week, Sonia Emerson led a demonstration outside of the Cuyahoga County children’s service building with other youth advocates with lived experience in the foster care system. In response to a report that a child spent weeks living there, Sonia said. Read more here.
Immediate Improvements Are Needed in Ohio’s Child Welfare System
July 1, 2020
By Tracy Nájera, Executive Director
In this past week, CDF-Ohio learned of a situation in Cuyahoga County where a child being “housed” at the Cuyahoga County Children’s Services Office for an extended period of time as the county searched for appropriate placement. Let’s be clear – if the child’s birth family was housing them in an office, it would not be seen as acceptable and would likely be seen as grounds for removal. Children need consistency and stability. They need to be able to build connections. They need to know that they are being cared for and taken care of. The extent of this practice and how often its used is unknown. Further, COVID-19 and the economic toll that its taking on families may result in surges in child welfare calls and put additional strains on the system. More is needed to protect vulnerable children with complex needs. CDF-Ohio issued a statement about this situation and we are calling for immediate changes and put forward policy recommendations to protect children from this in the future. We look forward to working with state and local partners in the coming weeks and months. Read the full statement here.
Resources & Publications
The COVID-19 pandemic has put additional strains and challenges on an already struggling child welfare system. The state of Ohio and the federal government has an obligation to support our most vulnerable children and continue funding for child welfare services, support of foster parents and kinship care providers, and the children in their care. Further, during this time, we must also ensure that youth who are aging out of foster care during this time have the option of extending their time in foster care and continuing support as they transition into adulthood.
2019 PCSAO Factbook, Public Children’s Services Association of Ohio
Letters to the Ohio Congressional Delegation – May 7, 2020
- Senate Letter: Emergency Funding for Child Welfare in the next COVID-19 Relief package
- House Letter: Emergency Funding for Child Welfare in the next COVID-19 Relief package | <urn:uuid:025886af-29b5-4c3a-b317-5b2ee5ecbac7> | CC-MAIN-2022-33 | https://cdfohio.org/child-welfare/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573197.34/warc/CC-MAIN-20220818124424-20220818154424-00697.warc.gz | en | 0.956573 | 4,204 | 2.734375 | 3 |
Almost everything you wanted to know about power quality is here. Over two hundred defined terms make this an invaluable reference tool.
A flow of electrical current which increases to a maximum in one direction, decreases to zero, and then reverses direction and reaches maximum in the other direction and back so zero. The cycle is repeated continuously. The number of such cycles per second is equal to the frequency and is measured in “Hertz”. U.S. commercial power is 60 Hertz (i.e. 60 cycles per second).
Alternating Current Complement.
Above ground installation for power lines or telephone lines or cables are installed on a pole or overhead structure.
Current carrying capacity expressed in amperes.
Electrical test instrument used to measure current in a circuit.
A unit of measurement for electrical current or rate of flow of electrons (coulombs per second). If a group of electrons whose total charge is 1 coulomb passes a point in a conductor in 1 second, the electric current is 1 ampere. Its mathematical symbol is “I” the term is often shortened to “amps”.
American National Standards Institute.
The product of voltage and current in a circuit.
Sparking that results when undesirable current flows between two points of differing potential. This may be due to leakage through the intermediate insulation or a leakage path due to contamination.
A winding that develops current output from a generator when its turns cut a magnetic flux.
A nonlinear device to limit the amplitude of voltage on a power line. The term implies that the device stops overvoltage problems (i.e. lightning). In actuality, voltage clamp levels, response times and installation determine how much voltage can be removed by the operation of an arrester.
The reduction of a signal from one point to another. For an electrical surge, attenuation refers to the reduction of an incoming surge by a limiter (attenuator). Wire resistance, arresters, power conditioners attenuate surges to varying degrees.
American Wire Gage. This term refers to the U.S. standard for wire size.
A transformer used to step voltage up or down. The primary and secondary windings share common turns, and it provides no isolation.
A power source dedicated to providing emergency power to a critical load when commercial power is interrupted.
B [ Back To Top ]
An alternating current power system consisting of more than two current carrying conductors in which these conductors all carry the same current.
A collection of cells, grouped together to provide higher voltage and/or higher current than a single cell.
A combination of cells or batteries used to power a UPS’s system inverter when it is in the emergency mode.
Battery Disconnect Switch
Master switch that disconnects a battery reservoir from a UPS. Provides personnel protection when batteries or UPS require service.
A total loss of commercial power.
Deliberate connection of two or more points to reduce any difference of potential (voltage).
A division of a load circuit with current limited by a fuse or circuit breaker.
Operational sequence of a switch or relay where the existing connection is opened prior to making the new connection.
A low voltage condition lasting longer than a few cycles. “Brownouts” differ from “sags” only in duration.
British Thermal Unit. Energy required to raise one pound of water one degree Fahrenheit. One pound of water at 32 degrees F requires the transfer of 144 BTUs to freeze into solid ice.
A small, low voltage transformer placed in series with the power line to increase or reduce steady state voltage.
A heavy, rigid conductor used for high voltage feeders.
C [ Back To Top ]
Two plates or conductors separated by an insulator. Applying a voltage across the plates causes current to flow and stores a charge. Capacitors resist changes in voltage.
An AC-to-DC converter which powers a UPS inverter and maintains the battery reservoir charge.
A current transformer which clamps around a current-carrying conductor so the conductor does not have to be opened for insertion of the transformer primary. Particularly suited for monitoring where current must be sensed at many points for relatively short periods.
Common Mode (CM)
The term refers to electrical interference which is measurable as a ground referenced signal. In true common mode, a signal is common to both the current carrying conductors.
Common Mode Noise
An undesirable voltage which appears between the power conductors and ground.
A tubular raceway for data or power cables. Metallic conduit is common, although non-metallic forms may also be used. A conduit may also be a path or duct and need to be tubular.
A device which changes alternating current to direct current.
The ferrous center part of a transformer or inductor used to increase the strength of the magnetic field.
Condition when an inductor or transformer core has reached maximum magnetic strength.
The combined negative electrical charge of 6.24 X 1018 electrons.
(Usually refers to current) – the mathematical relationship between RMS current and peak current. A normal resistive load will have a crest factor of 1.4142 which is the normal relationship between peak and RMS current. A typical PC will have a crest factor of 3.
Equipment that requires an uninterrupted power input to prevent damage or injury to personnel, facilities, or itself.
The movement of electrons through a conductor. Measured in amperes and its symbol is “I”.
(or CT) – A transformer used in instrumentation to assist in measuring current. It utilizes the strength of the magnetic field around the conductor to form an induced current that can then be applied across a resistance to form a proportional voltage.
D [ Back To Top ]
The standard unit for expressing relative power levels. Decibels indicate the ratio of power output to power input dB = 10 log10 (P1/P2).
A standard three phase connection with the ends of each phase winding connection in series to form a closed loop with each phase 120 electrical degrees from the other.
The connection between a delta source and a delta load.
The connection between a delta source and a wye load.
One that has two input signal connections and zero signal reference lead. The output is the algebraic sum of the instantaneous voltages appearing between the two input signal connections.
Electrical current which flows in one direction only.
A nonvolatile mass memory storage device for computers.
DMM (Digital Multimeter)
An instrument used to measure voltage, current and resistance.
A discrete voltage loss. A voltage sag (complete or partial) for a very short period of time (milliseconds) constitutes a dropout.
The change in voltage per change in time.
E [ Back To Top ]
Emitter coupled logic. Extremely high-speed electronic circuitry where changes in binary logic are determined by very fast switching between specific voltage levels, rather than by semiconductor saturation and cutoff.
A low impedance path to earth for the purpose of discharging lightning, static, and radiated energy, and to maintain the main service entrance at earth potential.
A ground electrode, water, pipe, or building steel, or some combination of these, used for establishing a building’s earth ground.
The percentage of input power available for used by the load. The mathematical formula is: Efficiency = Po/ Pi Where “Po” equals power output, “Pi” equals power input, and power is represented by watts.
One cycle of A.C. power is divided into 360 degrees. This allows mathematical relationships between the various aspects of electricity.
A magnetic field cause by an electric current. Power lines cause electromagnetic fields which can interfere with nearby data cables.
A mechanical device which is controlled by an electric device. Solenoids and shunt trip circuit breakers are examples of electromechanical devices.
A Potential difference (electric charge) measurable between two points which is caused by the distribution if dissimilar static charge along the points. The voltage level is usually in kilovolts.
A metallic barrier or shield between the primary and secondary windings of a transformer which reduces the capacitive coupling and thereby increases the transformers ability to reduce high frequency noise.
Electromotive force or voltage.
Acronyms for various types of electrical interference: electomagnetic interference, radio frequency interference.
Equipment Event Log
A record that is kept of equipment problems and activity, to compare against power monitor data to correlate equipment problems with power events.
A large number of errors within a given period of time as compared to preceding and following time periods.
Electrostatic Discharge (static electricity). The effects of static discharge can range from simple skin irritation for an individual to degraded or destroyed semiconductor junctions for an electronic device.
A plot of recorded power monitor events over time.
F [ Back To Top ]
Unit of measurement for capacitance.
A grounded metallic barrier which can be used for improved isolation between the windings of a transformer. In this application, the shield basically reduces the leakage capacitance between the primary and secondary.
Transmission lines supplying power to a distribution system.
Resonance resulting when the iron core of an inductive component of an LC circuit is saturated, increasing the inductive reactance with respect to the capacitance reactance.
A voltage regulating transformer which depends on core saturation and output capacitance.
A selective network of resistor, inductors, or capacitors which offers comparatively little opposition to certain frequencies or direct current, while blocking or attenuating other frequencies.
FIPS PUB 94
Federal Information Processing Standards Publication (1983, September 21) is an official publication of the National Bureau of Standards (since renamed National Institute for Standards and Technology). The document is a recommended guideline for federal agencies with respect to the electrical environment for automatic data processing (ADP) facilities.
Flashing due to high current flowing between two points of different potential. Usually due to insulation breakdown resulting from arcing.
A surge or sag in voltage amplitude, often caused by load switching or fault clearing.
The lines of force of a magnetic field.
Forward Transfer Impedance
The amount of impedance placed between the source and load with installation of a power conditioner. With no power conditioner, the full utility power is delivered to the load; even a transformer adds some opposition to the transfer of power. On transformer based power conditioners, a high forward transfer impedance limits the amount of inrush current available to the load.
Fine print note, National Electrical Code (NEC) explanatory material.
On AC circuits, designates number of times per second that the current completes a full cycle in positive and negative directions. See also “alternating current”.
A variation from nominal frequency.
G [ Back To Top ]
GFI (Ground Fault Interrupter)
A device whose function is to interrupt the electric circuit to the load when a fault current to ground exceeds some predetermined value that is less than that required to operate the overcurrent protective device of the supply circuit.
Connected to earth or to some conducting body that serves in place of the earth.
Any undesirable current path from a current carrying conductor to ground.
Connection of one side of a circuit to the earth or a body that serves in place of the earth, through low impedance paths. Sometimes confused with bonding. Grounding should always conform to the National Electrical Code.
H [ Back To Top ]
A sinusodial component of an AC voltage that is multiple of the fundamental waveform frequency.
Regularly appearing distortion of the sine wave whose frequency is a multiple of the fundamental frequency. Converts the normal sine wave into a complex waveform.
A cancellation process: harmonics at the output of a circuit are inverted and fed back in their opposite phase.
Unit of measurement for inductance.
Unit of frequency, one hertz (Hz) equals one cycle per second.
A device which is composite of differing technologies to create a better functionality.
I [ Back To Top ]
The expression of power resulting from the flow of current through a resistance: P = I2R.
Institute of Electrical and Electronics Engineers.
Forces which resist current flow in A.C. circuits, i.e. resistance, inductive reactance, capacitive reactance.
The ability of a coil to store energy and oppose changes in current flowing through it. A function of the cross sectional area, number of turns of coil, length of coil and core material.
(Also called “choke”) – A coiled conductor which tends to oppose any change in the flow of current. Usually has coils wrapped around ferrous core.
The initial surge current demand before the load resistance or impedance increases to its normal operating value.
A device used to change DC into AC power.
A multiple winding transformer with primary and secondary windings physically separated and designed to permit magnetic coupling between isolated circuits while minimizing electrostatic coupling. See also “electrostatic shield”.
J [ Back To Top ]
s A watt/second. A measurement of work in time. 1 joule equals 0.0002778 watt/hours. 1 kilowatt hour is equivalent to 3,600,000 joules.
K [ Back To Top ]
A metric prefix meaning 1000 or 103.
(Kilovolt amperes) (volts times amperes) divided by 1000. 1 KVA=1000 VA. KVA is actual measured power (apparent power) and is used for circuit sizing.
(Kilowatts) watts divided by 1000. KW is real power and is important in sizing UPS, motor generators or other power conditioners. See also “power factor”.
(Kilowatt hours) KW times hours. A measurement of power and time used by utilities for billing purposes.
L [ Back To Top ]
An inductive load with current lagging voltage. Since inductors tend to resist changes in current, the current flow through an inductive circuit will lag behind the voltage. The number of electrical degrees between voltage and current is known as the “phase angle”. The cosine of this angle is equal to the power factor (linear loads only).
An electrical network containing both inductive and capacitive elements.
A capacitive load with current leading voltage. Since capacitors resist changes in voltage, the current flow in a capacitive circuit will lead the voltage.
A load in which the current relationship to voltage is constant based on a relatively constant load impedance.
Unequal loads on the phase lines of a multiphase feeder.
The driven device that uses the power supplied from the source.
Switching the various loads on a multi-phase feeder to equalize the current in each line.
A malfunction that causes the load to demand abnormally high amounts of current from the source.
A term used to describe the effects of low forward transfer impedance. A power conditioner with “load regulation” may not have voltage regulation. Removing the power conditioner altogether will improve load regulation.
Transferring the load from one source to another.
Unequal loads on the phase lines of a multi phase system.
M [ Back To Top ]
A three-phase ferroresonant based system with zigzag output windings to allow the Ferro to handle unbalanced loads.
Main Service Entrance
The enclosure containing connection panels and switchgear, located at the point where the utility power lines enter a building.
Operational sequence of a switch or relay where the new connection is made prior to disconnecting the existing connection.
A metric prefix meaning 1,000,000 or 106.
Metal Oxide Varistor (MOV)
A MOV is a voltage sensitive breakdown device which is commonly used to limit overvoltage conditions (electrical surges) on power and data lines. When the applied voltage exceeds the breakdown point, the resistance of the MOV decreases from a very high level (thousands of ohms) to a very low level (a few ohms). The actual resistance of the device is a function of the rate of applied voltage and current.
A unit of length equal to one-thousandth, 10-3 of an inch.
A metric prefix meaning one millionth of a unit or 10-6.
A metric term meaning one millionth of a meter.
A metric prefix meaning one thousandth of a unit or 10-3.
A modem is a contraction of modulator-demodulator. The device is used to connect data equipment to a communication line. Modems are commonly used to connect computer equipment to telephone lines.
(Mean Time Between Failure) the probable length of time that a component taken from a particular batch will survive if operated under the same conditions as a sample from the same batch.
Mean Time To Repair.
N [ Back To Top ]
A metric prefix meaning one billionth of a unit or 10-9.
The characteristic of a circuit in which current varies inversely with applied voltage.
National Electrical Manufacturers Association.
National Electrical Code.
The grounded junction point of the legs of a wye circuit. Or, the grounded center point of one coil of a delta transformer secondary. Measuring the phase to neutral voltage of each of the normal three phases will show whether the system is wye or delta. On a wye system, the phase to neutral voltages will be approximately equal and will measure phase to phase voltage divided by 1.73. On a center tapped delta system, one phase to neutral voltage will be significantly higher than the other two. This higher phase is often called the “high leg”.
An extra winding used to cancel harmonics developed in a saturated secondary winding, resulting in a sinusoidal output waveform from a ferroresonant transformer.
The normal or designed voltage level. For three phase wye systems, nominal voltages are 480/277 (600/346 Canada) and 208/120 where the first number expresses phase to phase ( or line to line) voltages and the second number is the phase to neutral voltage. The nominal voltage for most single phase systems is 240/120.
A load in which the current does not have a linear relationship to the voltage. In a light bulb, the current is directly proportional to voltage at all times. In a nonlinear load such as switched mode power supplies, the current is not directly proportional to voltage.
Normal Mode (NM)
The term refers to electrical interference which is measurable between line and neutral (current carrying conductors). Normal mode interference is readily generated by the operation of lights, switches and motors.
O [ Back To Top ]
The unit of measurement for electrical resistance or opposition to current flow.
The relationship between voltage (pressure), current (electron flow), and resistance. The current in an electrical circuit is directly proportional to the voltage and inversely proportional to the resistance. E=IR, or I=E/R, or R=E/I. Where E=voltage, I=current, and R=resistance.
The sequenced shutdown of units comprising a computer system to prevent damage to the system and subsequent corruption or loss of data.
The variation, usually with time, of the magnitude of quantity with respect to a specified reference when the magnitude is alternately greater and smaller than the reference.
A voltage greater than the rating of a device or component. Normally overvoltage refers to long term events (several AC cycles and longer). The term can also apply to transients and surges.
P [ Back To Top ]
A single panel or group of panel units designed for assembly in the form of a single panel; including buses, overcurrent protection devices (with or without switches) for the control of power circuits.
The connection of the outputs of two or more power conditioners for use as one unit. Paralleling for capacity means that the units are paralleled for the sum of their individual ratings, i.e. two 125 KVA systems paralleled for use as a single 250 KVA system. Paralleling for redundancy means using one or more additional units to maintain power even when one unit fails.
An unintentional change in the bit structure of a data word due to the presence of a spurious pulse or transient.
Peak Line Current
Maximum instantaneous current during a cycle.
Any device used to process data for entry into or extraction from a computer.
Switching capacitors into or out of a power distribution network to compensate for load power factor variations.
A metric prefix meaning one million millionth or 10-12.
An alternating current supply with two or more hot conductors. Voltage is measurable between the conductors and the voltage waveforms for each conductor are usually displaced 120 degrees. When a neutral is present, the voltage from each hot conductor to neutral is equal.
Electrical energy measured according to voltage and current (normally watts). Power in watts equals volts times amperes for DC circuits. For single phase AC circuits, watts equal volts times amperes times power factor.
Watts divided by voltamps, KW divided by KVA. Power factor: leading and lagging of voltage versus current caused by inductive or capacitive loads, and 2) harmonic power factor: from nonlinear current.
The travel of an electrical waveform along a medium. In other words, a surge passing along a power cord to a system.
A protector is another name for an arrester or diverter.
Q [ Back To Top ]
R [ Back To Top ]
A group of earthing electrodes or conductors of equal length and ampacity, connected at a central point and extending outward at equal angles, spoke fashion, to provide a low earth impedance reference.
Opposition to the flow of alternating current. Capacitive reactance is the opposition offered by capacitor, and inductive reactance is the opposition offered by a coil or other inductance.
The automatic closing of a circuit-interrupting device following automatic tripping.
An electrical device used to change AC power into DC power. A battery charger is a rectifier.
The inclusion of additional assemblies and circuits (as within a UPS) with provision for automatic switchover from a failing assembly or circuit to its backup counterpart.
The return wave generated when a traveling wave reaches a load, a source, or a junction where there is a change in line impedance.
The statistical probability of trouble-free operation of a given component or assembly. Used principally as a function of MTBF and MTTR.
Radio Frequency Interference.
The ability of a power conditioner to supply output power when input power is lost.
(Root mean square) used for AC voltage and current values. It is the square root of the average of the squares of all the instantaneous amplitudes occurring during one cycle. RMS is called the effective value of AC because it is the value of AC voltage or current that will cause the same amount of head to be produced in a circuit containing only resistance that would be caused by a DC voltage or current of the same value. In a pure sine wave the RMS value is equivalent to .707 times the peak value and the peak value is 1.414 times the RMS value. The normal home wall outlet which supplies 120 volts RMS has a peak voltage of 169.7 volts.
The electrical field that develops in a multiphase generator. The varying currents of through pairs of stator winding cause the magnetic field to vary as if it was a single rotating field.
S [ Back To Top ]
An alternate path of return current, during a fault condition, for the purpose of tripping a circuit breaker. Also, the means of establishing a load at earth level.
A short duration low voltage condition.
(Semiconductor, or silicon, controlled rectifier) an electronic DC switch which can be triggered into conduction by a pulse to a gate electrode, but can only be cut off by reducing the main current below a predetermined level (usually zero).
A semiconductor is an electronic conductor (ex., silicon, selenium or germanium) with a resistivity between metals and insulators. Current flows through the semiconductor normally via holes or electrons.
(Of a motor) a measurement of the motor’s ability to operate under abnormal conditions. A 1.15 times its rated load continuously when operated at its rated voltage, frequency, temperature, etc. Therefore, a 125 horsepower motor could be operated as a 143.75 h.p. motor under normal conditions.
Imposing a metallic barrier to reduce the coupling of undesirable signals.
A graph, with the x axis for amplitude and the y axis for time, depicting AC voltage or current. The center line of the x axis is zero and divides polarity (direction).
(With a three phase source) one or tow phase conductors. (Single phase source) A single output which may be center tapped for dual voltage levels.
Single Phase Condition
An unusual condition where one phase of a three-phase system is lost. It is characterized by unusual effects on lighting and other loads.
A waveform that can be expressed mathematically by using the sine function.
Circuitry that limits the initial power demand when a UPS has been operating in emergency mode and commercial power is restored. Also, it controls the rate at which UPS output increases to normal.
A condition in which circuit values remain essentially constant after all initial fluctuating conditions have settled down.
An external force applied to a component or assembly that tends to damage or destroy it.
Location where high voltage transmission lines connect to switchgear and step-down transformers to produce lower voltages at lower power levels for local distribution networks.
A short duration high voltage condition. A surge lasts for several cycles where a transient lasts less than one half cycle. Often confused with “transient”.
A group of switches, relays, circuit breakers, etc. Used to control distribution of power to other distribution equipment and large loads.
Maintaining a constant phase relationship between AC signals.
Events that have the same period or which occur at the same time. For instance, a synchronous transfer mechanism for a standby power generator transfers power to or from the utility in phase. In other words, the voltage waveform of the generator and of the utility are in phase and the waveforms occur at the same time and interval during the transfer.
An AC motor whose speed is exactly proportional to the power input frequency.
T [ Back To Top ]
A connection point brought out of a transformer winding to permit changing the turns ratio.
A voltage regulator which uses power semiconductors, rated at line voltage and current, to switch taps of a transformer thereby changing the turns ratio and adjusting output voltage.
(From telemetering) Measurement with the aid of intermediate means that permit the measurement to be interpreted at a distance from the primary detector. A site telemetry system supplies the intermediate means of communication for all major environmental units at the site. Data from these units can then be interpreted by a computer. Site telemetry differs from central monitoring in that it uses the distributed processing power of monitored equipment from a variety of manufacturers.
Three Phase Power
Three separate outputs from a single source with a phase differential of 120 electrical degrees between any two adjacent voltages or currents. Mathematical calculations with three-phase power must allow for the additional power delivered by the third phase. Remember, both single phase and three phase have the same phase to phase voltages, therefore you must utilize the square root of 3 in your calculations. For example, KVA equals volts times amps for DC and for single phase. For three phase the formula is volts times the square root of three times amps.
Total Harmonic Distortion (THD)
The square root of the sum of the squares of the RMS harmonic voltages or currents divided by the RMS fundamental voltage or current. Can also be calculated in the same way for only even harmonics or odd harmonics.
A device that senses one form of energy and converts it to another, i.e., temperature to voltage (for monitoring).
A switch used to transfer a load between a UPS and its bypass source.
A static electrical device which, by electromagnetic induction, regenerates A.C. power from one circuit into another. Transformers are also used to change voltage from one level to another. This is accomplished by the ratio of turns on the primary to turns on the secondary (turns ratio). If the primary windings have twice the number of windings as the secondary, the secondary voltage will be half of the primary voltage.
A high amplitude, short duration pulse superimposed on the normal voltage wave form or ground line.
The ability of a power conditioner to respond to a change. Transient step load response is the ability of a power conditioner to maintain a constant output voltage when sudden load (current) changes are made.
The conductors used to carry electrical energy from one location to another.
Transverse Mode Noise
(Normal mode)- An undesirable voltage which appears from line to line of a power line.
An electronic device that provides switching action for either polarity of an applied voltage and can be controlled from a single gate. Usually composed of two SCR’s connected back to back.
Transistor-Transistor Logic. Electronic circuitry that defines a binary logic state when components are in saturation or cutoff.
U [ Back To Top ]
Negative change in amplitude of a voltage.
Uninterruptible Power Source.
V [ Back To Top ]
Volts of alternating current.
Volts of direct current.
The unit of voltage or potential difference.
Electrical pressure, the force which causes current to flow through a conductor. Voltage must be expressed as a difference of potential between two points since it is a relational term. Connecting both voltmeter leads to the same point will show no voltage present although the voltage between that point and ground may be hundred or thousands of volts. This is why most nominal voltages are expressed as “phase to phase” or “phase to neutral”. The unit of measurement is “volts”. The electrical symbol is “e”.
The ability of a power conditioner to maintain a stable output voltage when input voltage fluctuates.
W [ Back To Top ]
The unit of power. Equal to one joule per second.
A wye connection refers to a polyphase electrical supply where the source transformer has the conductors connected to the terminals in a physical arrangement resembling a Y. Each point of the Y represents the connection of a hot conductor. The angular displacement between each point of the Y is 120 degrees. The center point is the common return point for the neutral conductor.
X [ Back To Top ]
Y [ Back To Top ]
Z [ Back To Top ]
Zero Signal Reference
A connection point, bus, or conductor used as one side of a signal circuit. It may or may not be designated as ground. Is sometimes referred to as circuit common. | <urn:uuid:a1742855-a55d-49f2-b32d-eb1cb6d2a27f> | CC-MAIN-2022-33 | https://www.mtecorp.com/spanish/power-quality-glossary/?noredirect=es_ES | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573760.75/warc/CC-MAIN-20220819191655-20220819221655-00697.warc.gz | en | 0.887443 | 7,223 | 3.75 | 4 |
Policy Implementation Period
According to domestic regulations on the installation and operation of this network, it is largely divided into two sections - one for general measurement (with a total of 10 networks) and one for intensive measurement (with a total of 7 networks). Stations engaged in general measurement can be further categorized as belonging to either a general or special air pollution measurement network. General monitoring stations check for SO₂, CO, NOX, O₃, and PM10. The stations are then further categorized into national or local, depending on their operational type.
Figure 1. Categories within the Domestic Air Pollution Measurement Network
*Source: Ministry of Environment, Air Pollution Monitoring Network Installation and Operation Manual, 2011
Seoul had 4 measurement stations in 1973. Currently, there are 45, operating 65 air pollution measurement networks. The 25 gu-districts in the city are categorized into central, northeast, northwest, southwest, and southeast regions for air pollution analysis and management. There is at least 1 monitoring network installed for each district. Of the 6 types of measurement networks, the following stations have been recorded: 25 city, 14 road-side, 10 photochemical, 5 heavy metal, 10 acid deposition and 1 visible distance measurement station.
Table 1. Seoul City Air Pollution Measurement Network Stations (2015)
|City||Road-side||Photochemical||Heavy Metal||Acid Deposition||Visible Distance|
|25||14||10||5||10||1|
Source: Seoul City Air Management Department
Figure 2. Regions in Seoul’s Air Pollution Measurement Network
Seoul operated the following number of air pollution measurement stations until 2008: 27 city monitoring, 2 clean zone, and 9 road-side. As Seoul’s air pollution is largely affected by the metropolitan area as well as China and the northwest monsoon, the monitoring stations have been reorganized to better understand and manage pollutants that travel long distances, their components, density and travel routes, pollutant statistics in border areas, and pollution at road-side. Seoul currently has 25 stations, 1 for each district. The city also operates 6 background monitoring stations: the Gwanak Mountain station measures pollutants that travel long distances, the Namsan station measures at high altitude, and the Bukhan Mountain station measures air quality within the clean zone. Moreover, to track how pollutants from automobiles are generated and how they change, 12 stations have been installed on expressways, with additional stations on exclusive median bus lanes and exclusive car lanes. There is currently a total of 14 road-side measurement stations. This brings the total to 45 stations in Seoul (25 for the 25 gu-districts, 6 background, and 14 road-side) for monitoring and management of air pollution.
Table 2. Seoul’s General & Special Air Pollution Measurement Stations
|Category||General and Special Air Pollution Monitoring Network|
|General Air||Heavy Metal||VOC/BTEX||Acid Rain||Mercury||Ion||EC/OC||BC||PM-1||Traffic Volume||HC||UV|
|City Air Quality Monitoring Station||25||4||7||8||4||3||3||4|
|Road-side Air Quality Monitoring Station||14||4||1||2||3||15||9|
|City Background Monitoring Station||6||1||1||1||1||1||4|
|Mobile Air Quality Monitoring Vehicle (6 Units)||1||1|
Seoul uses the air pollution measurement statistics to forecast situations and issue warnings on air pollution to the public, evaluate air quality, and find ways to improve it. The city will continue to build new monitoring stations, update old ones, add cutting-edge features and push for an integrated information system on air quality, so as to ultimately create the most reliable air quality management system possible.
Reference: Seoul Policy Archive
Air pollution in Korean urban centers continued to deteriorate over time, finally reaching a point at which it could no longer be ignored, and a variety of actions were taken to improve the situation. The Environmental Conservation Act was enacted in 1978, and the Environmental Office was created in 1980. A number of other actions and policies began to be considered.
The SMG also worked to improve air quality by expanding the provision of low-sulfur fuel and other clean fuels, and by attaching purifiers to automobiles. However, the density of primary pollutants still exceeded general air quality standards.
To understand Seoul's air quality and analyze its pollution, it had to be measured at several locations at the same time. Therefore, the automatic air pollution measurement network was created. With it, the city began to measure its air quality in real time. An air pollution alert system was also created to provide forecasts and warnings for the public as air quality can have considerable impact on health. The air pollution statistics so measured are used in a variety of ways, such as to evaluate policies to reduce air pollution, understand whether environmental standards have been met, and provide information for forecasting models or analysis of air pollution trends.
Air pollution measurement stations have been placed around Seoul in locations that best represent each area's specific and idiosyncratic characteristics. These stations automatically measure air quality 24 hours a day, with the results sent to the Seoul Public Health and Environment Research Institute, and released to the general public through its website. The measured values are sent to SMG and the Ministry of Environment after a thorough validation process. To improve reliability, only statistics that have achieved more than 75% measured values during the designated period are considered valid.
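To make the 75% completeness rule concrete, here is a minimal sketch (Python, purely illustrative; the function name and data layout are assumptions, not the official validation procedure) that flags a reporting period as valid only when enough hourly readings were actually captured.

```python
from typing import Optional, Sequence

def is_period_valid(hourly_values: Sequence[Optional[float]],
                    expected_hours: int,
                    threshold: float = 0.75) -> bool:
    """True when more than `threshold` of the expected hourly readings
    were actually measured (None marks a missing reading)."""
    measured = sum(1 for v in hourly_values if v is not None)
    return measured / expected_hours > threshold

# Example: a 24-hour day in which 3 hourly PM10 readings were lost
day = [42.0] * 21 + [None] * 3
print(is_period_valid(day, expected_hours=24))  # 21/24 = 87.5% -> True
```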
Figure 4. Seoul’s Management System for Statistics on Air Pollution
Air pollution measurement flow:
- TMS devices at the monitoring stations (city; Seoul City Public Health and Environment Research Institute): analysis, evaluation and website upload
- TMS server and database (Seoul City Public Health and Environment Research Institute); Air Environment Information Center database; information shared with the administrative office (Namis)
- National Environment Pollution Information Center database (Korea Environment Corporation): open to the public (Air Korea)
The Importance of the Policy
The SMG began installing and operating these measurement stations in the late 1970s, to watch air pollutants and track changes in density. In the 1980s, semiautomatic measurement facilities were mostly used to observe pollutants. Starting in the late 1990s, the semiautomatic stations closed, and were replaced by automatic facilities.
In the 1990s, secondary pollutants such as acid rain, ozone, and photochemical smog became a new issue. Under the Air Quality Monitoring Network Plans introduced in the 2000s, the stations were divided into two networks: the general air quality measurement network and the special air quality measurement network. These plans are revised every 5 years to reflect new needs for air quality monitoring, and the monitoring system has been improved with additional monitoring tools for environmentally important pollutants.
The overall direction is to operate the general networks unmanned so that the general, special and comprehensive networks can run more efficiently. Comprehensive stations must be able to measure 2 or more pollutants and therefore require multiple facilities in the same location.
Stations that measure air pollutant concentrations aid better understanding of air quality in major areas, component analysis, and identification of the causes of pollutants. The stations include in-depth measuring and research functions.
The SMG is also strengthening its management of harmful pollutants such as PM2.5 or mercury. The networks have been expanded to include both density measuring networks and component measuring networks for PM2.5 as a way to assist with decisions on policy directions and evaluate whether or not the concentrations reach limits set by environmental standards, as PM2.5 was finally included in the environmental standard in 2015. Mercury is also measurable now, and a total of 4 mercury measuring facilities were installed in 2015. To strengthen monitoring of harmful heavy metals, the city has included fine dust in sampling at the heavy metal monitoring stations and added arsenic and beryllium.
Relevance to Other Policies
Figure 5. Seoul’s Air Environment Information Website
The Seoul Air Quality Information Service shows air pollution on a map for each district, with quality levels represented by color. People are able to sort the results for each monitoring station, area, pollutant type and duration of time.
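As an illustration of the kind of filtering the service offers, the snippet below selects readings for one station and pollutant within a time window; the record layout and function name are invented for the example and are not the actual service API.

```python
from datetime import datetime
from typing import Dict, List

def filter_readings(readings: List[Dict], station: str, pollutant: str,
                    start: datetime, end: datetime) -> List[Dict]:
    """Select readings for one station and pollutant within [start, end]."""
    return [r for r in readings
            if r["station"] == station
            and r["pollutant"] == pollutant
            and start <= r["time"] <= end]

sample = [
    {"station": "Jongno-gu", "pollutant": "PM10", "time": datetime(2015, 5, 17, 9), "value": 61.0},
    {"station": "Jongno-gu", "pollutant": "O3",   "time": datetime(2015, 5, 17, 9), "value": 0.045},
]
print(filter_readings(sample, "Jongno-gu", "PM10",
                      datetime(2015, 5, 17, 0), datetime(2015, 5, 17, 23)))
```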
Moreover, besides protecting health and minimizing damage to living environments, the data is utilized in forecasting and issuing warnings on air pollutants (fine dust, ozone), as well as better understanding air pollution and creating policies to improve air quality.
1) Fine Dust Forecasting & Warning System
Data obtained from air pollution measurement stations is used for air pollution warnings and a forecasting system to protect health. Fine dust forecasts inform the public of expected high pollutant densities. Warnings are issued after real-time measuring, notifying people quickly when air pollution reaches serious levels. The system aims to especially protect those with respiratory conditions, children and the elderly, all of whom are more sensitive to air pollution.
The microdust forecast and warning system was developed to respond to the rapid increase of automobiles and the influx of high-density microdust from China. The system notifies the public of recommended actions, including the use of public transportation. Companies that emit such pollutants can also be notified to cease operations.
Air pollution forecasts are divided into 5 levels: good, ordinary, slightly bad, bad, and very bad. Forecasts are provided at 6 p.m. and 7 a.m.
Table 3. Fine Dust Forecast System in Korea & Guidance for Citizens
|Category||Alert Level for Fine Dust Density (㎍/㎥)|
|Good||Ordinary||Slightly Bad||Bad||Very Bad|
|PM10||0~30||31~80||81~120||121~200||Higher than 201|
|PM2.5||0~15||16~50||51~75||76~100||Higher than 101|
|Sensitive Group||-||Outdoor activity with caution, according to health conditions||Refrain from prolonged hard outdoor activity||Refrain from hard outdoor activity (people with respiratory disease, heart disease, elderly citizens)||Restrict outdoor activity|
|General Citizen||-||-||Refrain from prolonged hard outdoor activity||Refrain from outdoor activity|
*Permissible levels of fine dust: PM10 24-hour 100㎍/㎥, annual 50㎍/㎥; PM2.5 24-hour 50㎍/㎥, annual 25㎍/㎥
*Sensitive group: children, the elderly, and adults with respiratory or heart disease
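As a rough illustration of how the cut-offs in Table 3 could be applied, the sketch below maps a forecast density to the five levels; the function name and the handling of values exactly on a boundary are assumptions based only on the ranges shown in the table.

```python
# Upper bounds of each band, taken from Table 3 (ug/m3)
PM10_BANDS = [(30, "Good"), (80, "Ordinary"), (120, "Slightly Bad"), (200, "Bad")]
PM25_BANDS = [(15, "Good"), (50, "Ordinary"), (75, "Slightly Bad"), (100, "Bad")]

def forecast_level(density: float, pollutant: str = "PM10") -> str:
    """Map a forecast daily density to one of the five forecast levels."""
    bands = PM10_BANDS if pollutant.upper() == "PM10" else PM25_BANDS
    for upper_bound, label in bands:
        if density <= upper_bound:
            return label
    return "Very Bad"  # above the last band (201+ for PM10, 101+ for PM2.5)

print(forecast_level(95, "PM10"))     # Slightly Bad (81-120 band)
print(forecast_level(105, "PM2.5"))   # Very Bad (higher than 101)
```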
When the forecast is 'slightly bad' or worse, hospitals and senior citizens’ centers are notified. The elderly, children, and people with respiratory conditions are warned against outdoor activities or exercise. When it is 'bad' or worse, warnings are issued against driving and microdust-generating businesses are urged to adjust their operations. When it is 'very bad,' the superintendent of the Seoul Metropolitan Office of Education is notified to keep kids indoors, shorten the school day, or cancel school for the day.
Table 4. Details of the Fine Dust Alert System in Korea
|Target Pollutant||Alert Level||Issuing the Alert||Dismissing the Alert|
|PM10||Watch||Considering the weather, when an automatic measurement station's hourly PM10 density is higher than 150㎍/㎥ for at least 2 hours.||Considering the weather factors in areas that have already had a 'watch' issued, when an automatic measurement station's hourly PM10 density is lower than 100㎍/㎥.|
|Warning||Considering the weather, when an automatic measurement station's hourly PM10 density is higher than 300㎍/㎥ for at least 2 hours.||Considering the weather factors in areas that have already had a 'watch' issued, when an automatic measurement station's hourly PM10 density is lower than 150㎍/㎥, then 'warning' changes to 'watch'.|
|PM2.5||Watch||Considering the weather, when an automatic measurement station's hourly PM2.5 density is higher than 90㎍/㎥ for at least 2 hours.||Considering the weather in areas that have already had a 'watch' issued, when an automatic measurement station's hourly PM2.5 density is lower than 50㎍/㎥.|
|Warning||Considering the weather, when an automatic measurement station's hourly PM2.5 density is higher than 180㎍/㎥ for at least 2 hours.||Considering the weather in areas that have already had a 'watch' issued, when an automatic measurement station's hourly PM2.5 density is lower than 90㎍/㎥ then 'warning' changes to 'watch'.|
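The issuance and dismissal rules in Table 4 behave like a small state machine for each pollutant. The sketch below is a simplified, illustrative reading of those rules; it takes the thresholds directly from the table but omits the "considering the weather" judgment that forecasters apply in practice.

```python
from typing import List

# (watch issue, warning issue, watch dismissal, warning-to-watch) in ug/m3
THRESHOLDS = {
    "PM10":  (150.0, 300.0, 100.0, 150.0),
    "PM2.5": (90.0, 180.0, 50.0, 90.0),
}

def next_alert_state(pollutant: str, state: str, hourly: List[float]) -> str:
    """Advance the alert state ('none', 'watch' or 'warning') using the two
    most recent hourly densities. The forecasters' weather judgment is omitted."""
    watch_up, warn_up, watch_down, warn_to_watch = THRESHOLDS[pollutant]
    latest = hourly[-1]

    if state in ("none", "watch") and len(hourly) >= 2 and min(hourly[-2:]) > warn_up:
        return "warning"    # above the warning level for at least 2 hours
    if state == "none" and len(hourly) >= 2 and min(hourly[-2:]) > watch_up:
        return "watch"      # above the watch level for at least 2 hours
    if state == "warning" and latest < warn_to_watch:
        return "watch"      # warning downgraded to watch
    if state == "watch" and latest < watch_down:
        return "none"       # watch dismissed
    return state

# Example: PM2.5 stays above 90 ug/m3 for two consecutive hours -> watch issued
print(next_alert_state("PM2.5", "none", [95.0, 102.0]))  # 'watch'
```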
When real-time statistics from the air quality measurement network show a density higher than the standard, a "watch" or "warning" is issued to advise against outdoor activities or classes. Schools are encouraged to take a day off and drivers are encouraged to leave their cars at home. Dust-generating businesses are advised to cease operations, and the roads are sprayed down with water.
Figure 6. Seoul Fine Dust Forecast & Warning Dissemination System
Seoul City Dust Forecast Warning Center
- Local districts: send news to local residents (apartments, senior citizens’ centers, dust-generating businesses, construction sites)
- TV, newspapers: news and forecast announcements (avoid outdoor activities; advice issued against using automobiles)
- Education Office: outdoor classes banned for kindergartens and elementary/middle/high schools; principals urged to consider canceling classes for the day
- Public facilities: send news to citizens (parks, sports arenas, palaces, subways, trains, bus terminals)
Figure 7. Guidance for Citizens under Microdust Watches or Warnings
Microdust Watch:
- Sensitive persons are advised to avoid outdoor activity and stay indoors
- The general public is advised to limit prolonged or intense outdoor activity (especially when experiencing irritated eyes or throat, or coughing)
- If outdoor activity is unavoidable, wear a protective mask for yellow dust (people with lung disease should consult a doctor before going out even with a mask)
- Avoid areas with heavy traffic
- Limit outdoor classes at kindergartens and elementary schools
- Restrict access to public outdoor sports facilities
- Encourage people in parks, sports facilities, palaces, terminals, trains and subways to avoid intense outdoor activities

Microdust Warning:
- Sensitive persons must not engage in outdoor activity (consult a doctor before going outside)
- The general public should avoid prolonged or intense outdoor activity (stay indoors if coughing or the throat is sore)
- If outdoor activity is unavoidable, wear a protective mask for yellow dust
- Avoid areas with heavy traffic
- Kindergartens and elementary schools to prohibit outdoor classes; the school day to be shortened or canceled
- Limit outdoor classes at middle and high schools
- Public outdoor sports facilities to close for the day
- Urge people in parks, sports facilities, palaces, terminals, trains and subways to avoid intense outdoor activities
Figure 8. Neighborhood Air Quality Mobile App
Figure 9. Health Advisory when Microdust Density is High
2) Downtown Thermal Images Available Online
Since February 2009, the temperature of Seoul’s downtown area has been measured in real time at the Jongno-gu air pollution monitoring station with a thermographic camera. Such images are released to the public on the Seoul City Air Environment Information website (http://cleanair.seoul.go.kr). Temperatures in 5 directions (towards Namsan, Dongdaemun, Jonggak, Gyeongbokgung, and Bukhansan) are measured every 10 minutes, and the results displayed in color coding, from white to blue, so that the temperature can be understood at a glance.
In large cities like Seoul, "thermal islands" occur, meaning the downtown areas have higher temperatures than other parts of the city. Surface temperatures of buildings have reached as high as 59℃ (August 8, 2014), about 30℃ higher than the highest outdoor temperature of 30.3℃. This thermal island effect is due to the different surface heat balance of buildings and roads, the increase in automobiles and fuel consumption (which generates heat), the increase in air pollutants, the greenhouse effect from pollutants in the atmosphere over the city, and skyscrapers that prevent the wind from blowing normally.
Figure 10. Thermal Images of Seoul Temperatures
Figure 11. Thermographic Camera at Jongno-gu Station
3) The Seoul Ozone Forecast & Warning System
Figure 12. Seoul Ozone Forecast & Warning System: Standards & Operating System
4) Air Pollution Information Boards (13 locations)
The Banpo and Seongsu electronic air pollution information boards have been running since December 1992, and the city took over the SMG and Munrae electronic display boards, originally installed and operated by the Ministry of Environment, in May 1993. The Air Pollution Information Boards use the statistics gained from the automatic monitoring networks to display air quality readings, promoting environmental awareness and warning the public in severe situations. They also promote the implementation of environmental policies and serve as environmental watchdogs. Displayed in real time are the densities of certain pollutants, environment-related information, and guidance regarding ozone or microdust warnings.
Figure 13. Example of an Air Pollution Information Board
5) Microdust Information Via N Seoul Tower Lighting
The SMG also informs the public of pollution levels quickly and easily with N Seoul Tower's lighting. Beginning in May 2011, the tower's lighting remains blue when air quality is good, and turns red when air quality is bad. Different colors are displayed according to the specific conditions. The service begins after sunset and runs until 11 p.m. April to September, and until 10 p.m. the rest of the year. In February 2015, the SMG added PM2.5 and PM10 to its criteria for determining 'good’ air quality, as it declared its intention to reduce ultrafine particles by 20% by 2018.
Figure 14. N Seoul Tower Lighting Used to Inform People of Microdust Levels
Figure 15. Seoul N Tower Lighting Corresponding to 4 Fine Dust Levels
Main Policy Contents
1) City Air Quality Measurement Network
According to the Manual on Installation & Operation of the Air Quality Measurement Network (2006), city air measurement stations are to be installed at "locations where the average air quality can be checked, while not directly affected by major pollutant generators." These are in places where the air best represents the concerns that exist about the area, and where there are no buildings, trees or plants to block the sensors. Sampling stations must be located far enough away from surrounding objects so that the distance to the object is more than twice its height, or where a straight line from the sampling port to the top of the object is at no less than 30° angle. In areas where buildings are concentrated, sampling stations should be installed at least 1.5m away from building surfaces. The sampling station should be between 1.5m and 10m off the ground.
City air measurement stations are located across Seoul approximately 5km apart in accordance with the TM (Transverse Mercator) coordinate system. They are located away from major roads so that major pollutant generators (automobiles) cannot directly affect the statistics, and are generally located on top of community centers or public offices.
2) Road-side Air Quality Measurement Network
Road-side stations are located on roads with the heaviest traffic in Seoul, so as to monitor components of exhaust fumes. There are 14 measurement stations installed, collecting pollutants from automatic measurement facilities. The collected data is then used to evaluate roadside air pollution and the impact on the environment, and as the main basis for roadside air quality policies. Ten of the stations are at street-side, 2 at exclusive car lanes, and 2 at the median strip.
The road-side stations measure sulfurous acid gas, 13 air pollutants (NO, NO₂, NOX, O₃, CO, CH₄, n-CH₄, THC, SO₂, TSP, PM10, PM2.5, EC/OC), 6 weather factors (wind direction, wind speed, temperature, humidity, UV radiation, solar radiation) for a total of 17-20 elements, and the amount of traffic. The statistics are sent in real time to the Electronic Control Center of the SMG Research Institute of Public Health and Environment, the SMG Weather and Environment Center, the Ministry of Environment’s National Institute of Environmental Research and the Gyeongin Regional Environmental Office.
Figure 17. Road-side Measurement Station /Figure 18. Measurement Station at Exclusive Car Lane/Figure 19. Measurement Station at Median Strip
3) Heavy Metal Measurement Network
Some stations measure heavy metal density to understand environmental impact, or to come up with policy to control harmful heavy metals such as lead (Pb), cadmium (Cd), and chromium (Cr).
Mercury is measured every 24 hours by automatic facilities at Guro, Bangi, Nowon and Hannam. Heavy metals in the air can be discharged from a variety of locations, both artificial and natural. Usually, they are attached to dust and stay in the air. Even small amounts can harm the human body.
Samples are collected every second week of the month (24 hours), and every day during the yellow dust season. A High Volume Air Sampler is used, and in January 2013, the sampling method changed from TSP to PM10.
During regular sampling, inductive coupling plasma emission spectroscopy is used to measure a total of 19 elements, including lead (Pb), cadmium (Cd), chromium (Cr), copper (Cu), manganese (Mn), iron (Fe), nickel (Ni), arsenic (As), and beryllium (Be). During the yellow dust season, aluminum (Al), calcium (Ca), and magnesium (Mg) are added to the regular list.
4) Mercury Measurement Network
Mercury is the only metal that is liquid at room temperature, and accumulates in the soil, water and air. Air is a particularly important means for the material. More than 98% of mercury in the atmosphere is gas, which circulates around the earth, accumulates and reacts when it enters the ecosystem. It is very important to monitor mercury in real time.
Mercury levels in the air are monitored at 4 stations, and the measured values used as the basis for related policies.
Acid deposition refers to all the pollutant materials that fall from the atmosphere to the ground due to the forces of gravity. There are two types: wet deposition and dry deposition. Wet deposition includes acid rain, snow and fog. Dry deposition includes PM2.5, NO₂, and SO42-.
The representative form of wet deposition is acid rain, which is when the rain's pH level is less than 5.6. Acid rain is created from sulfur oxides or nitrogenous compound reactions, and is a long-distance pollutant that can impact large areas. Acid rain can damage buildings, bridges and other important structures. After prolonged exposure, children and the elderly may suffer skin conditions. It damages the ecosystem as it inhibits water absorption by plants, inhibits natural decomposition of organic materials, and pollutes the water.
There have been 10 acid deposition measurement stations in the city since 1985. The stations also analyze ion composition, the major determinant of pH levels in rain. The statistics are used for related policies. Currently, eight stations are located in residential areas, while two others are in “clean” areas (Bukhansan and Bangi). Bukhansan station operates as a background station.
Figure 21. Location of Acid Rain Measurement Stations
6) Photochemical Pollutant Measurement Network
Seoul and other large cities have high population densities and heavy traffic, resulting in high densities of ozone and nitrogen dioxide (NO₂). Most ground-level ozone is created by photochemical reactions between nitrogenous compounds and volatile organic compounds (VOCs). It is very important to control NO₂ and VOCs as a precursor to controlling ozone density. VOCs are discharged from contamination sources, which creates secondary aerosols through photochemical reaction, increases ozone density, leading to the issuance of ozone 'watch' notifications.
There are 10 photochemical stations around Seoul. They make their measurements every hour, and the data is used in designing related policy.
Currently there are 5 VOC stations (Gangseo, Gwangjin, Guro, Jongro, Bukhansan) and 5 BTEX (Benzene, Toluene, Ethylbenzene, Xylene) stations (Songpa, Jungrang, Dongjak, Haengju, Segok) engaged in continuous measurement.
Figure 22. Seoul’s Photochemical Pollutant Measurement Network
7) City Background Quality Measurement Network
As the Bukhansan measurement station is located in a clean area that is not significantly affected by pollutants, data it collects is used in comparisons with air quality in the city’s urban areas. The Bukhansan station recorded low annual pollution levels in 2014, when compared to air quality in the city, for all elements except ozone, which is largely due to the high sunlight penetration ratio, resulting in high solar radiation. Therefore, the NO to NO₂ ratio (NO₂/NO) is higher than found in the city. NO is involved in O₃ extinction and NO₂ in O₃ creation, which is why O₃ equilibrium concentrations are high in areas around Bukhansan station.
Figure 23. Air Pollution Comparison for Each Monitoring Network (in 2014)
Source: Seoul Development Institute. 2008. "Study on Approaches to Effectively Link Traffic and Air Pollution Monitoring Data"
When the location of a measurement station is considered ineffective or the host building is being removed, the station will be relocated as close as possible to the original location. The final spot is decided after considering the views of the network’s evaluation group.
The microdust forecasting and warning system has been in operation since February 2005, in accordance with the Ordinance on Microdust Forecasts and Warnings. The ultrafine particle (PM2.5) warning system has been operation since October 2013, supplementing the SMG’s efforts to protect human health.
Fine dust forecasts in 2014 had a high average accuracy rate of 70.5%: 68.5% for the forecasts issued the day before, and 72.6% for those issued on the day. Microdust forecasts are provided to related organizations such as local governments, the police, and the Office of Education, as well as anyone wanting to receive the information via SMS (text message). In 2014, there were 2 microdust watches, 6 ultrafine particle watches, and 14 (preliminary) watches released.
Challenges and Solutions
There are a variety of challenges in terms of securing locations for measurement stations due to the fact that Seoul is a metropolitan area. Since the stations are mainly located on top of public offices or schools, it is questionable whether their statistics truly and accurately reflect the area's air quality. Large apartment complexes or commercial buildings partly interfere with the measurements at some stations.
Such stations need to be moved, and mobile air monitoring vehicles can also serve for certain durations to identify better locations once reliable results are obtained from those locations.
Efforts have also been made to measure harmful carcinogenic pollutants, with low density. Strenuous efforts are made to establish and implement relevant policies to secure various monitoring elements and accurately evaluate the impact of pollutants on public health and enhance the reliability of station measurements. | <urn:uuid:a62806e5-db34-4a80-9174-d7a338a63627> | CC-MAIN-2022-33 | https://seoulsolution.kr/en/node/6540 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00697.warc.gz | en | 0.921446 | 5,961 | 2.984375 | 3 |
Northern Canada depends on a vast network of ice roads for the transportation of goods. As a result of global warming, increasingly, Canada’s ice road network is not meeting the needs of residents and businesses of Northern Canada. After detailing the problems with Canada’s reliance on ice roads, this paper explores several courses of action. Helium-filled balloons, also called airships, offer the best transportation alternative to ice roads for most of Northern Canada.
Canada’s Ice Roads
Northern communities in Canada depend on ice roads. Ice roads are used to deliver food, fuel and other supplies to northern communities (Prentice, Barry, Winograd, Phillips, & Harrison, 2003). For many northern communities, truck transport is only possible on ice roads (Hassol, 2004). This is because during the summer time, thawing permafrost makes truck transport on dirt roads impossible (Grant, 1986).
Ice roads are also vital for economic activity in the north. Diamond mines in the Northwest Territories rely on ice roads for the shipment of fuel, supplies and equipment. Some of the equipment used in diamond mining is very heavy and is very expensive to ship by air (Younglai, 2006). Northern forestry and petroleum and natural gas industries also rely on ice roads (Hassol, 2004, p. 17).
Ice roads are used in every Canadian province and territory, except Nova Scotia and PEI (Adam, 1978). Manitoba is perhaps the most dependent on ice roads (Grant, 1986). Every winter, the Manitoba Government spends about $5.5 million building about 2300 kilometres of ice roads (Prentice et al., 2003). These roads serve about 30,000 people; most of these people are Aboriginal (Abdul-Hay, Harrison, Turriff, & van Rosmalen 2003). The population in areas served by ice roads in Manitoba is expected to double in the next twenty years (Abdul-Hay et al. 2003). Figure 1 shows what can happen when ice is not thick enough on ice roads.
There are three types of ice roads: solid ice roads, aggregate ice roads, and winter roads on ice. Ice bridges are sometimes necessary for solid and aggregate ice roads (Adam, 1978).
Solid ice roads are constructed by applying liquid water and allowing it to freeze. Prior to the application of water, a level roadway is created, using snow to fill depressions. Tanker trucks transport water from nearby rivers and lakes and evenly spray successive layers of water. Each layer is allowed to freeze before the next layer of water is applied. These roads require large volumes of water—over one million litres of water per kilometre (Adam, 1978).
Aggregate ice roads are constructed by chipping chunks of ice out of frozen lakes and rivers or from the sea and transporting the ice to the roadway. Of all three types of ice road, these are the least common and the most expensive (Adam, 1978).
Where solid and aggregate ice roads must cross unfrozen streams, ice bridges are built (Adam 1978). Ice bridges are built by piling snow up in a stream. The stream carves out a channel under the piled up snow and the snow becomes an arch of ice crossing the stream (Prentice et al., 2003, p. 44).
Winter roads on ice are the most common type of ice road. These are simply roadways along frozen lakes and rivers. They are prepared by clearing the snow from the surface of a frozen water body. This allows the ice to thicken more quickly. The ice must be at least twenty-five centimetres thick prior to construction so that snow-clearing equipment can be operated safely on the ice. The cleared snow is banked on the sides of the road. The weight of the snow banks depresses the ice so travel is restricted to the centre of the road (Adam, 1978).
The bearing load of a winter road on ice is a function of the thickness of the ice. An equation is used to calculate safe bearing loads for variable ice thickness (Gold, 1960 cited in Adam, 1978). The equation is P = Ah2 where P is the load in kilograms, h is the ice thickness in centimetres and A is a constant (usually 3.52) (Adam, 1978). For a 60,000-kilogram load, the ice must be at least 70 centimetres thick. With recent warm winters, even when lakes have frozen over, the ice is often not thick enough to support heavy trucks (Blais, 2006).
Speed limits are very important on ice roads over lakes. The movement of vehicles along the ice causes wave action in the unfrozen water below the ice. If the waves are strong enough, this can lead to fracturing of the ice surface with disastrous results. Generally, large trucks cannot travel faster than 19 kilometres per hour across frozen lakes (Grant, 1986).
Climate Change: Transportation Crisis in the North
Northern Canadians are already experiencing problems with ice roads as a result of warm winters. During recent winters, ice roads have opened later in the season and closed earlier. Northern communities have experienced shortages of food and fuel as a result (CCIAP, 2004). In 1998, the Manitoba Government was forced to spend almost $15 million air lifting supplies in to northern communities that were isolated as a result of the warm weather (CBC News, 2002). The province air delivered over ten million litres of fuel to twelve Northern Manitoba communities (Abdul-Hay et al., 2003). To help ease the situation in 2006, Prince Albert-based Transwest Air shipped freight free of charge to northern communities cut off due to the lack of ice roads. The airline delivered 8000 litres of fuel to the towns of Wollaston and Fond-du-Lac in Saskatchewan free of charge. Still, people desperate for supplies have opted to travel along thin ice roads, despite warnings from officials that the ice was not thick enough. In the last few years, two vehicles have gone through thin ice as a result (Blais, 2006).
Industrial activity in the north has also been set back as a result of the lack of ice roads. During this past winter, De Beers diamond mines in the Northwest Territories only received 600 of 2200 expected truckloads. This has delayed construction of the mines. And in Nunavut, the Jericho Diamond Mine only received about sixty percent of its anticipated deliveries this past winter due to thin ice. Operations were curtailed because the company had to conserve on fuel (Younglai, 2006).
Analysts have quantified the reduced ice road season. Prior to 1996, the ice road season in much of Canada’s North averaged about seventy-five days. Since 1996, the ice road season has averaged forty-seven days (Weber, 2005). At Norman Wells, NWT an ice road crosses the Mackenzie River. Currently, the ice crossing is open 178 days per year. In the coming decades, scientists predict that this ice crossing will only be safely operable for between 138 to 148 days per year (Lonergan et al., 1993). These shorter ice road seasons mean that shippers need to be able to make all their required deliveries within reduced time periods. Figure 2 shows the length of the ice road season in Manitoba, east of Lake Winnipeg. Three recent winters in that region have suffered from unusually brief ice road seasons.
It is imperative that governments and corporations take action in response to Northern Canada’s ice road crisis. Responses fall into two categories: redesigning ice roads to extend their period of operability or coming up with alternatives to ice roads. In terms of redesigning ice roads, options include alternate routing and the use of permanent bridges (Abdul-Hay et al., 2003). Alternatives to ice roads include permanent all-season roads and barge transportation (CCIAP, 2004).
Changing the routing of some ice roads may extend their operating seasons. Relocating roads to higher ground and along routes with fewer stream crossings may allow them to be open for longer periods of time in the winter. Also, by making the entrances to ice roads at more northerly locations, the ice road season may be extended by up to seventeen days (Abdul-Hay et al., 2003).
Where ice roads must cross streams, permanent bridges can be built. Permanent stream crossings allow ice roads to continue to be used in conditions when only the ice bridges would be inoperable (CCIAP, 2004). In Manitoba this is already happening to some extent, with the use of what are called Meccano Bridges. Meccano Bridges are pre-fabricated, one-size-fits-all bridges built in Winnipeg and transported to northern ice roads. They span twelve metres and cost $30,000 each to make and set up. This is remarkably inexpensive, considering that they last about twenty years (Prentice et al., 2003).
The problem with redesigning ice roads is that the fundamental issue remains unaddressed. While perhaps mitigating the problem, ice roads will still be vulnerable to increasingly warm weather resulting in shrinking time windows of operability. Given this shortcoming, I now explore some alternatives to ice roads.
One ice road alternative is to build more all-season roads in Northern Canada. There have been plans to build more permanent roads in the North for some time (Abdul-Hay et al., 2003). One plan calls for a road linking several communities along the West Coast of Hudson Bay. However, very few of these plans have ever been carried out (Dick & Gallagher, 2005). Because of melting permafrost and exposure to successive periods of freezing and thawing, permanent roads in the North are unstable and expensive to maintain. Figure 3 shows the cracking that results from alternate freezing and thawing of paved roads. In fact, it is estimated that maintenance of permanent roads cost five to ten times more than maintenance of ice roads (Abdul-Hay et al., 2004).
Another alternative to ice road transportation in the North is barge transportation. In fact, in the Mackenzie Valley, most goods are already shipped by barge. This is because barge is the cheapest method of goods shipment in the region. And, global warming may allow the barge-shipping season to be extended, perhaps offsetting the reduced ice road season (Lonergan et al., 1993). Of course, this solution is geographically limited to locations accessible by water. Much of the North is landlocked and even areas that do have access to waterbodies, port infrastructure is lacking and would be expensive to develop (CCIAP, 2004).
Airships: A Promising Solution
The most promising and exciting alternative to ice road transportation is transportation by airships. In 2002, then Federal Transport Minister, David Collenette attended a conference in Winnipeg that explored the idea of using helium-filled balloons or airships to transport goods to Canada’s North (Perreaux, 2002). Some claim that airships could become a common sight in northern skies within five years (Younger-Lewis, 2005). If that were to occur, airships would not be new to Canada’s Arctic. As far back as the 1890s hydrogen balloons were used for exploration of the Arctic. After the Hindenburg disaster in 1936, in which a hydrogen balloon exploded, the technology lost its appeal. However, the new breed of balloons would be much safer, as non-explosive helium gas would be used (Dick & Gallagher, 2005).
Modern airship designs can be spherical or cigar-shaped (Dick & Gallagher 2005: 4). Figure 4 shows a typical spherical airship. There are advantages and disadvantages with each type of design. Traditional cigar-shaped airships are more aerodynamic and experience less wind drag. Spherical air ships have the advantage of being more maneuverable. Spherical airships also can reach altitudes of 6.3 kilometres—four times higher than cigar-shaped airships. Spherical airships also have the ability to take off and land on water (Prentice et al., 2003). Computer-controlled engines are used to “adjust propeller vectors to maintain altitude with upward thrust, to allow hovering, and to angle the craft into the wind” (Dick & Gallagher, 2005). Cargo loads vary with different designs, but some may be able to carry up to 500 tonnes of cargo (Younger-Lewis, 2005). Some airship designs do not even require pilots, as they can be operated from the ground, by remote control (Prentice et al., 2003).
There are several advantages associated with airships compared to ice roads and other alternatives. The main advantage is that airships would drastically increase accessibility to and within the North. The airships would not require landing strips or any other infrastructure other than the vehicles themselves. They could go virtually anywhere in the North, at any time. And, at speeds of 95 kilometres per hour, they would be very fast (Prentice et al., 2003).
Another advantage is that airships would be cheaper than everything else (Dick & Gallagher, 2005). Savings result from the fact that infrastructure other than the airships themselves is not needed (Prentice et al., 2003). There would also be storage savings. Currently, cargo must be stored until ice roads are ready or until rivers are unfrozen and safe for barge travel. With airships, these delays would be eliminated, resulting in savings for shipping companies and their clients. It is estimated that shipping costs on a 150-tonne capacity airship would likely be about forty percent less than truck shipping (Dick & Gallagher, 2005).
Proponents argue that an additional advantage of airships is that they may be operated all year-round. Currently, there is only a brief window when northern communities can be accessed by ice road or barge. If these communities could receive deliveries twice per week year round, northern residents would experience significant improvements in terms of quality of life (Prentice et al., 2003).
Airships, as a cheap and reliable means of transportation, would act as a catalyst for economic development in the North. The lack of transportation infrastructure in Canada’s North has constrained economic development (Younger-Lewis, 2005). Petroleum regions in the Northwest Territories have significantly fewer oil wells compared to similar areas in Alberta because of the lack of access for personnel and supplies. Petroleum, gold, and diamond mining activities would all benefit from airship transportation (Prentice et al., 2003; Younger-Lewis, 2005).
It has even been claimed that airships may be the key to protecting Canada’s sovereignty in the Arctic. One airship enthusiast envisions massive Canadian-flag bearing airships flying in Canada’s northern skies. Such ships would be strong symbols of Canada’s status as a northern country (Dick & Gallagher, 2005).
Airship transportation would cause few environmental impacts. They essentially would leave no footprint on the land. While, they would require the use of diesel fuel for their engines, the amount required would be small. Also, engineers are working on creating airships equipped with solar panels to minimize their use of diesel fuel (Prentice et al., 2003).
There would be some problems with airships. One problem is that they cannot be operated in winds greater than fifty-five kilometres per hour. This means that, contrary to claims of some proponents, there would be some days when weather conditions would prevent airship transportation. However, analysis of weather patterns in the Northern Canada indicates that for most areas, weather conditions would disrupt airship transport less than ten percent of the days in a year (Dick & Gallagher, 2005). This would still be much better than the brief period when ice roads can be operated.
Another problem is that the technology still needs to be refined. Despite advances, the vehicles are still difficult to steer (Van Praet 2003). Also, in order for airships to be used year-round in the Arctic, navigation systems would need to be developed allowing travel during the dark winter months (Dick & Gallagher, 2005)
Northern Canada’s vast network of ice roads is highly vulnerable to global warming. Already, northern communities are beginning to experience shorter ice road seasons as a result of warm winter weather. This has resulted in shortages of food, fuel, and other supplies and has hampered resource extraction. There are several possible responses to this transportation challenge, including changes to the routing of ice roads, permanent bridges along ice road routes, constructing all-season roads, increased reliance on barge transportation, and the development of airship transportation. Every option has certain challenges associated with it, but airship technology is the most promising. Airship transportation would be cheaper than the alternatives and would greatly increase accessibility throughout the North.
This essay was written in 2006 for a third-year course on the Geography of Transportation.
Abdul-Hay, Karime, Bobbi Harrison, Shelley Turriff, Connie van Rosmalen. 2003, March. Transportation and Climate Change in Manitoba—Proceedings. University of Manitoba Transportation Institute. Prepared for Manitoba Transportation and Government Services. Retrieved online April 1, 2006. http://www.parc.ca/pdf/conference_proceedings/mar12_03_transport_proceedings.pdf
Adam, Kenneth M. 1978. “Building and Operating Winter Roads in Canada and Alaska.” Environmental Studies, No. 4. Department of Indian and Northern Affairs, Ottawa.
Augusta Chronicle. 2004, April 29. Retrieved April 6, 2006. http://chronicle.augusta.com/images/headlines/062904/27135_512.jpg
Blais, Casey. 2006, January 27. “Residents Taking Risk By Driving on Thin Ice.” Leader Post. Regina. P. A6. Retrieved April 1, 2006 from the ProQuest Database.
Blanchard, Bill. 2002. Photograph accessed from Reddit on April 24, 2021 from: https://www.reddit.com/r/WeirdWings/comments/ephhhy/the_spas70_a_spherical_airship_with_an_internal/
CBC News. 2002, January 15. “Warm Winter Tough for Northern Manitoba First Nations.” Retrieved online April 1, 2006. http://www.cbc.ca/story/canada/national/2002/01/14/manitoba_roads020114.html
CCIAP (Climate Change Impacts and Adaptation Program). 2004. “Impacts on Transportation Infrastructure.” Government of Canada. Retrieved online April 1, 2006. http://adaptation.nrcan.gc.ca/perspective/transport-03_e.asp
Dick, Terry A. and Colin Gallagher. 2005. “A Case for Airships in the Canadian Arctic.” Meridian. Canadian Polar Commission. Retrieved online April 1, 2006. http://www.polarcom.gc.ca/english/pdf/meri_05_fall_en.pdf
Grant, Robert S. 1986. “Ice Roads.” Canadian Geographic, 105(6): 56-63.
Hassol, Susan Joy. 2004. Arctic Climate Impact Assessment. Cambridge, UK: Cambridge University Press. Retrieved online April 1, 2006. http://www.acia.uaf.edu
Lonergan, Steve, Richard DiFrancesco, and Ming-Ko Woo. 1993. “Climate Change and Transportation in Northern Canada: An Integrated Impact Assessment.” Climatic Change, 24: 331-351.
Perreaux, Les. 2002, October 24. “Businessmen Float Idea of Arctic Airships: ‘Surreal,’ says Collenette.” National Post. P. A11.
Prentice, Dr. Barry E., Jill Winograd, Al Phillips, Bobbi Harrison (Eds.). 2003, October 21-23. Moving Beyond the Roads: Airships to the Arctic Symposium II. Winnipeg, MB. Retrieved online April 1, 2006. http://www.hacinc.us/A2A2_proceedings.pdf
Sutherland, Anne. 2006, March 14. “It’s Official: Warmest Winter Ever.” National Post. P. A7.
Van Praet. 2003, October 22. “New Breed of Blimp Set to Sail the Skies.” The Gazette. P. B2. Retrieved April 4, 2006 from the ProQuest Database.
Weatherstone, William. N.D. Canada’s Winter Ice Roads. Retrieved April 3, 2006. http://www.thedieselgypsy.com/Ice%20Roads-3B-Denison.htm
Weber, Bob. 2005, June 3. “Warmer N.W.T. Destroying Roads, Airstrips.” Calgary Herald. P. A16.
Younger-Lewis, Greg. 2005, May 20. “Big Balloons Prescribed as Cheap Cure for what Ails Nunavut.” Nunatsiaq News. Retrieved online April 3, 2006. http://www.nunatsiaq.com/archives/50520/news/nunavut/50520_11.html
Younglai, Rachelle. 2006, March 23. Loss of Ice Road Will Hit Diamond Mines.” Toronto Star. P. D4. | <urn:uuid:7fb0565b-7d6c-41b7-af32-f35d0e4fe69b> | CC-MAIN-2022-33 | https://tommythomson.ca/2021/04/24/from-ice-roads-to-airships-the-future-of-goods-transport-in-northern-canada/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571719.48/warc/CC-MAIN-20220812140019-20220812170019-00295.warc.gz | en | 0.937952 | 4,413 | 3.46875 | 3 |
Introduction Since world war I, public expressions of racism have been limited to far –right political parties such as the national front in the 1970s, whilst most mainstream politicians have publicity condemned all racism remains wide spread, and many politicians and public figures have been accused of excusing or pandering to racist attitudes in the media, particularly with regard to imagination.
There have been growing concerns in recent years about institutional racism in public and private bodies, and the tacit support this gives to crimes resulting from racism in the public sector to the police force, and requires public authorities The race relations act 1965 outlawed public discrimination, and established the race relations board. Further acts in 1968 and 1976 outlawed discrimination in employment, housing and social services, and replaced the race relation board with commission of racial equality.
The human rights act 1998 made organization in Britain, including public authorities, subject to the European convention on human rights. The race relation act 2000 extends existing legislation promote equality. Although various anti-discrimination legislation do exist, according to some source most of the employers in UK remain institutionally racist including public bodies such as police and legal professions. Its nearly impossible for persons subject to such institutional racism to seek legal redress, as in UK funding is not available at employment tribunal.
The situation of implementation of human rights law in similar. The terrorism acts, which came into law in 2000 and 2006, have caused a mark increase in racial profiling and have also been to been basis to justify existent trends in discrimination against person of Muslim origin by the British police. Public sectors employer in UK are some what less likely to discriminate on the grounds of race, as they are required by law to promote equality and promote equality and make efforts to reduce racial and other discrimination.
The private sector, however are subjects to little or no functional anti-discrimination regulation and short paid litigation, no remedies are available for members of ethnic minorities. UK employers can also effectively alleviate themselves from any liability for the employers racial screening and discriminatory policies to third party recruitment companies. Edward Ricardo Braithwaite’s autobiographical novel To Sir With love which is based on his experience as a black teacher in a tough east end secondary modern school, offers a remarkable insight into the politics of class and race in postwar London.
Sidney Poitier came to London to star in the film version of the novel in 1967, and later appeared in a sequel, based in Chicago, which was made for television in 1996. Yet surprisingly, the novel itself has been largely overlooked. When the narrator of To Sir With love arrives in London in 1948 he is struck by the disparity between his expectation and the reality I had references to it both classical and contemporary writings and was eager to know the London Chaucer and Erasmus and the sorores minories. I had dreamt of walking along the cobbled street of cable makers to the echoes of chancellor and the brothers of Willoughby.
I wanted to look on the reach of Thames at Black wall from which captain john smith had sailed abroad the good ship Susan Lawrence to found an English colony in Virgina ( 9) The corner stone of any significant cultural change must surely come from education. If one thing could characterize the changing nature of education during this period, it would be the shift toward a more egalitarian system. In 1944 the butler act had setup trippartie system as revolutionary measure, as it promised an education tailored young people of all abilities and back grounds.
The principle was that with each person was taking standardized tests at the age of eleven, the education system would progress towards the state of equality. However, over the twenty one years that followed its instatement, it became clear that the system was based not only on raw intellectual ability, but the out come of the system also reflected the class system it was supposed to disintegrate. Additionally, the so called parity of esteem that was alleged to exist between grammar, secondary modern and technical school was widely regarded by employers and the general public as fallacy.
Not only were secondary moderns under funded in comparison to grammar school, but from them with qualifications, and for many professions and universities, a grrammar school privilege but limited ability , would be similarly treat to their working class counter parts. It could not be simple. Despite having risked his life for the ideal British way of life he seen as alien. After his rejection he steps out of the grand imposing buildings in Mayfair disappointment and resentment were a solid bitter rising lump inside me I hurried into the nearest public lavatory and was violently sick. (pg) remembering the joyous celebrations on each royal visit to British Guiana he concludes yes it is wonderful to be British until one comes to Britain. And so without any sense of vacation as he candidly admits he becomes a teacher in an east end school that best job he can get. It’s a dark and gloomy building located in rubbish strewn bomb- wrecked area which he compares unfavorably with his light and cool school house in George town .
Life around cable streets turn out to e hard and not just for narrators at first he is rather snobbishly shocked by working class east enders whom he sees as peasants a term that Albert Angelo also uses about his east end of pupils in BS Johnson’s eponymous novel 964 Braithwaite resist seeing the children as victims despite their damp, impoverished and over crowded conditions at home hungry are filled naked or clothed they were whit and as far as I concerned that fact The narrator is bitterly disappointed in is kids and thinks that he has been wasting his time but he overjoyed to discover that his tolerance and patients good will paid off his pupils looking washed and smart attended the funeral proof of the efficiency of his pedagogy proclaiming his abilities, attractiveness, intelligence, judgment and unassertiveness.
But given the pervasive prejudice he encounters, it is hardly surprising that he should cast him self as the hero of his own story since, unlike the boys in selvon’s the lonely Londoners, Braithwaite has whom he calls mom and dad beyond that has no community as CarlyPhilip says in his introduction to vintage novel we do feel sympathy for this isolated patrician man who attempted now to make a community out of the pupils in his charge and his fellow Relationship between Teachers and students as portrayed by Braithwaite Braithwaite is a black man, who is a teacher at greens lade school who has recently joined the school because of demoralization from royal force.
He also holds a degree in engineering but was not able to find a suitable job because of his black colour as no white wants to work with a black or take order from blacks. Braithwaite has some insecurities when he starts teaching but he grows confident in his teaching abilities. He genuinely cares about the students and earns their respect by the end of the school year, Braithwaite is a beloved, warmly accepted teacher who is well known in community. Braithwaite is an intelligent sensitive man who is able to motivate his students. To Sir With Love deals with how the teacher pupil relationship is used to explore key themes in the novel. To sir with love is written by ER.
Braithwaite and includes some key themes throughout the novel. Racism, values, and relationships are some themes that are explored with use of Braithwaite ‘s relationship with his class members but with the class as whole. Braithwaite relationship with his class goes through three stages in the novel; silent treatment, noisy treatment and open protest. It’s only after exercising these stages that Braithwaite is finally accepted by his class and given respect. These stages explore maturity of class and how the values change through out the course of year. When Braithwaite first begins teaching he is faced with racist comments and new uncooperative rudeness of his students.
The teenagers refer to him as “new blackie teacher”. This illustrates the racial prejudice which existed in east end of London at that time. The students are obnoxious to the person he is, all they seein the colour of his skin. Braithwaite experience a cold attitude of his class, “I begun to feel a bit uneasy under their silent concentrated apraisl”. They do not offer to participate or raise the hand and are ignorant to their education. This reflects on how children often left school at young age and education was ot vital as it is now. Another theme touched on racism. Braithwaite endures prejudice from children although it is usually quite sly or hidden from outside point of view.
He will never be called as blackie or darky to his face but behind his back the children will happily speak about him. He has not really changed, maybe he does not get angry anymore at people who discriminate him. Gillian Blanchard is a great emotional and moral support to Braithwaite. She stands up for him even to her parents showing off her love for him. As Braithwaite walks through the hallways, he is nearly shocked by several students running out of classroom. He knocks and enters to see what is happening, only forty students unattended. By their dress and demeanor, they seem to be well aware of their maturing bodies. Everything is bit soil and untidy, as if too little attention were paid to washing themselves or their flashy finery.
Another male teacher, Weston comments later that they need is bloody hiding by the contrast how teacher Mrs. Evans near perfection without recourse of beating in her classes as well as immediate hush in assembly listened to select music played for them. This seems to attest quiet a respectful and orderly atmosphere in school on the whole. Braithwaite, as a well educated middle -class Black man who not only has university education but has been officers in the RAF, has to come with terms of failure of meritocracy in his life Braithwaite encourages their self -esteem by narrating his life to engage students interest and open possibilities of thought for them.
These learning experience connect to the world beyond school is the significant of the important way, cross -curricular and carefully integrated approach during half- yearly students council report saying that their lessons had particular bias towards brother hood of man kind, and they have difference in colour , race and creeds. Braithwaite was thus able to integrated ideas and concepts of curriculum areas Rather, using than to do away with books and narrowing down the curriculum he extends it using skeletons receives as an opportunity to introduce physiology Perhaps the most striking aspect of the novel is not the narrator’s occasional self -congratulation but his quietism.
When one of the boys attacks the bullying sport teacher for his sadistic treatment of a fellow pupil Braithwaite insist that the boy must apologize to the ‘master’. The class is shocked by what they consider to be just double injustice but the narrator counsels against rebellion. “I’ve been pushed around , Seales” I said quietly “in a way I cannot explain to you. I’ve been pushed around until I began to hate people so much that I wanted to hurt them. ,really hurt them. I know how it feels, believe me, and one thing I’ve learned, Seales, is to try always to be a bit bigger than the people who hurt me”. (Pg 162) Although the speech is given in front of the whole class it is directed particularly at Seales, the mixed race boy, even though he is not the culprit.
It is as if Braithwaite fears that Seales above all, is the one who need to learn the lesson of self-discipline, or risk being provoked into reaching for a knife or a gun and finding himself in deep trouble. In another scene the mother of one girl in the class comes to complain about her daughters behavior. The girl, Pamela, confides to her teacher : she is upset about the man who calls on her widowed mother and in particular about something that happened that she cannot mention. Again the narrator warns against rebellion insisting that Pamela should be obedient and courteous daughter. His message to the children seems to be world will do its dirty job; there’s no use kicking against the pricks; try to maintain his dignity. At several points in the novel Braithwaite is publicly humiliated.
On the bus an Englishwoman refuses to sit next to him. He guesses that she secretly enjoys herself: “what a smooth, elegant, superior bitch! ” he thinks to himself but he says nothing. On the tube taking his pupils to Victoria Albert museum two elderly well- dressed women start muttering darkly about shameless young girl and these black men until one of his pupils, Pamela shouted at them” He is our teacher. Do you mind? ” And again Rick is silent and so maintains dignity. The stoicism infuriates his white English girlfriend. When they go to an expensive restaurant tin Chelsea, the waiter keeps them waiting for a very long time and deliberately spills over the soup.
Gillian insist storming out but Ricky, we assume, would have remained at the table in a dignified way or would have sucked it up. How the reader sees his stance probably depends on whether one thinks that black people’s long walk to freedom is best pursued by the following Dr Martin Luther king’s path way of non – violent action or way of Malcolm. “Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that. ”-Martin Luther King In the beginning, he describes that he wants a job, not a labor of love. Then, after spending time with students,. Braithwaite began learning from them as well as teaching them. The students did not respect adult authority.
He realizes that they may have problems in their personal lives, but when they entered the classroom, Braithwaite joined them on journey of adulthood. The students asked many questions touched upon the people of different races, Braithwaite gave mature answer, and spoke to them as if they would behave more responsibly. When as a teacher he expected them, the students in return, accepted him and honored him with the title ‘sir’. “Myself you will address as ‘Mr. Braithwaite’ or ‘Sir’-the choice is yours; …. ” (73) The issue of racism does not disappear, but it never dominates the book. Race plays a significant role in Braithwaite’s relationship with other adults.
He also has to deal with inappropriate comments from staff. “I say whatever’s going on in that classroom of yours, old man? I mean this suburban formality and all. Bit foreign in this neck of the woods, don’t you think? ”- Mr. Weston. The adult clashes are ultimately concluded by him to be no importance as long as the repercussions do not enter the classroom it was who mattered. In this novel, various human characterized are portrayed. Through the story, the ideas that humans are able to adapt and their way of thinking seems to be demonstrated. Both the teacher, Braithwaite, as well as his students, go through many changes. These changes lead to a change in their way of thinking about each other.
The novel portrays the ability to adapt to the world around one’s self as very important trait. It’s a beautiful books about human nature and the behavior of the teenagers who are treading in the path of being extreme rebels. Then it tells the story of a person who first becomes a teacher just to sustain himself but realizes that these teenagers, whom he is in charge of, need help. So he selflessly forgets everything and help them to become adults. He tolerates all the hardship that goes through the process. The book is about students- teachers’ relationship. “The mediocre teacher tells. The good teacher explains. The superior teacher demonstrates. The great teacher inspires.
Ward At the end of the novel Braithwaite sells out his philosophy made it clear that coloured people in England were gradually working for their own salvation, realizing that it was not enough for them to complain about injustice done to them, or rely on interested parties to agitated on their behalf. They were working to show their worth, integrity and dignity in spite of the force opposing them. We all want to feel cared for and valued by significant people in our world. This knowledge is a powerful tool in the arsenal available to teachers as a classroom discipline, plans as a classroom teacher wield a great deal of power over the students simply due to the fact that teachers control their destiny for six and half hours each day, five days a week.
When students feel that their teacher value and care for apt they go out their teachers way to please the teachers, and the teachers should treat their students with dignity they should be impartial and encouraging. So it makes positive discipline climate in the classroom. It’s critical to remember that when it comes to students behavior it depends on the relationship which a students have their teacher than it rules themselves that encourages the students. “The best teachers teach from the heart, not from the book. ”-Anonymous There are several techniques that can be used to achieve the goal, teachers should monitor the way they call their students all students should be given a chance to participate in class. They can ask the students for hints in class.
They should tell the students directly that they have ability to do well their belief on students will give their students a great success. “The beginning is the most important part of the work. ”-Plato When a teacher calls on a student they should monitor themselves on the response opportunities. Often teachers, who keep track, discover that they have low expectation to answer. When they fail to recognize particular students, they might communicate in a low level confidence in their abilities individual students compound when they see other student are been called regularly. “Knowledge comes, but wisdom lingers. ”-Tennyson It’s important for a teacher to recognize that they are providing to all the students with response opportunities.
Putting a check the teachers call on during class hours, which will help the discussion moving they should make sure that they call on high achieving students and also the students who have pattern of not performing well. Keeping a simple check list on a clipboard during classroom discussion and moving other students to listen to the correct answers. “The true teacher defends his pupils against his own personal influences. ”-Amos Alcott However, it also could lead other students to think that the teacher doesn’t have confidence in them and expect them to participate, and it increases likelihood that they will get out of task a good teacher would allow their students to participate equally in class to make their class more effective.
The teacher student relationship is very important for children. A good teacher can inspire hope, ignite the imagination, and instill a love of learning. ”- Brad Henry. Children spend approximately 5 to 7 hours a day with a teacher for almost 10 months. Teachers should ask themselves what is considered a good teacher. All of us have gone through schooling, and if fortunate had a favorite teacher. A positive relationship between the student and the teacher is difficult to establish, but can be found for both individuals at either end. The qualities for a positive relationship can vary to set a learning experience approachable and inviting the students to learn.
A teacher and student who have the qualities of good communications, respect in a classroom, and show interest in teaching from the point of view of the teacher and learning from a student will establish a positive relationship Children have different strategies for learning and achieving their goals. A few students in a classroom will grasp and learn quickly, but at the same time there will be those who have to be repeatedly taught using different techniques for the student to be able to understand the lesson. On the other hand, there are those students who fool around and use school as entertainment. “A teacher affects eternity; he can never tell where his influence stops. ”- Henry Adams. Teaching then becomes difficult, especially if there is no proper communication. Yet, teachers, creating a positive relationship with their students, will not necessarily control of all the disruptive students.
The book, Responsible Classroom Discipline written by Vernon F. Jones and Louise Jones discuss how to create a learning environment approachable for children in the elementary schools. According to the Jones, “ Student disruptions will occur frequently in classes that are poorly organized and managed where students are not provided with appropriate and interesting instructional tasks” (101). The key is, teachers need to continuously monitor the student in order for him or her to be aware of any difficulties the student is having. Understanding the child’s problem, fear, or confusion will give the teacher a better understanding the child’s learning difficulties.
Once the teacher becomes aware of the problems, he or she will have more patience with the student, thus making the child feel secure or less confused when learning is taking place in the classroom. “One good teacher in a lifetime may sometimes change a delinquent into a solid citizen”- Philip Wylie. The communication between the student and the teacher serves as a connection between the two, which provides a better atmosphere for a classroom environment. Of course a teacher is not going to understand every problem for every child in his or her classroom, but will acquire enough information for those students who are struggling with specific tasks.
A significant body of research indicates that “academic achievement and student behavior are influenced by the quality of the teacher and student relationship” Jones 95. The more the teacher connects or communicates with his or her students, the more likely they will be able to help students learn at a high level and accomplish quickly. The teacher needs to understand that in many schools, especially in big cities like Los Angeles, children come from different cultures and backgrounds. A teacher then needs to understand the value of the students’ senses of belonging, which can be of greater value and build self worth for minority students. If the teacher demonstrates an understanding of the student’s culture, it will provide a better understanding between the teacher and the student.
Though there are students who have a difficult time in school and according to David Thomas essay, “The Mind of Man” states, “children who are yelled at feel rejected and frightened because a teacher shouts at them” (Thomas 122). The example above demonstrates the feelings the child has towards the teacher leading to inhibiting the child from learning. The reasons for children to be yelled at vary from teacher to teacher, but shouting should not be the solution for children who find education a difficult process or simply lack of learning experiences, but sometimes teachers find yelling at the child as the only quick solution. Therefore, those teachers who demonstrate respect towards their students, automatically win favor by having active learners in their classroom.
The arrogant or offensive teacher will lack these positive qualities due to his or her lack of control over the children. Teachers should assert that they should also be treated with respect and their responsibilities to ensure that students treat each other with kindness. According to Jones, “teachers are encouraged to blend their warmth and firmness towards the students in their classroom, but with realistic limits. ” (111). Conclusion Braithwaite is a novelist, writer, teacher, and diplomat, best known for his stories of social conditions and racial discrimination against black people, Braithwaite is perhaps is best known as an author for his autobiographical novel.
To Sir With Love is mainly remembered to day because of the 1967 flim version starring Sidney Poiter which updated Braithwaite’s particular and surprising postwar swinging sixties black board jungle movie with wailing theme tune sing by lulu. In an interview with Burt conductected for radio 4’s to sir, with love revisited , Braithwaite admitted to ambivalent feelings about the flim, although its success guaranteed that novel would never sink into oblivion. It provided him with some measure of financial security but he still loathed it from the soles of his feet, particularly because of betrayal of novel’s interracial romance/which he felt was essential to the protagonist ‘s escape from his isolation
In the novel Ricky and Gillian strike up a friendship in the staffroom which gradually develops into romance the main obstacle seems to be his worry about the effect of racists society in her how long would their happy association would survive the the malignity of stares which were deliberately indeed to make the woman feel unclean, as if she had abjectly degraded not merely herself but all women hood? Meanwhile she wants him to stand up to the racists whether on the tube or restaurant once they decide to marry they have to over come her father to grant his consent he objects “you might have children ; what happens to them they will belong nowhere, and no body will want them” when racists where not complaining that black men are talking to women they pretended to be concerns for the mixed –race children who, they argued, would not know they were. Braithwaite assures Gillian’s “will belong to them”.
The novel ends with Braithwaite being given a leaving present and card addressed “to sir with love” As a student I wanted to understand the work pressure of teachers and to show teachers understanding their students will help the students to bring in a great change in their attitude and they would be a positive human being The answer becomes clear when teachers interact with, and learn more about their students. Our first educational experience, which takes place in the primary years of our life, sets the principles for our future education. Every school year an elementary teacher deals with new faces and new attitudes. Some children find themselves lacking an interest in learning and others feel playing and fooling around at school with friends is the happiest moment of their life.
The solution to inappropriate behavior will not automatically get rid of the poor attitude of these children, but is to establish a positive relationship. Teachers can establish a positive relationship with their students by communicating with them and properly providing feedback to them. Respect between teacher and student with both feeling enthusiastic when learning and teaching. Having established a positive relationship with students will encourage students to seek education and be enthusiastic and to be in school. Remembering our favorite teacher will be recognized because they had at least in one way or another the qualities I discussed in this essay, although we are not aware of it during the time we are in school, but teachers are well recognized at a later time of our lives.
Insanity and Competency
Scenario: You are an intern assigned to a special agent for your state’s investigative bureau with a specialty in criminal intelligence. A representative of the Governor’s Office would like you to write a paper explaining how someone who is not a forensic psychologist is able to fill the position of criminal profiler in the State Investigation Bureau (SIB). To meet this request, you will prepare a 3–5-page white paper (in APA format) for the Governor and her staff so that they understand your training and why law enforcement personnel, rather than forensic psychologists, are used for this position. Start by explaining what a forensic criminal profiler does and how investigators may be best suited for the role. Relate how the criminal profiler targets serial crimes involving murder, sexual assault, and rare arson types, and explain the appropriate background for a criminal profiler. The 3–5-page paper should address issues such as the following:
- The way the crimes were committed
- Where the crimes were committed
- How the victims were chosen
- The crime type
- The times the crimes were committed
- Whether the offender was or is communicating with the police or other individuals (the press, the victim’s family, and so on)
- The circumstances and condition of the actual crime scene(s)
The paper must follow APA format and the professor’s template exactly; she is very particular about formatting.
FWO Fellow at Ghent University, Ghent, Belgium
I start this chapter by identifying five separate waves of historical expert testimony. Of these five, I discuss two categories of predominantly European trials in which historians have been active as expert judicial witnesses: on the one hand, the post-war prosecutions comprising the Eichmann trial, the German Frankfurt-Auschwitz trials and the Ludwigsburg paradigm, and the French Vichy trials; on the other, the Holocaust denial trials such as Irving v. Lipstadt.
In my soul and conscience, I believe that an historian cannot serve as a “witness,” and that his expertise is poorly suited to the rules and objectives of a judicial proceeding.
Since World War II, historians have served as experts in a multitude of legal cases in different countries. This globalization of the historian as an expert witness is traditionally divided into five categories. I have already referred to the first category of transnational justice. The second category of expert witnessing is the collection of all the post-Holocaust and post-World War II litigation in Israel, Germany, and France. For these three countries, these trials were as much about individuals as they were about national history and the role their country or people had played during the war. This category has had a major impact on the perception and reception of the forensic form of history by the legal as well as the historical profession. A third category is that of Holocaust denial litigation. The most famous examples are the British case Irving v. Lipstadt and the lesser-known Zündel trials in Canada. In many European and Commonwealth countries the denial of the Holocaust is penalized. This category, too, comprises litigation that has had an important influence on the concept of the judicial use of expert historical testimony. The litigation presently going on in former Commonwealth countries like Australia, Canada, and New Zealand defines a fourth category. Historians serve as expert witnesses in post-colonial trials concerning the rights of native peoples. These cases are litigated over land rights, water rights, rights to raw materials, and the reparation of historical wrongs. Historians have testified on native peoples’ history in specially established tribunals and in ordinary civil litigation. Most famous are the Waitangi Tribunal in New Zealand, the Delgamuukw v. British Columbia case in Canada, and Mabo v. Queensland in Australia.1 A fifth and final category is a broad one that comprises all American civil litigation in which historians are involved, ranging from land and water rights cases, indigenous peoples litigation, tobacco litigation, racial discrimination cases, gender-related litigation, and environmental cases to toxic tort litigation, among others. Expert witnessing by historians is an established, institutionalized, and growing practice in America. The litigation-driven history in the US has an influence on the American historical profession as a whole due to the number of historians involved. This is the category that has the most bearing on and relevance to historical practice today.
In this book, I examine the second, third, and fifth waves of expert witnessing in the twentieth century. The second and third waves have had a distinct impact on the general discourse on the interaction between history and law in Europe and the United States. Several cases were highly political and had polemical aftermaths. This controversial nature is very apparent in Wijffels’ selection of cases. He discussed the German Frankfurt-Auschwitz trials, the French Vichy trials, and the infamous Irving trial. The Ludwigsburg trials are not examined in Wijffels’ work, which leaves an important part of the German post-World War II litigation out of the picture. I have added these trials to my own limited overview to ensure a more balanced review of this second wave. I use the fifth wave of expert witnessing to make a comparison with the examples from the second and third waves. By comparing the European and American categories of expert witnessing, I can reassess Wijffels’ concept of the forensic context of history.
6.1 The Second Wave: Post-Holocaust and Post-World War II Litigation
These trials are criminal cases against individuals who were accused of committing crimes against humanity during World War II or during the Shoah. In Germany, France, and Israel these trials were more than a judgment on the crimes of an individual. They were historical lessons of a psychological and a pedagogical nature. For France and Germany the cases aspired to come to terms with a difficult and shameful period in the nation’s history. I divide this category further into three topics. I begin with an analysis of the Israeli example from 1961: the trial of SS officer Adolf Eichmann. Second, the German Frankfurt-Auschwitz trials and the Ludwigsburg paradigm are discussed. Subsequently, I go into the French trio of Vichy trials against Papon, Barbie, and Touvier. All three categories have had a significant impact on the way European and American historians and lawyers perceive the interaction of their fields.
6.1.1 Eichmann in Jerusalem
On the 21st of May 1960, a plane carrying Adolf Eichmann landed in Israel. He had been captured by Mossad operatives in Argentina a couple of days earlier.2 Eichmann had been a high-ranking SS officer during World War II. He had been responsible for the administration of the transportation of the Jews to the extermination camps. For his role in the Shoah, the Israelis had brought Eichmann to Israel to be tried in an Israeli court. Ben-Gurion, the Israeli prime minister at the time, had clear intentions for the trial. The Eichmann trial needed to become the living recreation of a national and human disaster. Ben-Gurion wanted to retell and rewrite the legacy of Nuremberg in the Eichmann trial.3 While the former had been about an unjust war and about crimes against humanity–not only those committed against the Jewish people–, the latter would concentrate solely on crimes committed against the Jewish people.4
Historians seem to agree that the trial played out as Ben-Gurion had intended. Serbian historian Vladimir Petrovic concluded that the Eichmann trial had revised history and had indeed stressed the uniqueness of the Holocaust.5 Both Georgi Verbeeck and the Canadian historian Michael Marrus acknowledged the didactic role the Eichmann process had played in shaping Jewish public memory of the war and the Shoah.6 Marrus went even further when he suggested that the process was more dramatic than it was judicial.7 Hannah Arendt, the famous political theorist of Jewish descent who had herself fled Europe for the United States, reported on the Eichmann trial for The New Yorker.8 On these articles and her experience of the trial, Arendt based her famous book Eichmann in Jerusalem: A Report on the Banality of Evil. In the book Arendt argued that the Eichmann trial had become a show trial in which Eichmann’s physical presence was the only thing that helped the audience remember that it was Eichmann who was on trial and not history. Eichmann had become peripheral in his own trial, according to Arendt.9 American legal scholar Mark Osiel writes that the Eichmann trial created a national saga and a national story which repaired the broken bond between history and the Jewish people.10 Not surprisingly, the history proposed by the Eichmann trial was a populist version of Jewish history, consisting of recurrent versions of the legend of David against Goliath, of brave resistance and barbaric repression respectively.11 Furthermore, the Eichmann trial created a new collective memory of the Holocaust for the newly established Jewish nation, but also at an international level.12 The Eichmann trial created much political sympathy for the Jewish people and their suffering during World War II.13
“I appear here as a witness, not an eyewitness or a jurist, but as a historian”, Salo Baron said when he took the stand to testify at the Eichmann trial.14 Baron was the only historian who appeared as an expert witness during the trial. Like Arendt, Baron had fled Europe, leaving his Polish hometown for America. There, he continued his academic career and institutionalized Jewish history as a field in American faculties of history. It was in his capacity as an expert on Jewish history that Baron was called upon to testify against Eichmann.15 Rather than giving a historical overview of Eichmann’s crimes, Baron’s testimony explained the general context of the Nazi genocide of the Jews.16 Baron was questioned by counsel for the defence, who tried to attack his testimony on an epistemological level. Eichmann’s German lawyer, Dr. Robert Servatius, attempted to discredit Baron’s testimony by directing attention to what Arendt called “the murky issues of philosophy of history.”17 Hannah Arendt condemned the mixture of the task of the historian and the judge, an issue that was also addressed during the proceedings in court.18 For Arendt it was wrong to have Baron testify on the Holocaust in general because it drew attention away from Eichmann’s person, who was actually on trial. Arendt argued that the story of the Eichmann trial should have been that a person like Eichmann, a bureaucrat without special pathological signs of being a psychotic murderer, had played such an important role in the organization and planning of the extermination of millions of people, or what Arendt called the banality of evil in the person of Adolf Eichmann.19 In the end, the court’s decision was not greatly influenced by Baron’s testimony.20 Hannah Arendt concluded that the Eichmann trial had failed to deliver a fair trial for Eichmann. “[t]he purpose of the trial is to render justice, and nothing else”, Arendt wrote, disillusioned.21 Adolf Eichmann was sentenced to death and hanged on the 31st of May 1962. The Eichmann trial became an example of how history and the testimony of a historian as an expert witness could be used for political purposes in the courtroom.
6.1.2 The Frankfurt-Auschwitz Trials and the Ludwigsburg Paradigm
Just as Ben-Gurion had felt that it was necessary to reinterpret the history of World War II and the Holocaust, West Germany started a series of trials, known as the Frankfurt-Auschwitz trials, to address anew Germany’s responsibility in the Shoah. Historians have played a considerable role in these legal cases.22 Expert historians were primarily recruited from The Institute for Contemporary History in Munich.23 They were there to sketch as clearly as possible a picture of the historical and political landscape in which each individual crime on trial had taken place. Historians were needed to provide a general background to enable the judges to consider the actions of the individual in their historical context.24
The first of many trials started in 1963, when 22 officers and guards who had worked at Auschwitz were put on trial.25 Like the Eichmann trial and the Nuremberg tribunal, the Frankfurt-Auschwitz trials had a pedagogic purpose.26 The testimonies of historians were used to provoke public debate.27 As much as the German trials were about changing and constructing public memory, they produced excellent historical research that is still part of the historiography on the Holocaust and World War II.28 The judges frequently cited and used the witness reports in their judgments.29 Again, attempts were made by, among others, legal scholar Ernst Forsthoff to undermine historians as expert witnesses by arguing that the historical profession contemporized its knowledge in court and was therefore irrelevant. “History was rewritten every few years by a new generation of historians; hence, it had no legal value that would allow it to be admitted to the courtroom”, Forsthoff argued. According to Forsthoff, history was not based on solid facts, as opposed to the legal facts that were treated in the courtroom.30 For Forsthoff, in the Frankfurt-Auschwitz trial “justice had become beholden to the expert historian and history turned into a forensic historicism.”31
In 1958, an institution was formed in the German town of Ludwigsburg to investigate the crimes of the German National Socialists through historical research and documentation. It was called the Zentrale Stelle der Landesjustizverwaltung zur Aufklärung Nationalsozialistischer Verbrechen, or the Central Agency of the State Judicial Administration for the Investigation of National Socialist Crimes.32 In the litigation that was initiated as a consequence of newly uncovered evidence, historians played an important role as expert witnesses in collaboration with jurists.33 Historian Erich Haberer called that cooperation and the results it produced “the paradigm of Ludwigsburg.”34 The Ludwigsburg paradigm was a “reciprocally beneficial relationship” between historians and lawyers.35 Haberer argued that the Ludwigsburg paradigm had succeeded in demystifying “the Nazi genocide”, because the historical testimonies, based on solid historical research, were able to change the defendants from monsters into ordinary men and women.36 This de-demonization was precisely what, according to Hannah Arendt, had been absent in the Eichmann trial.37 When in 1990 the 80-year-old Josef Schwammberger, who had been an SS officer active in Poland, was brought to trial, he was expected to be the last person to answer in court for his crimes during the Holocaust.38 However, the trials had not seen their last act, for in 2008, 90-year-old Josef Scheungraber was convicted of war crimes committed in Tuscany during World War II.39 In total, some 6,500 German criminal cases were tried concerning crimes committed during the Holocaust or World War II.40 The German post-Holocaust and post-World War II litigation shows that, although at first the Frankfurt-Auschwitz trials had tried to create a politically informed version of the German experience in World War II, interdisciplinary cooperation in the Ludwigsburg paradigm produced excellent historical research and more just judgments for individuals involved in mass atrocities.
6.1.3 Dealing with a Troublesome Past: Vichy in Court
France has known three landmark court cases concerning World War II. These three have many things in common, but the most striking common feature has to be the enormous political influence on the trials and their theatrical character. The Papon trial is the best known of the three, not least because the proceedings lasted for 15 years, from 1983 to 1998, making it the longest trial in modern French history, according to French historian Annette Wieviorka.41 Another factor in the resonance the Papon trial had in French society was the fact that, after his career as a senior police official in the Vichy regime, where he had actively supported the German operations and collaborated in transporting French Jews to extermination camps, Papon had been a high official in the post-war French government. He had served as the police chief of Paris after the war and later as a minister of budget under president Giscard d’Estaing.42 Papon was sentenced to 10 years in prison.43 He was released in 2002. Papon died a free man in 2007. The second trial discussed is the Barbie trial. Klaus Barbie was a German SS officer who had been head of the Gestapo in Lyon during the war. He had earned himself the nickname “The Butcher of Lyon.”44 After spending his early post-war years working for American secret agencies, Barbie moved to Bolivia. In 1983, he was arrested and extradited to France, where he was indicted for his war crimes. The trial started in 1987 and Barbie was sentenced to life in prison. He died in prison in 1991. Paul Touvier, the protagonist of the third Vichy trial, was, according to Wieviorka, only a secondary figure in the Vichy apparatus compared to Papon.45 Nonetheless, the Touvier trial had a significant impact on the French national debate on the Vichy era. Touvier was the first Frenchman to be tried and convicted for crimes against humanity.46 He had been ordered to kill seven Jewish hostages near Lyon in 1944 as retaliation for the murder of a high-ranking member of the Vichy administration. His trial began in 1994. In 1995, Touvier was sentenced to life imprisonment. Touvier expressed remorse for his deeds. He died in prison in 1996. There could have been a fourth similar trial. René Bousquet, who had also been a police chief under the Vichy regime, was accused of crimes against humanity in 1991, but shortly before his trial, in 1993, he was shot and killed. Historians have been active players in these trials as expert witnesses, as well as in the public debate that surrounded them.
The French Vichy trials proved to be an insurmountable task for the already strained relationship between law and history, according to Henry Rousso.47 Historians were asked to testify in all three cases. American historian Robert Paxton testified in both the Papon and Barbie trials. French historian Henry Rousso was asked to testify in both the Papon and Touvier trials. He refused twice. Since then, Rousso has devoted an extensive number of publications to the defence of his choice. Rousso wrote an eloquent letter to the court that presided over the Papon case.48 Rousso declared: “I refuse to be used, not for my knowledge but for my position.” Another problem, according to Rousso, was what he called The Vichy Syndrome, which became the title of a book he later published. In his book Rousso quoted Emmanuel Le Roy Ladurie on the subject. According to Le Roy Ladurie, the prosecution of Paul Touvier had turned into “the subject of enormous media attention and the vehicle for a debate on the legitimacy and activities of the Vichy regime, becoming popularly identified as a trial of the Vichy government.”49 This was very problematic for Rousso because there was an individual’s fate to consider in all three cases.50 The phenomenon is especially apparent in the case of Touvier who, despite his low rank, became a scapegoat for the crimes of the Vichy regime.51 American historian Richard Golsan described the Touvier trial as a trial for the remembrance of the Vichy regime. Touvier became “a character out of a novel.”52 The depersonalized character of the Touvier trial bears much similarity to the Eichmann trial and the first Frankfurt-Auschwitz trials.
In the case of Barbie, who had personally tortured and killed several victims, the purpose of the trial was somewhat different from a reckoning with the Vichy past. For Rousso, the Barbie trial was all about revenge on history by those who had suffered.53 Osiel quotes the French philosopher Finkielkraut, who agreed with Rousso when he described the Barbie trial as “an unpaid debt with truth.”54 Osiel went even further when he wrote that the Barbie and Touvier trials were attempts to blame someone other than the French themselves. In Barbie’s case this was easy, since he was not French but German. Touvier, in contrast, was presented as a simple tool in German hands.55 Papon, as a fully autonomous Frenchman, had been convicted for collaborating with the Germans in the Holocaust. The important difference was that Papon, in contrast to Barbie and Touvier, had not killed anyone personally, so he was sentenced to 10 years in prison, while Touvier and Barbie had to spend the remainder of their lives behind bars because they had personally committed murder.
Rousso had more reasons to refuse to serve as an expert witness. According to Rousso, some crimes against the French Resistance could no longer be prosecuted due to the statutes of limitations. Osiel agreed with him on this and argued that, due to the statutes of limitations, Barbie and Touvier could be convicted only for their crimes against humanity.56 The trials thus stressed the crimes connected to the Holocaust and reduced the significance of their other crimes, for example those against the French Resistance.57 Another problem with the Touvier trial, and an important reason for Rousso’s refusal to participate, was the changed judicial role in which historians were to testify. Instead of taking the stand as expert witnesses who gave their general opinion on the historical context to aid the trier of fact, historians were asked to serve as regular witnesses. The reason behind this choice was an attempt by the court to have historians testify about the personal actions of Touvier rather than the general background against which Touvier had committed his crimes.58 After he had testified in the Touvier trial, American historian Robert Paxton outlined the general context of the Vichy era again at the Papon trial. According to Petrovic, the experts occasionally contradicted each other, which the courts considered problematic.59 French historian Jeanneney stated in his collection of essays on the Vichy trials that the historian had to be superior to a normal witness because he had to give a context, an interpretation of a period, the logic of a time, rather than a set of facts.60 Rousso had no intention of joining such a biased enterprise and abandoning part of his freedom of speech and analysis.61
Furthermore, history itself was again on trial, according to Wijffels.62 In the Papon trial, historical theory was attacked by the defence counsel. The defence lawyers argued that history was not suitable for judging, something the expert witnesses were certainly doing. History remained “a fluid matter”, Papon himself added.63 For Wijffels, the Papon case showed that in court there was a difference between the notion of proof in law and in history. The former was clear, the latter blurred.64
The Papon trial employed historians at different stages. A first group of historians did pre-trial work, collecting all the facts that were relevant or needed extra attention. Wijffels argues that this first procedure was crucial. Since this pre-trial process is not public, the historians could remain more objective, according to Wijffels. The pre-trial phase has a major impact on the final story due to the selection and prioritization of elements in the historical narrative, which is constructed with its judicial application in mind. Consequently, it is very important for Wijffels that historians remain, particularly in that initial phase, as impartial as possible. The second stage was the testimony given by the expert historian. For Wijffels, that part did the most harm to historical truth. Wijffels argues for confining the involvement of historians to the fact-finding phase.65 We return to his proposal in the concluding chapter of this second part of the book.66
The Vichy trials received enormous media and political attention and became a major part of public memory of the Vichy past of the French nation.67 In the historiography, several historians have debated the outcome of the trials and the role of the historical discipline in them. As could be expected after his refusal to serve as an expert witness, Rousso was a very prominent participant in this debate, together with other critics such as the French historians Jeanneney and Dumoulin. All three had clear objections to the role historians had played in the Vichy trials. Jeanneney argued that there had been great confusion in the French legal system about the manner in which historians had to testify. In his book Le passé dans le prétoire: l’historien, le juge et le journaliste (The Past in Court: the Historian, the Judge, and the Journalist), Jeanneney explains that this confusion had unfavourable consequences for the historians in court. The French legal conditions precluded historians from doing any proper historical work.68 Alain Wijffels quotes the French legal scholar Henri Angevin, who wrote that the French legal system was not fit to accommodate historical testimony and that judges, furthermore, did not prevent historians from making statements about the accused and his character. Historians should come to court to give information, according to Angevin, not to judge.69 Another critical work on the application of history in the courtroom is that of Olivier Dumoulin. In his Le rôle social de l’historien: de la chaire au prétoire (The Social Role of the Historian: from the Academy to the Courtroom), Dumoulin presents a critical overview of French litigation in which historians had functioned as experts. Concerning the Vichy trials, Dumoulin was critical of the considerable influence the adversarial paradigm had exerted. The application of a common law-inspired practice had encouraged both parties to bring their own expert witnesses, a wicked novelty, according to Dumoulin, which harmed historical truth and the historical discipline.70 Wijffels agrees with Dumoulin and concludes that the Vichy trials made no significant contribution to historical knowledge.71
Rousso’s objections and refusal, and those of like-minded historians, were met with counterarguments from other prominent French historians, among them François Bédarida. In his The Social Responsibility of the Historian, Bédarida wrote: “[a]fter the radical critique of the 1960s, which destroyed the certainties, buried the utopias and disassembled the beliefs, a return of the values of humanism, morals, and meaning since the 1980s has been witnessed. To be sure, historians have their part in that recasting of intellectual life. They must continue to confront the imperatives of the present.”72 Bédarida himself had played a role in the Touvier trial.73 In Histoire, critique et responsabilité, Bédarida discusses the ideas of Paul Ricoeur on memory and history in a legal context.74 Bédarida’s text discussed the relationship of memory and history and how that tension is felt in court cases where historians serve as expert witnesses.75 Wijffels quotes Rousso in an interview in which he also acknowledges the presence of the contradiction between memory and academic history in court.76 The article Bédarida wrote refers to Ricoeur’s major work Histoire et vérité.77 Therein, Ricoeur expressed his conviction that historians were bound to find the truth and to convey these truths as a mediator: “un médiateur entre l’événement et l’histoire, comme un gardien du temps” (a mediator between the event and history, as a guardian of time).78 For Bédarida, history was about truth, “vérité”, while memory was about loyalty, “fidélité.”79 He argued that there was a clear epistemological difference between the two. For Bédarida, memory had reigned in the Vichy trials.
Petrovic concluded that the Vichy trials proved that the French courts had not been ready for historians serving as expert witnesses.80 Wijffels came to the same conclusion: in his view, French legal scholars had badly defined the role of historians in court.81 The Vichy trials of Papon, Barbie, and Touvier had succumbed to external pressure from the media and politics. This delivered a certain kind of justice, as Golsan and Rousso remarked: the Touvier trial was against forgetting the crimes committed in World War II and aimed at reaffirming the belief in democratic values.82 For historians as well as for legal scholars, the Vichy trials represented a failed interaction of law and history.83 Yet, as the following citation from the son of one of Touvier’s victims explains, not all had gone wrong with the Vichy trials: “[m]y father was not judged by anyone. He was arrested, thrown five hours later against a wall, and assassinated. … I am happy to find myself in front of a court that is democratic, engaged in an adversarial debate where everyone can speak, anything can be said, even by the accused.”84 In the end, the Vichy trials became another example of how history in court could be used for political ends. Because of their wide coverage in the French media, and the great response to the trials in French public debate, the Vichy trials led French, European, and American historians and legal scholars to think of expert witnessing by historians as a controversial practice.
6.1.4 Conclusions on the Post-War Judgment of History in Court
Almost all the authors mentioned above agree that these trials were show trials. The Eichmann trial and the French Vichy trials, especially, were grand dramatizations aimed at constructing a collective memory.85 Petrovic calls these show trials a form of judicial memory-making.86 The trials had a pedagogic agenda which was set by extralegal factors, predominantly political ones.87 Rousso also called the French trials show trials.88 Defence counsel for Barbie, Jacques Vergès, was correct, according to Osiel, when he observed that the trials were “an event.”89 For Douglas, the Eichmann trial and the French Vichy trials failed to do justice to history and the character of the Holocaust; representing the Holocaust in court was, for him, inherently problematic.90 Alain Wijffels was clearly not enthusiastic about these trials. He called the testimonies of the experts in the Holocaust trials examples of the forensic form of history. The courts had demanded facts from historians, so that judges could judge those facts, as in the Latin adage: Da mihi facta, dabo tibi ius (give me the facts, and I shall give you the law).
Health Research Report
162nd Issue Date 23 AUG 2013
Compiled By Ralph Turchiano
In This Issue:
1. Sugar is toxic to mice in ‘safe’ doses
2. DHA-enriched formula in infancy linked to positive cognitive outcomes in childhood
3. Meal Timing Can Significantly Improve Fertility in Women with Polycystic Ovaries
4. 6 months of fish oil reverses liver disease in children with intestinal failure, study shows
5. Watermelon juice relieves post-exercise muscle soreness
6. Celery, artichokes contain flavonoids that kill human pancreatic cancer cells
7. Hitting the gym may help men avoid diet-induced erectile dysfunction
Sugar is toxic to mice in ‘safe’ doses
New test hints 3 sodas daily hurt lifespan, reproduction
SALT LAKE CITY, Aug. 13, 2013 – When mice ate a diet of 25 percent extra sugar – the mouse equivalent of a healthy human diet plus three cans of soda daily – females died at twice the normal rate and males were a quarter less likely to hold territory and reproduce, according to a toxicity test developed at the University of Utah.
“Our results provide evidence that added sugar consumed at concentrations currently considered safe exerts dramatic adverse impacts on mammalian health,” the researchers say in a study set for online publication Tuesday, Aug. 13 in the journal Nature Communications.
“This demonstrates the adverse effects of added sugars at human-relevant levels,” says University of Utah biology professor Wayne Potts, the study’s senior author. He says previous studies using other tests fed mice large doses of sugar disproportionate to the amount people consume in sweetened beverages, baked goods and candy.
“I have reduced refined sugar intake and encouraged my family to do the same,” he adds, noting that the new test showed that the 25 percent “added-sugar” diet – 12.5 percent dextrose (the industrial name for glucose) and 12.5 percent fructose – was just as harmful to the health of mice as being the inbred offspring of first cousins.
Even though the mice didn’t become obese and showed few metabolic symptoms, the sensitive test showed “they died more often and tended to have fewer babies,” says the study’s first author, James Ruff, who recently earned his Ph.D. at the University of Utah. “We have shown that levels of sugar that people typically consume – and that are considered safe by regulatory agencies – impair the health of mice.”
The new toxicity test placed groups of mice in room-sized pens nicknamed “mouse barns” with multiple nest boxes – a much more realistic environment than small cages, allowing the mice to compete more naturally for mates and desirable territories, and thereby revealing subtle toxic effects on their performance, Potts says.
“This is a sensitive test for health and vigor declines,” he says, noting that in a previous study, he used the same test to show how inbreeding hurt the health of mice.
“One advantage of this assay is we get the same readout no matter if we are testing inbreeding or added sugar,” Potts says. “The mice tell us the level of health degradation is almost identical” from added-sugar and from cousin-level inbreeding.
The study says the need for a sensitive toxicity test exists not only for components of our diet, but “is particularly strong for both pharmaceutical science, where 73 percent of drugs that pass preclinical trials fail due to safety concerns, and for toxicology, where shockingly few compounds receive critical or long-term toxicity testing.”
The study was funded by the National Institutes of Health and the National Science Foundation.
A Mouse Diet Equal to What a Quarter of Americans Eat
The experimental diet in the study provided 25 percent of calories from added sugar – half fructose and half glucose – no matter how many calories the mice ate. Both high-fructose corn syrup and table sugar (sucrose) are half fructose and half glucose.
Potts says the National Research Council recommends that for people, no more than 25 percent of calories should be from “added sugar,” which means “they don’t count what’s naturally in an apple, banana, potato or other nonprocessed food. … The dose we selected is consumed by 13 percent to 25 percent of Americans.”
The diet fed to the mice with the 25 percent sugar-added diet is equivalent to the diet of a person who drinks three cans daily of sweetened soda pop “plus a perfectly healthy, no-sugar-added diet,” Potts says.
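To make that equivalence concrete, here is a rough back-of-the-envelope calculation; the 2,000 kcal daily intake and the roughly 150 kcal of added sugar per 12-ounce can are assumed reference values for illustration, not figures taken from the study.

```python
# Rough arithmetic behind the "healthy diet plus three sodas" framing.
# Assumed reference values (not from the study itself).
DAILY_KCAL = 2000          # assumed typical adult daily intake
SUGAR_KCAL_PER_CAN = 150   # assumed added-sugar calories per 12-oz can

added_sugar_kcal = 0.25 * DAILY_KCAL                     # the 25% "added sugar" dose
cans_equivalent = added_sugar_kcal / SUGAR_KCAL_PER_CAN  # how many sodas that represents

print(f"25% of {DAILY_KCAL} kcal = {added_sugar_kcal:.0f} kcal of added sugar per day")
print(f"which is roughly {cans_equivalent:.1f} cans of soda")
```

On these assumptions the dose works out to about 500 kcal, or a little over three cans a day, which is consistent with the comparison Potts draws.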
Ruff notes that sugar consumption in the American diet has increased 50 percent since the 1970s, accompanied by a dramatic increase in metabolic diseases such as diabetes, obesity, fatty liver and cardiovascular disease.
The researchers used a mouse supply company that makes specialized diets for research. Chow for the mice was a highly nutritious wheat-corn-soybean mix with vitamins and minerals. For experimental mice, glucose and fructose amounting to 25 percent of calories was included in the chow. For control mice, corn starch was used as a carbohydrate in place of the added sugars.
House Mice Behaving Naturally
Mice often live in homes with people, so “mice happen to be an excellent mammal to model human dietary issues because they’ve been living on the same diet as we have ever since the agricultural revolution 10,000 years ago,” Potts says.
Mice typically used in labs come from strains bred in captivity for decades. They lack the territoriality shown by wild mice. So the study used mice descended from wild house mice that were “outbred” to prevent inbreeding typical of lab mice.
“They are highly competitive over food, nesting sites and territories,” he says. “This competition demands high performance from their bodies, so if there is a defect in any physiological systems, they tend to do more poorly during high competition.”
So Potts’ new test – named the Organismal Performance Assay, or OPA – uses mice “in a more natural ecological context” more likely to reveal toxic effects of whatever is being tested, he says.
“When you look at a mouse in a cage, it’s like trying to evaluate the performance of a car by turning it on in a garage,” Ruff says. “If it doesn’t turn on, you’ve got a problem. But just because it does turn on, doesn’t mean you don’t have a problem. To really test it, you take it out on the road.”
A big room was divided into 11 “mouse barns” used for the new test. Six were used in the study. Each “barn” was a 377-square-foot enclosure ringed by 3-foot walls.
Each mouse barn was divided by wire mesh fencing into six sections or “territories,” but the mice could climb easily over the mesh. Within each of the six sections was a nest box, a feeding station and drinking water.
Four of the six sections in each barn were “optimal,” more desirable territories because the nest boxes were opaque plastic storage bins, which mice entered via 2-inch holes at the bottom. Each bin had four nesting cages in it, and an enclosed feeder.
The two other sections were “suboptimal” territories with open planter trays instead of enclosed bins. Female mice had to nest communally in the trays.
Running the Experiment
The mice in the experiment began with 156 “founders” that were bred in Potts’ colony, weaned at four weeks, and then assigned either to the added-sugar diet or the control diet, with half the males and half the females on each diet.
The mice stayed in cages with siblings of the same sex (to prevent reproduction) for 26 weeks while they were fed these diets. Then the mice were placed in the mouse barns to live, compete with each other and breed for 32 more weeks. They all received the same added-sugar diet while in the mouse barns, so the study only tested for differences caused by the mice eating different diets for the previous 26 weeks.
The founder mice had implanted microchips, like those put in pets. Microchip readers were placed near the feeding stations to record which mice fed where and for how long. A male was considered dominant if he made more than 75 percent of the visits by males to a given feeding station. In reality, the dominant males made almost 100 percent of male visits to the feeder in the desirable territory they dominated.
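As an illustration of how a dominance criterion like this could be applied to the microchip-reader logs, here is a minimal sketch; the data layout and the function name are hypothetical and not the researchers' actual analysis code.

```python
from collections import Counter

def dominant_males(visits, threshold=0.75):
    """Return the dominant male (if any) for each feeding station.

    `visits` is an iterable of (station_id, male_id) pairs logged by the
    microchip readers; a male counts as dominant at a station when he
    accounts for more than `threshold` of all male visits recorded there.
    """
    per_station = {}
    for station, male in visits:
        per_station.setdefault(station, Counter())[male] += 1

    dominants = {}
    for station, counts in per_station.items():
        male, n = counts.most_common(1)[0]
        if n / sum(counts.values()) > threshold:
            dominants[station] = male
    return dominants

# Toy log: M1 makes nearly every male visit to station A; station B is contested.
log = [("A", "M1")] * 19 + [("A", "M2")] + [("B", "M3")] * 6 + [("B", "M4")] * 5
print(dominant_males(log))  # {'A': 'M1'} -- no male clears the threshold at station B
```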
With the 156 founder mice (58 male, 98 female), the researchers ran the experiment six times, with an average of 26 mice per experiment: eight to 10 males (competing for six territories, four desirable and two suboptimal) and 14 to 18 females.
The Findings: Added Sugar Impairs Mouse Lifespan and Reproduction
- After 32 weeks in mouse barns, 35 percent of the females fed extra sugar died, twice the 17 percent death rate for female control mice. There was no difference in the 55 percent death among males who did and did not get added sugar. Ruff says males have much higher death rates than females in natural settings because they compete for territory, “but there’s no relation to sugar.”
- Males on the added-sugar diet acquired and held 26 percent fewer territories than males on the control diet: control males occupied 47 percent of the territories while sugar-added mice controlled less than 36 percent. Male mice shared the remaining 17 percent of territories.
- Males on the added-sugar diet produced 25 percent fewer offspring than control males, as determined by genetic analysis of the offspring. The sugar-added females had higher reproduction rates than controls initially – likely because the sugar gave them extra energy to handle the burden of pregnancy – but then had lower reproductive rates as the study progressed, partly because they had higher death rates linked to sugar.
The researchers studied another group of mice for metabolic changes. The only differences were minor: cholesterol was elevated in sugar-fed mice, and the ability to clear glucose from the blood was impaired in female sugar-fed mice. The study found no difference between mice on a regular diet and mice with the 25 percent sugar-added diet when it came to obesity, fasting insulin levels, fasting glucose or fasting triglycerides.
“Our test shows an adverse outcome from the added-sugar diet that couldn’t be detected by conventional tests,” Potts says.
Human-made toxic substances in the environment potentially affect all of us, and more are continually discovered, Potts says.
“You have to ask why we didn’t discover them 20 years ago,” he adds. “The answer is that until now, we haven’t had a functional, broad and sensitive test to screen the potential toxic substances that are being released into the environment or in our drugs or our food supply.”
DHA-enriched formula in infancy linked to positive cognitive outcomes in childhood
LAWRENCE – University of Kansas scientists have found that infants who were fed formula enriched with long-chain polyunsaturated fatty acids (LCPUFA) from birth to 12 months scored significantly better than a control group on several measures of intelligence conducted between the ages of three to six years.
Specifically, the children showed accelerated development on detailed tasks involving pattern discrimination, rule-learning, and inhibition between three and five years of age, as well as better performance on two widely used standardized tests of intelligence: the Peabody Picture Vocabulary Test at age five and the Wechsler Preschool and Primary Scale of Intelligence at age six.
“These results support the contention that studies of nutrition and cognition should include more comprehensive and sensitive assessments that are administered multiple times through early childhood,” said John Colombo, study director and KU professor of psychology.
The results of LCPUFA supplementation studies have been mixed according to Colombo, a neuroscientist who specializes in the measurement of early neurocognitive development, but many of those studies have relied mainly on children’s performance on the Bayley Scales of Infant Development at 18 months.
In the randomized, double-blind study, 81 infants were fed one of four formulas from birth to 12 months; three with varying levels of two LCPUFAs (DHA and ARA) and one formula with no LCPUFA. Beginning at 18 months, the children were tested every six months until six years of age on age-appropriate standardized and specific cognitive tests.
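For clarity, the assessment schedule described above works out to ten test ages per child; the list below is simple arithmetic on the stated schedule, not additional data from the study.

```python
# Children were tested every six months from 18 months to six years (72 months).
assessment_ages_months = list(range(18, 73, 6))
print(assessment_ages_months)                 # [18, 24, 30, 36, 42, 48, 54, 60, 66, 72]
print(len(assessment_ages_months), "assessments per child")
```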
At 18 months the children did not perform any better on standardized tests of performance and intelligence, but by age three study directors Colombo and Susan E. Carlson, A. J. Rice Professor of Dietetics and Nutrition at KUMC, began to see significant differences in the performance of children who were fed the enriched formulas on finer-grained, laboratory-based measures of several aspects of cognitive function.
DHA or docosahexaenoic acid is an essential long-chain fatty acid that affects brain and eye development, and babies derive it from their mothers before birth and up to age two. But the American diet is often deficient in DHA sources such as fish.
ARA or arachidonic acid is another LCPUFA that is present in breast milk and commercial formula.
The study was designed to examine the effects of postnatal DHA at levels that have been found to vary across the world, said study co-director Carlson.
The results on the children’s development from the first 12 months of this study were published in Pediatric Research in 2011, and showed improved attention and lower heart rate in infants supplemented with any level of LCPUFA. Colombo and Carlson’s earlier work and collaborations influenced infant formula manufacturers to begin adding DHA in 2001.
The study was published ahead of print in the June 2013 issue of the American Journal of Clinical Nutrition.
Meal Timing Can Significantly Improve Fertility in Women with Polycystic Ovaries
Tuesday, August 13, 2013
Managing insulin levels through meal timing boosts ovulation and decreases testosterone, says TAU researcher
Polycystic Ovarian Syndrome (PCOS), a common disorder that impairs fertility by impacting menstruation, ovulation, hormones, and more, is closely related to insulin levels. Women with the disorder are typically “insulin resistant” — their bodies produce an overabundance of insulin to deliver glucose from the blood into the muscles. The excess makes its way to the ovaries, where it stimulates the production of testosterone, thereby impairing fertility.
Now Prof. Daniela Jakubowicz of Tel Aviv University‘s Sackler Faculty of Medicine and the Diabetes Unit at Wolfson Medical Center has found a natural way to help women of normal weight who suffer from PCOS manage their glucose and insulin levels to improve overall fertility. And she says it’s all in the timing.
The goal of her maintenance meal plan, based on the body’s 24 hour metabolic cycle, is not weight loss but insulin management. Women with PCOS who increased their calorie intake at breakfast, including high protein and carbohydrate content, and reduced their calorie intake through the rest of the day, saw a reduction in insulin resistance. This led to lower levels of testosterone and dramatic increase in the ovulation frequency — measures that have a direct impact on fertility, notes Prof. Jakubowicz.
The research has been published in Clinical Science and was recently presented at the Endocrine Society’s annual meeting in June. It was conducted in collaboration with Dr. Julio Wainstein of TAU and Wolfson Medical Center and Dr. Maayan Barnea and Prof. Oren Froy of the Hebrew University of Jerusalem.
Managing insulin to increase ovulation
Many of the treatment options for PCOS are exclusively for obese women, Prof. Jakubowicz explains. Doctors often suggest weight loss to manage insulin levels, or prescribe medications that are used to improve the insulin levels of overweight patients. But many women who suffer from PCOS maintain a normal weight — and they are looking for ways to improve their chances of conceiving and giving birth to a healthy baby.
In a recent study, Prof. Jakubowicz and her fellow researchers confirmed that a low-calorie weight-loss plan focusing on larger breakfasts and smaller dinners also lowers insulin, glucose, and triglycerides levels. This finding inspired them to test whether a similar meal plan could be an effective therapeutic option for women with PCOS.
Sixty women suffering from PCOS with a normal body mass index (BMI) were randomly assigned to one of two 1,800 calorie maintenance diets with identical foods. The first group ate a 983 calorie breakfast, a 645 calorie lunch, and a 190 calorie dinner. The second group had a 190 calorie breakfast, a 645 calorie lunch, and 983 calorie dinner. After 90 days, the researchers tested participants in each group for insulin, glucose, and testosterone levels as well as ovulation and menstruation.
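A quick arithmetic check of the two meal plans helps show what was actually varied; the calorie figures are those reported above, while the percentage split is my own calculation, not a number from the paper.

```python
# How the roughly 1,800 kcal maintenance plan was distributed across meals.
big_breakfast = {"breakfast": 983, "lunch": 645, "dinner": 190}
big_dinner    = {"breakfast": 190, "lunch": 645, "dinner": 983}

for name, plan in (("big breakfast", big_breakfast), ("big dinner", big_dinner)):
    total = sum(plan.values())
    split = ", ".join(f"{meal} {kcal / total:.0%}" for meal, kcal in plan.items())
    print(f"{name}: {total} kcal/day ({split})")
```

Both plans total the same calories with identical foods; only the timing of the large meal differs, which is the variable the study isolates.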
As expected, neither group experienced a change in BMI, but other measures differed dramatically. While participants in the “big dinner” group maintained consistently high levels of insulin and testosterone throughout the study, those in the “big breakfast” group experienced a 56 percent decrease in insulin resistance and a 50 percent decrease in testosterone. This reduction of insulin and testosterone levels led to a 50 percent rise in ovulation rate, indicated by a rise in progesterone, by the end of the study.
A natural therapy
According to Prof. Jakubowicz, these results suggest that meal timing — specifically a meal plan that calls for the majority of daily calories to be consumed at breakfast and a reduction of calories throughout the day — could help women with PCOS manage their condition naturally, providing new hope for those who have found no solutions to their fertility issues, she says. PCOS not only inhibits natural fertilization, but impacts the effectiveness of in vitro fertilization treatments and increases the rate of miscarriage.
And beyond matters of fertility, this method could mitigate other symptoms associated with the disorder, including unwanted body hair, oily hair, hair loss, and acne. Moreover, it could protect against developing type-2 diabetes.
6 months of fish oil reverses liver disease in children with intestinal failure, study shows
Children who suffer from intestinal failure, most often caused by a shortened or dysfunctional bowel, are unable to consume food orally. Instead, a nutritional cocktail of sugar, protein and fat made from soybean oil is injected through a small tube in their vein.
For these children, the intravenous nutrition serves as a bridge to bowel adaptation, a process by which the intestine recovers and improves its capacity to absorb nutrition. But the soybean oil, which provides essential fatty acids and calories, has been associated with a potentially lethal complication known as intestinal failure–associated liver disease, which may require a liver and/or intestinal transplant. Such a transplant can prevent death, but the five-year post-transplant survival rate is only 50 percent.
Previous studies have shown that replacing soybean oil with fish oil in intravenous nutrition can reverse intestinal failure–associated liver disease. However, the necessary duration of fish oil treatment had not been established in medical studies.
Now, a clinical trial conducted at the Children’s Discovery and Innovation Institute at Mattel Children’s Hospital UCLA has found that, compared with soybean oil, a limited duration (24 weeks) of fish oil is safe and effective in reversing liver disease in children with intestinal failure who require intravenous nutrition. The researchers believe that fish oil may also decrease the need for liver and/or intestinal transplants — and mortality — associated with this disease.
The researchers’ study, “Six Months of Intravenous Fish Oil Reverses Pediatric Intestinal Failure Associated Liver Disease,” is published online in the Journal of Parenteral and Enteral Nutrition.
“With this particular study, we set out to determine if a finite period of six months of intravenous fish oil could safely reverse liver damage in these children, and we have had some promising results,” said lead author Dr. Kara Calkins, an assistant professor in the department of pediatrics in the division of neonatology and developmental biology at UCLA. “But because intravenous fish oil is not yet approved by the Food and Drug Administration and is much more costly than soybean oil, it is typically not covered by insurance. As a result, this oil is considered experimental and is currently available only under special protocols. If it proves safe and effective for patients, we hope it would eventually be available for wider use.”
For the study, intravenous soybean oil was replaced with intravenous fish oil in 10 patients between the ages of 2 weeks and 18 years who had advanced intestinal failure–associated liver disease and who were at high risk for death and/or transplant. The researchers compared these subjects with 20 historical controls who had received soybean oil.
Results showed that the children receiving fish oil had a much higher rate of reversal of liver disease than those who received the standard soybean oil. In fact, after 17 weeks of fish oil, nearly 80 percent of patients experienced a reversal of their liver disease, while only 5 percent of the soybean patients saw a reversal.
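To give a sense of how two small groups like these might be compared statistically, here is an illustrative sketch using a Fisher's exact test; the counts (8 of 10 versus 1 of 20) are approximated from the percentages above, and this is not the analysis the study authors performed.

```python
# Approximate reversal counts reconstructed from the reported percentages.
from scipy.stats import fisher_exact

table = [
    [8, 2],    # fish oil: reversed, not reversed (about 80% of 10 patients)
    [1, 19],   # soybean oil: reversed, not reversed (5% of 20 historical controls)
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio ~ {odds_ratio:.1f}, p ~ {p_value:.4f}")
```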
The next phase of research will involve following children for up to five years after they stop fish oil to determine if their liver disease returns and if transplant rates are truly decreased, the study authors said.
“We are also trying to better understand how fish oil reverses this disease by investigating changes in proteins and genes in the blood and liver,” Calkins said. “These studies will provide the scientific and medical community with a better understanding of this disease and how intravenous fish oil works.”
For Isabella Piscione, who was one of the first patients at UCLA to receive the fish oil treatment under compassionate use, her outcome with the treatment paved the way for researchers to establish the six-month protocol. Because of multiple surgeries due to an obstruction in her intestines, Isabella was left with only 10 centimeters of intestine. She depended on intravenous nutrition for survival, which unfortunately resulted in liver damage.
When Isabella started the fish oil treatment, she was just over 6 months old and was listed for a liver and bowel transplant. Within a month of starting the treatment, her condition started to improve. By six months, her liver had healed, and she no longer needed a transplant.
“We cried tears of joy each week that we saw her getting better and better,” said her father, Laureano Piscione. “She is a success story.”
Watermelon juice relieves post-exercise muscle soreness
Watermelon juice’s reputation among athletes is getting scientific support in a new study, which found that juice from the summer favorite fruit can relieve post-exercise muscle soreness. The report in ACS’ Journal of Agricultural and Food Chemistry attributes watermelon’s effects to the amino acid L-citrulline.
Encarna Aguayo and colleagues cite past research on watermelon juice’s antioxidant properties and its potential to increase muscle protein and enhance athletic performance. But scientists had yet to explore the effectiveness of watermelon juice drinks enriched in L-citrulline. Aguayo’s team set out to fill that gap in knowledge.
They tested natural watermelon juice, watermelon juice enriched in L-citrulline and a control drink containing no L-citrulline on volunteers an hour before exercise. Both the natural juice and the enriched juice relieved muscle soreness in the volunteers. L-citrulline in the natural juice (unpasteurized), however, seemed to be more bioavailable — in a form the body could better use, the study found.
Celery, artichokes contain flavonoids that kill human pancreatic cancer cells
URBANA, Ill. – Celery, artichokes, and herbs, especially Mexican oregano, all contain apigenin and luteolin, flavonoids that kill human pancreatic cancer cells in the lab by inhibiting an important enzyme, according to two new University of Illinois studies.
“Apigenin alone induced cell death in two aggressive human pancreatic cancer cell lines. But we received the best results when we pre-treated cancer cells with apigenin for 24 hours, then applied the chemotherapeutic drug gemcitabine for 36 hours,” said Elvira de Mejia, a U of I professor of food chemistry and food toxicology.
The trick seemed to be using the flavonoids as a pre-treatment instead of applying them and the chemotherapeutic drug simultaneously, said Jodee Johnson, a doctoral student in de Mejia’s lab who has since graduated.
“Even though the topic is still controversial, our study indicated that taking antioxidant supplements on the same day as chemotherapeutic drugs may negate the effect of those drugs,” she said.
“That happens because flavonoids can act as antioxidants. One of the ways that chemotherapeutic drugs kill cells is based on their pro-oxidant activity, meaning that flavonoids and chemotherapeutic drugs may compete with each other when they’re introduced at the same time,” she explained.
Pancreatic cancer is a very aggressive cancer, and there are few early symptoms, meaning that the disease is often not found before it has spread. Ultimately the goal is to develop a cure, but prolonging the lives of patients would be a significant development, Johnson added.
It is the fourth leading cause of cancer-related deaths, with a five-year survival rate of only 6 percent, she said.
The scientists found that apigenin inhibited an enzyme called glycogen synthase kinase-3β (GSK-3β), which led to a decrease in the production of anti-apoptotic genes in the pancreatic cancer cells. Apoptosis means that the cancer cell self-destructs because its DNA has been damaged.
In one of the cancer cell lines, the percentage of cells undergoing apoptosis went from 8.4 percent in cells that had not been treated with the flavonoid to 43.8 percent in cells that had been treated with a 50-micromolar dose. In this case, no chemotherapy drug had been added.
Treatment with the flavonoid also modified gene expression. “Certain genes associated with pro-inflammatory cytokines were highly upregulated,” de Mejia said.
According to Johnson, the scientists’ in vitro study in Molecular Nutrition and Food Research is the first to show that apigenin treatment can lead to an increase in interleukin 17s in pancreatic cells, showing its potential relevance in anti-pancreatic cancer activity.
Pancreatic cancer patients would probably not be able to eat enough flavonoid-rich foods to raise blood plasma levels of the flavonoid to an effective level. But scientists could design drugs that would achieve those concentrations, de Mejia said.
And prevention of this frightening disease is another story. “If you eat a lot of fruits and vegetables throughout your life, you’ll have chronic exposure to these bioactive flavonoids, which would certainly help to reduce the risk of cancer,” she noted.
Hitting the gym may help men avoid diet-induced erectile dysfunction
Bethesda, Md. (Aug. 20, 2013)—Obesity continues to plague the U.S. and now extends to much of the rest of the world. One probable reason for this growing health problem is more people worldwide eating the so-called Western diet, which contains high levels of saturated fat, omega-6 polyunsaturated fatty acids (the type of fat found in vegetable oil), and added sugar. Researchers have long known that this pattern of consumption, as well as the weight gain it often causes, contributes to a wide range of other health problems including erectile dysfunction and heart disease. Other than changing eating patterns, researchers haven’t discovered an effective way to avoid these problems.
Searching for a solution, Christopher Wingard and his colleagues at East Carolina University used rats put on a “junk food” diet to test the effects of aerobic exercise. They found that exercise effectively improved both erectile dysfunction and the function of vessels that supply blood to the heart.
The article is entitled “Exercise Prevents Western-Diet Associated Erectile Dysfunction and Coronary Artery Endothelial Dysfunction: Response to Acute Apocynin and Sepiapterin Treatment.” It appears in the online edition of the American Journal of Physiology: Regulatory, Integrative, and Comparative Physiology, published by the American Physiological Society. The article is online at http://bit.ly/13jYpED.
For 12 weeks, the researchers fed a group of rats chow that reflected the Western diet, high in sugar and with nearly half its calories from fat. Another group of rats ate a healthy standard rat chow instead. Half of the animals in each group exercised five days a week, running intervals on a treadmill.
At the end of the 12 weeks, anesthetized animals’ erectile function was assessed by electrically stimulating the cavernosal nerve, which causes an increase in penile blood flow and produces an erection. The researchers also examined the rats’ coronary arteries to see how they too responded to agents that would relax them and maintain blood flow to the heart, an indicator of heart health.
The findings showed that rats who ate the Western diet but stayed sedentary developed erectile dysfunction and poorly relaxing coronary arteries. However, those who ate the diet but exercised were able to stave off these problems.
Animals who ate the healthy chow were largely able to avoid both erectile dysfunction and coronary artery dysfunction.
Importance of the Findings
These findings may suggest that exercise could be a potent tool for fighting the adverse effects of the Western diet as long as the subjects remained very active over the course of consuming this type of diet, the authors say. Whether exercise would still be effective in reversing any vascular problems after a lifetime of consuming a Western diet is still unknown.
“The finding that exercise prevents Western diet-associated erectile dysfunction and coronary artery disease progression translates to an intensively active lifestyle throughout the duration of the ‘junk food’ diet,” the authors say. “It remains to be seen if a moderately active lifestyle, or an active lifestyle initiated after a prolonged duration of a sedentary lifestyle combined with a ‘junk food’ diet is effective at reversing functional impairment.”
These reports are done with the appreciation of all the Doctors, Scientist, and other Medical Researchers who sacrificed their time and effort. In order to give people the ability to empower themselves. Without base aspirations of fame, or fortune. Just honorable people, doing honorable things. | <urn:uuid:50e22d16-d365-4546-a6b4-8fffb43d79b7> | CC-MAIN-2022-33 | https://clinicalnews.org/2013/08/24/162nd-health-research-report-23-aug-2013/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00293.warc.gz | en | 0.951944 | 6,512 | 2.515625 | 3 |
Here it is a new guide, to collect and organize all the knowledge that you need to create your programming language from scratch.
Creating a programming language is one of the most fascinating challenge you can dream of as a developer.
The problem is that there are a lot of moving parts, a lot of things to do right and it is difficult to find a well detailed map, to show you the way. Sure, you can find a tutorial on writing half a parser there, an half-baked list of advices on language design, an example of a naive interpreter. To find those things you will need to spend hours navigating forums and following links.
We thought it was the case to save you some time by collecting relevant resources, evaluate them and organize them. So you can spend time using good resources, not looking for them.
We organized the resources around the three stages in the creation of a programming language: design, parsing, and execution.
Designing the Language
When creating a programming language you need to take ideas and transform them in decisions. This is what you do during the design phase.
Before You Start…
Some good resources to beef up your culture on language design.
- Designing the next programming language? Understand how people learn!, a few considerations on how to design a programming language that it’s easy to understand.
- Five Questions About Language Design, (some good and some random) notes on programming language design by Paul Graham.
- Programming Paradigms for Dummies: What Every Programmer Should Know (PDF), this is actually a chapter of a book and it’s not really for dummies, unless they are the kind of dummies with a degree in Computer Science. Apart from that, it’s a great overview of the different programming paradigms, which can be useful to help you understand where your language will fit.
- Design Concepts in Programming Languages, if you want to make deliberate choices in the creation of your programming language, this is the book you need. Otherwise, if you don’t already have the necessary theoretical background, you risk doing things the way everybody else does them. It’s also useful to develop a general framework to understand how the different programming languages behave and why.
- Practical Foundations for Programming Languages, this is, for the most part, a book about studying and classifying programming languages. But by understanding the different options available it can also be used to guide the implementation of your programming language.
- Programming Language Pragmatics, 4th Edition, this is the most comprehensive book to understand contemporary programming languages. It discusses different aspects, of everything from C# to OCaml, and even the different kinds of programming languages such as functional and logical ones. It also covers the several steps and parts of the implementation, such as an intermediate language, linking, virtual machines, etc.
- Structure and Interpretation of Computer Programs, Second Edition, an introduction to computer science for people that already have a degree in it. A book widely praised by programmers, including Paul Graham directly on the Amazon Page, that helps you developing a new way to think about programming language. It’s quite abstract and examples are proposed in Scheme. It also covers many different aspect of programming languages including advanced topics like garbage collection.
Long discussions and infinite disputes are fought around type systems. Whatever choices you end up making it make sense to know the different positions.
- These are two good introductory articles on the subject of type systems. The first discuss the dichotomy Static/Dynamic and the second one dive into Introspection.
- What To Know Before Debating Type Systems, if you already know the basics of type systems this article is for you. It will permit you to understand them better by going into definitions and details.
- Type Systems (PDF), a paper on the formalization of type systems that also introduces more precise definitions of the different type systems.
- Types and Programming Languages, a comprehensive book on understanding type systems. It will impact your ability to design programming languages and compilers. It has a strong theoretical support, but it also explains the practical importance of individual concepts.
- Functional programming and type systems, an interesting university course on type systems for functional programming. It is used in a well known French university. There are also notes and presentation material available. It is as advanced as you would expect.
- Type Systems for Programming Language, is a simpler course on type system for (functional) programming languages.
Parsing transforms the concrete syntax in a form that is more easily manageable by computers. This usually means transforming text written by humans in a more useful representation of the source code, an Abstract Syntax Tree.
There are usually two components in parsing: a lexical analyzer and the proper parser. Lexers, which are also known as tokenizers or scanners, transform the individual characters in tokens, the atom of meaning. Parsers instead organize the tokens in the proper Abstract Syntax Tree for the program. But since they are usually meant to work together you may use a single tool that does both the tasks.
- Flex, as a lexer generator and (Berkeley) Yacc or Bison, for the generation of the proper parser, are the venerable choices to generate a complete parser. They are a few decades old and they are still maintained as open source software. They are written in and thought for C/C++. They still works, but they have limitations in features and support for other languages.
- Your own lexer and parser. If you need the best performance and you can create your own parser. You just need to have the necessary computer science knowledge.
- Flex and Bison tutorial, a good introduction to the two tools with bonus tips.
- Lex and Yacc Tutorial, at 40 pages this is the ideal starting point to learn how to put together lex and yacc in a few hours.
- Video Tutorial on lex/yacc in two parts, in an hour of video you can learn the basics of using lex and yacc.
- ANTLR Mega Tutorial, the renown and beloved tutorial that explains everything you need to know about ANTLR, with bonus tips and tricks and even resources to know more.
- lex & yacc, despite being a book written in 1992 it’s still the most recommended book on the subject. Some people say because the lack of competition, others because it is good enough.
- flex & bison: Text Processing Tools, the best book on the subject written in this millennium.
- The Definitive ANTLR 4 Reference, written by the main author of the tool this is really the definitive book on ANTLR 4. It explains all of its secrets and it’s also a good introduction about how the whole parsing thing works.
- Parsing Techniques, 2nd edition, a comprehensive, advanced and costly book to know more than you possibly need about parsing.
To implement your programming language, that is to say to actually making something happens, you can build one of two things: a compiler or an interpreter. You could also build both of them if you want. Here you can find a good overview if you need it: Compiled and Interpreted Languages.
The resources here are dedicated to explaining how compilers and/or interpreters are built, but for practical reasons often they also explain the basics of creating lexers and parsers.
A compiler transforms the original code into something else, usually machine code, but it could also be simply any lower level language, such as C. In the latter case some people prefer to use the term transpiler.
- LLVM, a collection of modular and reusable compiler and toolchain technologies used to create compilers.
- CLR, is the virtual machine part of the .NET technologies, that permits to execute different languages transformed in a common intermediate language.
- JVM, the Java Virtual Machine that powers the Java execution.
Articles & Tutorials
- Building Domain Specific Languages on the CLR, an article on how to build internal DSL on the CLR. It’s slightly outdated, since it’s from 2008, but it’s still a good presentation on the subject.
- The digital issue of MSDN Magazine for February 2008 (CHM format), contains an article on how to Create a Language Compiler for the .NET Framework. It’s still a competent overview of the whole process.
- Create a working compiler with the LLVM framework, Part 1 and Part 2, a two-part series of articles on creating a custom compiler by IBM, from 2012 and thus slightly outdated.
- A few series of tutorials froms the LLVM Documentation, this is three great linked series of tutorial on how to implement a language, called Kaleidoscope, with LLVM. The only problem is that some parts are not always up-to-date.
- My First LLVM Compiler, a short and gentle introduction to the topic of building a compiler with LLVM.
- Creating an LLVM Backend for the Cpu0 Architecture, a whopping 600-pages tutorial to learn how to create a LLVM backend, also available in PDF or ePub. The content is great, but the English is lacking. On the positive side, if you are a student, they feel your pain of transforming theoretical knowledge into practical applications, and the book was made for you.
- A Nanopass Framework for Compiler Education, a paper that present a framework to teach the creation of a compiler in a simpler way, transforming the traditional monolithic approach in a long series of simple transformations. It’s an interesting read if you already have some theoretical background in computer science.
- An Incremental Approach to Compiler Construction (PDF), a paper that it’s also a tutorial that develops a basic Scheme compiler with an easier to learn approach.
- Compilers: Principles, Techniques, and Tools, 2nd Edition, this is the widely known Dragon book (because of the cover) in the 2nd edition (purple dragon). There is a paperback edition, which probably costs less but it has no dragon on it, so you cannot buy that. It is a theoretical book, so don’t expect the techniques to actually include a lot of reusable code.
- Modern Compiler Implementation in ML, this is known as a the Tiger book and a competitor of the Dragon book. It is a book that teaches the structure and the elements of a compiler in detail. It’s a theoretical book, although it explains the concept with code. There are other versions of the same book written using Java and C, but it’s widely agreed that the ML one is the best.
- Engineering a Compiler, 2nd edition, it is another compiler book with a theoretical approach, but that it covers a more modern approach and it is more readable. It’s also more dedicated to the optimization of the compiler. So if you need a theoretical foundation and an engineering approach this is the best book to get.
An interpreter directly executes the language without transforming it in another form.
Articles & Tutorials
- A simple interpreter from scratch in Python, a four-parts series of articles on how to create an interpreter in Python, simple yet good.
- Let’s Build A Simple Interpreter, a twelve-parts series that explains how to create a interpreter for a subset of Pascal. The source code is in Python, but it has the necessary amount of theory to apply to another language. It also has a lot of funny images.
- Writing An Interpreter In Go, despite the title it actually shows everything from parsing to creating an interpreter. It’s contemporary book both in the sense that is recent (a few months old), and it is a short one with a learn-by-doing attitude full of code, testing and without 3-rd party libraries. We have interviewed the author, Thorsten Ball.
- Crafting Interpreters, a work-in-progress and free book that already has good reviews. It is focused on making interpreters that works well, and in fact it will builds two of them. It plan to have just the right amount of theory to be able to fit in at a party of programming language creators.
This are resources that cover a wide range of the process of creating a programming language. They may be comprehensive or just give the general overview.
In this section we include tools that cover the whole spectrum of building a programming language and that are usually used as standalone tools.
- Xtext, is a framework part of several related technologies to develop programming languages and especially Domain Specific Languages. It allows you to build everything from the parser, to the editor, to validation rules. You can use it to build great IDE support for your language. It simplifies the whole language building process by reusing and linking existing technologies under the hood, such as the ANTLR parser generator.
- JetBrains MPS, is a projectional language workbench. Projectional means that the Abstract Syntax Tree is saved on disk and a projection is presented to the user. The projection could be text-like, or be a table or diagram or anything else you can imagine. One side effect of this is that you will not need to do any parsing, because it is not necessary. The term Language Workbench indicates that Jetbrains MPS is a whole system of technologies created to help you create your own programming language: everything from the language itself to IDE and supporting tools designed for your language. You can use it to build every kind of language, but the possibility and need to create everything makes it ideal to create Domain Specific Languages that are used for specific purposes, by specific audiences.
- Racket, is described by its authors as “a general-purpose programming language as well as the world’s first ecosystem for developing and deploying new languages”. It’s a pedagogical tool developed with practical ambitions that has even a manifesto. It is a language made to create other languages that has everything: from libraries to develop GUI applications to an IDE and the tools to develop logic languages. It’s part of the Lisp family of languages, and this tells everything you need to know: it’s all or nothing and always the Lisp-way.
- Create a programming language for the JVM: getting started, an overview of how and why to create a language for the JVM.
- An answer to How to write a very basic compiler, a good answer to the question that gives an overview of the steps needed and the options available to perform the task of building a compiler.
- Creating Languages in Racket, a great overview and presentation of Racket from the ACM Journal, with code.
- A Tractable Scheme Implementation (PDF), a paper discussing a Scheme implementation that focuses on reliability and tractability. It builds an interpreter that will generate a sort of bytecode on the fly. This bytecode will then be immediately executed by a VM. The name derives from the fact that the original version was built in 48 hours. The full source code is available on the website of the project.
- Create a useful language and all the supporting tools, a series of articles that start from scratch and teach you everything from parsing to build an editor with autocompletion, while building a compiler targeting the JVM.
- There is a great deal of documentation for Racket that can help you to start using it, even if you don’t know any programming language.
- There is a good amount documentation for Xtext that can help you to start using it, including a couple of 15 minutes tutorials.
- There is a great deal of documentation for JetBrains MPS, including specialized guides such as one for expert language designers. There is a video channel with videos to help you use the software and an introduction on Creating your first language in JetBrains MPS.
- Make a language in one hour: stacker, the tutorial provides a tour of Racket and its workflow.
- Create Your Own Programming Language, an article that shows a simple and hacky way of creating a programming language using JavaCC to create a parser and the Java reflection capabilities. It’s clearly not the proper way of doing it, but it presents all the steps and it’s easy to follow.
- Writing Your Own Toy Compiler Using Flex, Bison and LLVM, it does what it says, using the proper tools (flex, bison, LLVM, etc.) but it’s slightly outdated since it’s from 2009. If you want to understand the general picture and how everything fit together this is still a good place where to start.
- Designing a Programming Language I, “Designing a language and building an interpreter from beginning to end”. It is more than an article and less than a book. It has a good mix of theory and practice and it implements what it calls Duck Programming Language (inspired from Duck-Typing). A Part ii, that explained how to create a compiler, was planned but never finished.
- Writing a compiler in Ruby, bottom up, a 45-parts series of articles on creating a compiler with Ruby. For some reason it starts bottom up, that is to say from the code generation to end up with the parser. This is the reverse of the traditional (and logical) way of doing things. It’s peculiar, but also very down-to-earth.
- Implementing Programming Languages Using C# 4.0, the approach is a simple one and the libraries are quite outdated, but it’s a neat article to read a good introduction on how to build an interpreter in C#.
- How to create your own virtual machine! (PDF), this tutorial explains how to create a virtual machine in C#. It’s surprisingly interesting, although not necessarily with a practical application.
- How to create pragmatic, lightweight languages, the focus here is on making a language that works in practice. It explains how to generate bytecode, target the LLVM, build an editor for your language. Once you read the book you should know everything you need to make a usable, productive language. Incidentally, we have written this book.
- How To Create Your Own Freaking Awesome Programming Language, it’s a 100-page PDF and a screencast that teach how to create a programming language using Ruby or the JVM. If you like the quick-and-dirty approach this book will get you started in little time.
- Writing Compilers and Interpreters: A Software Engineering Approach, 3rd edition, it’s a pragmatic book that still teaches the proper approach to compilers/interpreters. Only that instead of an academic focus, it has an engineering one. This means that it’s full of Java code and there is also UML sprinkled here and there. Both the techniques and the code are slightly outdated, but this is still the best book if you are a software engineer and you need to actually do something that works correctly right now, that is to say in a few months after the proper review process has completed.
- Language Implementation Patterns, this is a book from the author of ANTLR, which is also a computer science professor. So it’s a book with a mix of theory and practice, that guides you from start to finish, from parsing to compilers and interpreters. As the name implies, it focuses on explaining the known working patterns that are used in building this kind of software, more than directly explaining all the theory followed by a practical application. It’s the book to get if you need something that really works right now. It’s even recommended by Guido van Rossum, the designer of Python.
- Build Your Own Lisp, it’s a very peculiar book meant to teach you how to use the C language and how to build you own programming language, using a mini-Lisp as the main example. You can read it for free online or buy it. It’s meant you to teach about C, but you have to be already familiar with programming. There is even a picture of Mike Tyson (because… lisp): it’s all so weird, but fascinating.
- Beautiful Racket: how to make your own programming languages with Racket, it’s a good and continually updated online book on how to use Racket to build a programming language. The book is composed of a series of tutorials and parts of explanation and reference. It’s the kind of book that is technically free, but you should pay for it if you use it.
- Programming Languages: Application and Interpretation, an interesting book that explains how to create a programming language from scratch using Racket. The author is a teacher, but of the good and understandable kind. In fact, there is also a series of recordings of the companion lectures, that sometimes have questionable audio. There is an updated version of the book and of the recordings, but the new book has a different focus, because it want also to teach about programming language. It also doesn’t uses Racket. If you don’t know any programming at all you may want to read the new version, if you are an experienced one you may prefer the old one.
- Implementing Domain-Specific Languages with Xtext and Xtend, 2nd edition, is a great book for people that want to learn with examples and using a test-driven approach. It covers all levels of designing a DSL, from the design of the type system, to parsing and building a compiler.
- Implementing Programming Languages, is an introduction to building compilers and interpreters with the JVM as the main target. There are related materials (presentations, source code, etc.) in a dedicated webpage. It has a good balance of theory and practice, but it’s explicitly meant as a textbook. So don’t expect much reusable code. It’s the typical textbook also in the sense that it can be a great and productive read if you already have the necessary background (or a teacher), otherwise you risk ending up confused.
- Implementing functional languages: a tutorial, a free book that explains how to create a simple functional programming language from the parsing to the interpreter and compiler. On the other hand: “this book gives a practical approach to understanding implementations of non-strict functional languages using lazy graph reduction”. Also, expect a lot of math.
- DSL Engineering, a great book that explains the theory and practice of building DSLs using language workbenches, such as MPS and Xtext. This means that other than traditional design aspects, such as parsing and interpreters, it covers things like how to create an IDE or how to test your DSL. It’s especially useful to software engineers, because it also discusses software engineering and business related aspects of DSLs. That is to say it talks about why a company should build a DSL.
- Lisp in Small Pieces, an interesting book that explain in details how to design and implement a language of the Lisp family. It describes “11 interpreters and 2 compilers” and many advanced implementation details such as the optimization of the compiler. It’s obviously most useful to people interested in creating a Lisp-related language, but it can be an interesting reading for everybody.
Here you have the most complete collection of high-quality resources on creating programming languages. You have just to decide what you are going to read first.
At this point we have two advices for you:
- Get started. It does not matter how many amazing resources we will send you, if you do not take the time to practice, trying and learning from your mistake you will never create a programming language
- If you are interested in building programming languages you should subscribe to our newsletter. You will receive updates on new articles, more resources, ideas, advices and ultimately become part of a community that share your interests on building languages
You should have all you need to get started. If you have questions, advices or ideas to share feel free to write at [email protected]. We read and answer every email.
Our thanks to Krishna and the people on Hacker News for a few good suggestions. | <urn:uuid:da6248ac-1356-4db8-9f6d-4e94e0bf4471> | CC-MAIN-2022-33 | https://tomassetti.me/resources-create-programming-languages/?3 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00697.warc.gz | en | 0.924311 | 5,522 | 3.28125 | 3 |
Arguments over the definition of obstructive sleep apnoea/hypopnoea syndrome (OSAHS) have still not been satisfactorily resolved. As a result, robust estimates of the prevalence of OSAHS are not possible. New approaches are needed to identify those who have “CPAP responsive” disease, enabling more accurate estimates to be made of the prevalence of the sleep apnoea syndrome in the community.
- obstructive sleep apnoea/hypopnoea syndrome
Statistics from Altmetric.com
If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways.
The treatment of obstructive sleep apnoea/hypopnoea syndrome (OSAHS) has moved from relative certainty to a time of uncertainty over what really constitutes significant symptomatic disease. During the early years of OSAHS recognition, apnoeic events leading to abnormal sleep and consequential sleepiness seemed an easy scenario to understand. Furthermore, treating the apnoeas led to spectacular improvements in symptoms and there seemed little to argue about. Little by little these simple concepts have been whittled away so that we are no longer sure of how to relate the spectrum of upper airway narrowing at night to the impairment of health and thus treatment decisions.
Problems first began to arise with the definition of sleep apnoeic events. In the early days the episodes of complete upper airway collapse with apnoea and subsequent arousal were easy to see and understand. Simple oronasal thermistors and chest/abdomen strain gauges identified hundreds of obstructive events,1 and conventional epoch based sleep staging showed impairment of general sleep architecture with less slow wave sleep and more sleep stage changes.1 Studies on unmatched (mainly young) normal subjects established an apparent narrow band of normality for numbers of apnoeas per hour of sleep, and the original definition of sleep apnoea (>5/h of >10 second apnoeas) rapidly became internationally accepted.2
Sleep induced upper airway narrowing
Unfortunately much has happened since then to expose how oversimplified this was. Upper airway narrowing with sleep onset is a normal phenomenon,3 with various factors conspiring to accentuate it (such as obesity, craniofacial abnormalities, tonsillar enlargement, etc). These accentuating phenomena are not “all or none” in nature, and thus upper airway narrowing during sleep is a continuously variable phenomenon within a population and, indeed, to some extent across nights within an individual. The points along this continuum at which inspiratory resistance rises, snoring develops, hypoventilation ensues, or full apnoeas occur are fairly arbitrary but relatively easily identifiable. There is no obvious reason to expect onset of symptoms to be linked with the onset of these arbitrary points along the continuum. Thus, although there has been a replacement of apnoeas with ever more sensitive indices of disturbed breathing, these have not led to obviously better correlations with the patients’ presenting symptoms and their management.
It was recognised early on that full obstructive apnoeas were not necessary to provoke arousals, and that obstructive hypopnoeas could do the same.4 These were harder to define than apnoeas as the level of hypoventilation required to define an event was quite arbitrary—usually 50% reduced—and could vary considerably with the transducers and actual analysis algorithms used in the sleep study.5,6 The addition of attendant hypoxic dipping was advocated to improve consistency, but again the threshold to define a dip was arbitrary and 2%, 3%, or 4% have been used with very different event rates resulting.7
With the appreciation that the dominant symptom of OSAHS—sleepiness—was due to sleep fragmentation, attention turned to defining respiratory related sleep fragmentation in the hope that this would better represent the disease activity. The finding that increases in upper airway resistance and resultant rises in inspiratory effort can lead to recurrent microarousals without evidence of apnoeas, hypopnoeas, or even hypoxia8 led to the concept of respiratory related arousals (RERAs). This approach allows a variety of different respiratory events to be counted if they appear to lead to arousal. The so-called “upper airway resistance syndrome” is also included where increasing inspiratory effort alone (measured either with oesophageal manometry or by detecting inspiratory flow limitation) is an event as long as it seems to lead to an arousal. There is argument over the existence of such subtle respiratory abnormalities and whether the use of thermistors to record airflow led to under recognition in the first place.9,10 This is likely to be part of the explanation, but there is no doubt that increased inspiratory effort alone can provoke an arousal, although the clinical importance of such events is not clear.11 Hosselet et al12 have investigated the best predictor of “OSAH syndrome” in 37 sleep clinic patients with OSAHS, some with no snoring or sleepiness and some with snoring and sleepiness. They found that counting apnoeas, hypopnoeas, and flow limitation events together best predicted sleepiness in a subsequent group of 103 patients, but only with a specificity and sensitivity of 60% and 71%, respectively.
Much effort has also been spent trying to define the minimum degree of sleep disturbance that might lead to daytime symptoms, and microarousals were originally defined as detectable lightening of the EEG for 3 seconds or more.13 Reducing the length of EEG changes required to define a microarousal to 1.5 seconds (with EMG increases however small) was only minimally better at predicting daytime performance (simple reaction time).14 On the other hand, the Edinburgh group have produced some evidence in normal subjects that one night of sleep fragmentation, insufficient to produce EEG changes but enough to cause transient rises in blood pressure (so called “autonomic arousals”), can have small effects on daytime function.15
It was hoped that this increasing sophistication in measuring all the events consequent upon upper airway narrowing would improve our ability to correlate with, and predict, daytime symptoms. This, disappointingly, has not really been the case and a new gold standard to replace the apnoea/hypopnoea index (AHI), despite its clear limitations, has not evolved out of the enormous effort by many laboratories to try and measure the critical events. Our ability to correlate sleep study indices with daytime measures of sleepiness rarely rises above an r value of 0.4—that is, less than 20% of symptoms across a sleep clinic population appear explicable on the basis of the sleep study, our primary diagnostic tool.
The cause of our failure to correlate sleep studies to symptoms may also lie in our inability to measure the symptoms very well. Patients tell us they are sleepy, but sleepiness can be contributed to by many other problems from depression to the arrival of a new baby. There are many objective tests of sleepiness and vigilance that it was hoped would improve our ability to relate symptoms to the sleep study. These, too, have been disappointing with objective tests—for example, the multiple sleep latency test (MSLT), multiple wakefulness test, or driving simulators—appearing to correlate no better with sleep study indices than subjective scores such as the Epworth Sleepiness Scale (ESS).16–18 Some of this poor correlation between sleep study indices and daytime function tests may be due not only to their poor measurement, but also to night to night variation in OSAHS severity and day to day variation in performance producing extra “noise” in the data.
Finally, inter-individual sensitivity is probably also an important factor with some individuals coping well with a degree of sleep fragmentation that renders another dangerously sleepy.19,20 In addition, epidemiological surveys show a considerable spread of sleepiness (as measured with the ESS) within a population21–23 and, perhaps, those already at the sleepy end of the spectrum are more likely to be troubled by additional sleep fragmentation
To make matters worse, there is now increasing evidence that sleep fragmentation and sleepiness are not the only adverse consequences of OSAHS. For example, OSAHS produces rises in blood pressure,24 both at night and during the day, and treatment of the OSAHS reduces it.25,26 Much epidemiological work also suggests that there might be further cardiovascular consequences27,28 although definitive interventional trials are unlikely to be possible, leaving a significant doubt as to the part played by confounders in cross sectional or case control studies. Therefore, in defining the disease and identifying patients for treatment, should hypertension as well as symptoms be included, as has been suggested in recent guidelines, despite the limitations of the data?29 Unfortunately the correlation between sleep study indices and blood pressure is even worse than with sleepiness.30
In addition, there may be other effects of sleep apnoea—for example, on catecholamine levels,31 insulin resistance,32 leptin levels,33 clotting factors,34 and several other potential cardiovascular risk factors. However, the evidence for these remains at a circumstantial level with virtually no controlled intervention studies and more detailed studies sometimes reveal alternative explanations.35 Finally, the part played by sleep induced upper airway narrowing and flow limitation and a consequential small rise in arterial carbon dioxide tension (paco2) on the blood pressure in pre-eclampsia has been suggested by Connolly et al,36 further expanding the “syndrome” of obstructive sleep apnoea.
There are now several large datasets correlating adverse outcomes with the AHI, such as the Wisconsin Cohort,28,37,38 the Sleep Heart Health Study,27,39 and the Pennsylvania study.40 For historical reasons the AHI has been used as the predictor and perhaps a measure of hypoxia, such as time below a certain threshold. These studies have perpetuated the AHI as the measure of OSAHS, yet they do have the ability to explore the predictive value of other measures such as Sao2 dipping, microarousals, movement arousals, etc. Such detailed analysis would help the field enormously—either to establish that there is a clear “best” predictor or that simple alternative measures work just as well. This would allow the wider use of simpler and cheaper sleep studies to the considerable benefit of patients.
The failure of the specialty to harden up on its disease definitions, perhaps due to a level of intellectual rigour not always present in other areas and disorders, has led to significant problems with those who purchase health care.41 It is unfortunate that arguments over precise definitions and outcomes have obscured the simple observation that very large numbers of patients with moderate to severe sleep apnoea have derived enormous benefit from the definitive treatment—nasal CPAP—an effect now clearly proven in robust placebo controlled trials25,26 and meta-analyses.42
Given this depressing lack of correlation across populations between the supposed cause of OSAHS and its supposed symptoms, can we define accurately what sleep apnoea syndrome is, particularly when making clinical decisions? The answer is probably not, because we do not know how to pick out those individuals who have relevant symptoms and will respond to nasal CPAP and want to go on using it. This is arguably the end point of interest—that is, what is “CPAP responsive disease” since CPAP reverses the primary abnormality? This is rather similar to the state that interstitial lung disease was in a few years ago, when ultimately clinicians were forced to do a steroid trial regardless of the histological, lavage, or CT appearances as they were unhelpful in predicting therapeutic response or prognosis.
A recent study by Bennett et al17 looked at predictors of CPAP response in a group of sleep clinic referrals with a wide range of OSAHS severity. When improvements in sleepiness (subjective or objective) were considered, simple sleep study derivatives such as >4% Sao2 dip rate and a body movement index proved the most predictive. Thus, in a sleep clinic population with symptoms compatible with OSAHS (snoring and sleepiness), the oxygen desaturation index was the best and easiest means of defining the disease. Even so, the correlation was no better than 0.6, showing that less than 40% of the response to CPAP could be predicted from the sleep study. This has led to the suggestion that the sleep study is unhelpful and that a trial of CPAP is the only way to be sure of finding treatable disease, leading to the expression (attributed to Phil Westbrook) “if in doubt, blow up the snout”. With modern auto-titrating machines such an approach is seductive. However, this would have several drawbacks. First, as shown in controlled trials,25,26 there is an enormous placebo effect on subjective end points such as the ESS and SF36 score but not on objective tests of sleepiness such as the MWT. This implies that the proper assessment of a CPAP trial would need objective measures of sleepiness, greatly increasing the cost. Alternatively, one could work to try and reduce the placebo effect by explaining the possibility of little or no response to the patient, but perhaps running the risk of a real responder giving up in the early difficult days assuming that he/she was not going to benefit. Secondly, many patients present with compatible symptoms without anything remotely resembling OSAHS on a sleep study which would mean many unnecessary CPAP trials. In our experience such individuals have degrees of depression or poor sleep hygiene and also happen to snore. Furthermore, obesity alone is associated with sleepiness in the absence of OSAHS.43 Some form of sleep study to raise the pre-test probability of a successful CPAP trial therefore seems necessary.
This has led to many sleep clinics adopting a highly pragmatic approach to the management of sleep apnoea—for example, using the sleep study merely as an identifier of some abnormality from heavy snoring through to frank OSAHS that might explain the patient’s symptoms. The patient’s symptoms are the more important part of this equation and there is good evidence that sleepiness and its resolution determines the success of CPAP more than the sleep study.44–47 This means that counting events and setting thresholds becomes unhelpful: a severely symptomatic patient with only heavy snoring and increased arousals might warrant a trial of nasal CPAP, whereas an individual with few or no symptoms would need to demonstrate considerable amounts of OSAHS and hypoxic dipping to justify persuading him to have a trial of CPAP. A wide variety of sleep study technologies can be used to assess sleep apnoea to this level of precision.48
This is clearly a relatively unsatisfactory state of affairs, but there really is no evidence that greater precision in sleep studies leads to a greater precision in disease definition and management. Sleep specialists should not be embarrassed to function in this very clinical way. There are many areas of clinical medicine where the synthesis of experience and less than perfect tests determine treatment. Thus, for pragmatic reasons, the only current valid definition of OSA syndrome is “sleep induced upper airway narrowing leading to symptomatic sleep disturbance”. Even this does not allow for the evolving area of cardiovascular complications perhaps being a reason to treat OSAHS, but as yet there is no evidence that treating OSAHS with CPAP reduces adverse cardiovascular outcomes better then conventional management of hypertension and other relevant risk factors which are often best addressed through simple lifestyle measures such as weight loss and dietary changes.
PREVALENCE OF OSAHS
It is clear from the above that the definition of relevant disease is not easy, hence making robust estimates of OSAHS prevalence is not possible. One can take the simple approach and base the prevalence estimates on sleep study indices alone. This has been done by many centres using very different technologies ranging from full polysomnography49 to simple oximetry.50 The results of many these studies have been reviewed in recent years.51–53 Essentially, the prevalences vary depending on the definition of “events” and the population under study. If a tough definition is used—essentially identifying those individuals with moderate to severe disease who have clear sleepiness that would lead to a CPAP prescription if had they presented to a sleep clinic—then the prevalence is probably about 0.5% in a UK population of middle aged men (mean age 48.2 years) with a mean body mass index (BMI) of 24.9 and perhaps 1.5% in a similar population (mean age 52 years) but with a mean BMI of 27.1.23,50. If an all inclusive definition is used based entirely on polysomnographically defined apnoeas and hypopnoeas >5/hour and no symptoms, then the prevalence rises to 24% in a US male population with mean BMI of about 30.49 In this latter study a more realistic prevalence estimate was attempted by defining “OSAH syndrome” as the combination of >5/hour apnoeas/hypopnoeas in conjunction with significant sleepiness (found in 10% of men and 12% of women). However, the overlap between these two states was actually no more common than would be expected by chance,52 making the point that irregular breathing at night and sleepiness are both common and not necessarily related. The prevalence based on this combined definition was 4% in men and 2% in women.
More recent epidemiological studies54–56 published since the last reviews in this journal have provided similar prevalence estimates which have not really advanced the situation much further and are unlikely to do so until we can better define the condition. Our recent attempt in this area tried to correlate sleepiness, using the validated sleepiness score ESS, with more sensitive measures of sleep fragmentation based on autonomic markers of arousal and measures of inspiratory effort overnight.23 This also failed to improve the identification of a significant link between sleepiness and obstructed breathing, although the latter was weakly correlated with change in blood pressure overnight. Several other epidemiological studies have been performed on non-white groups57–62 and have often found higher prevalences in Far Eastern and African populations, which are only partly explained by simple measures of body habitus but perhaps more by differences in craniofacial shape.63
Other more recent large studies have also failed to find a strong link between sleep apnoea activity and daytime measures of sleepiness and quality of life.22 The large Sleep Heart Health Study (n = 5777) looked at sleepiness (ESS) across four grades of AHI severity and found that the mean value rose from 7.2 in those with an AHI of <5 to 9.3 in those with an AHI of ⩾30, only a 2.1 point rise in a scale ranging from 0 to 24. They also showed an independent effect of snoring, having allowed for the AHI, again demonstrating the limitations of a one night AHI measurement in defining symptomatic disease.64 These results emphasise that “sleep study OSAH”, as found in epidemiological studies, is not the same phenomenon as “sleep clinic OSAH” which presents usually because of significant symptoms. However, the Wisconsin Sleep Cohort Study did identify a correlation between sleep apnoea and multiple motor vehicle accidents, although correlation does not prove causation and the exact interpretation of these data has been questioned.65 In this study an AHI of >15 versus no sleep disordered breathing or snoring increased the odds of having had multiple motor vehicle accidents by 7.3 (mainly in men) where the prevalence of such accident histories was 2.6% overall. Other epidemiological studies have also found poor or no correlation between AHI and symptoms or sleepiness.66
Other interesting epidemiological data have come from studying the prevalence of sleep apnoea in women aged 20–100 and the effects of the menopause. In a study from Pennsylvania and Madrid67 the prevalence of OSAHS was estimated from a two stage process. Telephone interviews were used to assess pre-test probability based on conventional predictors that were shown to predict the prevalence of OSAHS to some degree (snoring, daytime sleepiness, obesity, hypertension, and menopause). The highest scoring groups were then relatively oversampled to provide a total of 1000 subjects who agreed to undergo sleep laboratory polysomnography studies. The definition of hypopnoea required a 4% fall in Sao2 and an AHI of ⩾10/h was their arbitrary cut off for defining OSAHS and symptoms were not required. These results were also compared with previous data from their laboratory on 741 men. Men had a prevalence of 3.9% and women 1.2% (ratio 3.3:1). However, in premenopausal women (and those on HRT) the rate was 0.6%, rising to 2.7% in postmenopausal women not on HRT. As would be predicted from all other studies, obesity also had a considerable effect in both sexes. These data were supported by similar effects on the prevalence of just snoring without OSAHS. Another interesting finding was that all the premenopausal women and those on HRT who had an AHI of ⩾15 had a BMI over 32. In contrast, in postmenopausal women without HRT who had an AHI of ⩾15 the prevalence of a BMI over 32 was less than 50% (similar to the men). This generates the hypothesis that the female sex hormones are perhaps protective to some extent against non-BMI related risk factors for OSAHS. Earlier studies had shown more severe OSAHS in postmenopausal women68 and reductions in indices of sleep disordered breathing from HRT (oestrogen and progesterone) in healthy postmenopausal women.69 Conversely, testosterone may provoke OSAHS, perhaps by effects on the upper airway.70
In conclusion, it is likely that symptomatic sleep apnoea in men worthy of CPAP treatment has a prevalence of 1–2%, depending on prevailing obesity, with the prevalence in women being perhaps one third of this. Supporting this at an anecdotal level, in a local Oxford general practice of 4500 patients (1840 aged 35–75 years) where awareness of OSAHS is high, there are seven patients (five men and two women) on CPAP with three others who have tried it but decided the hassles outweighed the advantages. This gives a prevalence of 0.6% men and 0.3% women currently on CPAP long term (unpublished observations). Much more impressively, Thoranin Gislason who runs the OSAHS service for the whole of Iceland with a population of about 250 000 has made a diagnosis of OSAHS in 2350 individuals between 1987 and 1999, of whom 886 were put on CPAP. There are about 51 000 Icelandic men aged between 35 and 65, of whom about 1.3% are on CPAP (personal communication).
EVOLUTION OF OSAHS
Most data on the evolution of OSAHS come from cross sectional surveys which assume that age related differences are due to the ageing process itself. Few studies have looked at individuals over a period of time and looked for evidence of progression and relevant risk factors for this. Cross sectional studies on prevalence do show effects of age, independent of the unfortunate propensity for a rising BMI with age. Some studies show an approximate doubling of AHI every 10 years or so,54,71 although some have found a smaller rise with age.49 If symptoms are included in the definition of OSAHS, then the prevalence seems to fall above the age of 60 or so (fig 1). This is the experience of most clinics looking after patients with OSAHS where the peak age of presentation is about 50 and the prevalence falls off quite steeply above this age. It is not clear if this is because older patients simply do not complain of their symptoms or because it genuinely does not have such adverse consequences as in younger patients. Bixler et al54 have also shown that the nature of OSAHS in the elderly changes, with less severe falls in Sao2 with each apnoea than in younger patients, particularly in those with the highest AHI values, which were not explained by differences in BMI, for example.
The Wisconsin cohort study has looked at the same population on more than one occasion 4 years apart. A 10% weight gain predicted a 32% increase in AHI, whereas a 10% loss in weight predicted a 26% decrease in AHI.72 This is similar to interventional data in patients with OSAHS where weight loss has considerable effects on OSAHS severity.73,74 Clinical experience suggests that, before presentation, many patients with OSAHS have experienced a considerable and relatively sudden weight gain (fig 2). It is not clear whether this was the precipitant of their OSAHS or resulted from it. Lindberg et al75 also showed in a questionnaire based epidemiology study in 2668 men that over 10 years snoring increased from 15% to 20% and that weight gain was an important predictor of this increased prevalence. In a subsample of this study, 38 subjects with symptoms of OSAHS who had undergone polysomnography 10 years previously were re-contacted. Of the 29 who had not received treatment originally, only four had an AHI of ⩾5 whereas 10 years later there were 13 (p<0.01), but there seemed to be no predictors of this decline.76 In contrast, in a study from the Netherlands in which measurements were made 8 years apart, there was little change in thermistor defined AHI in a subgroup considered at higher risk of OSAHS, with some deteriorating and some improving.77
Pendlebury et al78 restudied a group of sleep clinic attendees about 1 year after the initial presentation to see if their OSAHS had worsened or not. Unfortunately, this study group consisted only of those not put on CPAP because of less severe disease in the initial sleep study. Given that the mean AHI of the whole clinic population would have been higher at presentation (including those with more severe disease who did go onto CPAP), this apparent deterioration is probably an example of regression to the mean (of their population). Sforza et al79 restudied after at least 5 years 32 patients with OSAHS who had refused treatment following diagnosis. On average there was no change in AHI, BMI, or indices of oxygenation, and most of the individual changes were explained by regression to the mean of the group. There was no correlation between change in BMI and change in AHI. In addition, there were no changes in blood pressure or objective sleepiness using the MSLT. There is thus conflicting evidence as to whether OSAHS inevitably worsens in the absence of weight gain, although it probably does if weight does rise. Once again, different populations and clinical patients versus epidemiological studies may explain some of the conflicts in the data.
In conclusion, recent work has added little to the reviews of epidemiological studies in this journal written some seven years ago.51,80 It is more widely recognised that AHI poorly defines the syndrome resulting from upper airway narrowing during sleep, and this has made life much more difficult for those seeking to estimate the so far unrecognised health burden of OSAHS. It is hoped that new approaches will begin to identify those who have “CPAP responsive” disease or significant cardiovascular risk. We will then be in a better position to make true estimates of sleep apnoea syndrome in the community. | <urn:uuid:80cf6c3a-d813-406c-8934-e352ba472fcb> | CC-MAIN-2022-33 | https://thorax.bmj.com/content/59/1/73?ijkey=9979f4c489b8678b07274383afc8fbee4d75d30c&keytype2=tf_ipsecsha | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00294.warc.gz | en | 0.958671 | 5,806 | 2.53125 | 3 |
Self Portrait of Benjamin West, c. 1763
October 10, 1738|
Springfield, Province of Pennsylvania
|Died||March 11, 1820
London, United Kingdom
|Known for||Historical painting|
King George III
Benjamin West PRA (October 10, 1738 – March 11, 1820) was an American artist, who painted famous historical scenes such as The Death of Nelson, The Death of General Wolfe, and Benjamin Franklin Drawing Electricity from the Sky.
Entirely self-taught, West soon gained valuable patronage, and he toured Europe, eventually settling in London. He impressed George III and was largely responsible for the launch of the Royal Academy, of which he became the second president (after Sir Joshua Reynolds). He was appointed historical painter to the court and Surveyor of the King's Pictures.
West also painted religious subjects, as in his huge work The Preservation of St Paul after a Shipwreck at Malta, at the Chapel of St Peter and St Paul in Greenwich, and Christ Healing the Sick, presented to the National Gallery.
|Introducing Benjamin West, Royal Academy of Art|
|Lecture 7. Benjamin West's Agrippina Landing at Brundisium with the Ashes of Germanicus, 57:08, Yale University|
West was born in Springfield, Pennsylvania, in a house that is now in the borough of Swarthmore on the campus of Swarthmore College, as the tenth child of an innkeeper and his wife. The family later moved to Newtown Square, Pennsylvania, where his father was the proprietor of the Square Tavern, still standing in that town. West told the novelist John Galt, with whom, late in his life, he collaborated on a memoir, The Life and Studies of Benjamin West (1816, 1820) that, when he was a child, Native Americans showed him how to make paint by mixing some clay from the river bank with bear grease in a pot. Benjamin West was an autodidact; while excelling at the arts, "he had little [formal] education and, even when president of the Royal Academy, could scarcely spell". One day his mother left him alone with his little sister Sally. Benjamin discovered some bottles of ink and began to paint Sally's portrait. When his mother came home, she noticed the painting, picked it up and said, “Why, it’s Sally!” and kissed him. Later, he noted, "My mother's kiss made me a painter."
From 1746 to 1759, West worked in Pennsylvania, mostly painting portraits. While West was in Lancaster in 1756, his patron, a gunsmith named William Henry, encouraged him to paint a Death of Socrates based on an engraving in Charles Rollin's Ancient History. His resulting composition, which significantly differs from the source, has been called "the most ambitious and interesting painting produced in colonial America". Dr William Smith, then the provost of the College of Philadelphia, saw the painting in Henry's house and decided to become West's patron, offering him education and, more importantly, connections with wealthy and politically connected Pennsylvanians. During this time West met John Wollaston, a famous painter who had immigrated from London. West learned Wollaston's techniques for painting the shimmer of silk and satin, and also adopted some of "his mannerisms, the most prominent of which was to give all his subjects large almond-shaped eyes, which clients thought very chic".
West was a close friend of Benjamin Franklin, whose portrait he painted. Franklin was the godfather of West's second son, Benjamin.
Italian "Grand Tour"
Sponsored by Smith and William Allen, then reputed to be the wealthiest man in Philadelphia, West traveled to Italy in 1760 in the company of the Scot William Patoun, a painter who later became an art collector. In common with many artists, architects, and lovers of the fine arts at that time he conducted a Grand Tour. West expanded his repertoire by copying works of Italian painters such as Titian and Raphael direct from the originals. In Rome he met a number of international neo-classical artists including German-born Anton Rafael Mengs, Scottish Gavin Hamilton, and Austrian Angelica Kauffman.
In August 1763, West arrived in England, on what he initially intended as a visit on his way back to America. In fact, he never returned to America. He stayed for a month at Bath with William Allen, who was also in the country, and visited his half-brother Thomas West at Reading at the urging of his father. In London he was introduced to Richard Wilson and his student Joshua Reynolds. He moved into a house in Bedford Street, Covent Garden. The first picture he painted in England Angelica and Medora, along with a portrait of General Monckton, and his Cymon and Iphigenia, painted in Rome, were shown at the exhibition in Spring Gardens in 1764.
Dr Markham, then Headmaster of Westminster School, introduced West to Samuel Johnson, Edmund Burke, Thomas Newton, Bishop of Bristol, James Johnson, Bishop of Worcester, and Robert Hay Drummond, Archbishop of York. All three prelates commissioned work from him. In 1766 West proposed a scheme to decorate St Paul's Cathedral with paintings. It was rejected by the Bishop of London, but his idea of painting an altarpiece for St Stephen Walbrook was accepted. At around this time he also received acclaim for his classical subjects, such as Orestes and Pylades and The Continence of Scipio.
Benjamin West was known in England as the "American Raphael". His Raphaelesque painting of Archangel Michael Binding the Devil is in the collection of Trinity College, Cambridge. He said that "Art is the representation of human beauty, ideally perfect in design, graceful and noble in attitude."
Drummond tried to raise subscriptions to fund an annuity for West, so that he could give up portraiture and devote himself entirely to more ambitious compositions. Having failed in this, he tried—with greater success—to convince King George III to patronise West. West was soon on good terms with the king, and the two men conducted long discussions on the state of art in England, including the idea of the establishment of a Royal Academy. The academy came into being in 1768, with West one of the primary leaders of an opposition group formed out of the existing Society of Artists of Great Britain. In the same year, he was elected to membership in the American Philosophical Society. Joshua Reynolds was its first president. In a story related by Henry Angelo I (1756–1835) in his book of reminiscences, the actor David Garrick, who was a friend of Angelo's father, the Italian sword master Domenico Angelo, memorably sketched for the teenaged Henry the following exchange: one day the painter Francesco Zuccarelli, on one of his visits to Domenico, got into a dispute with his fellow royal academician Johan Zoffany about the merit of West's 1769 painting The Departure of Regulus, his first commission for the king. Zuccarelli exclaimed, "Here is a painter who promises to rival Nicolas Poussin", while Zoffany tauntingly replied, "A figo for Poussin, West has already beaten him out of the field."
In 1772, King George appointed him historical painter to the court at an annual fee of £1,000. He painted a series of eight large canvases showing exfrom the life of Edward III for St George's Hall at Windsor Castle, and proposed a cycle of 36 works on the theme of "the progress of revealed religion" for a chapel at the castle, of which 28 were eventually executed. He also painted nine portraits of members of the royal family, including two of the king himself. He was Surveyor of the King's Pictures from 1791 until his death.
The Death of General Wolfe
West painted his most famous, and possibly most influential painting, The Death of General Wolfe, in 1770 and it exhibited at the Royal Academy in 1771. The painting became one of the most frequently reproduced images of the period. It returned to the French and Indian War setting of his General Johnson Saving a Wounded French Officer from the Tomahawk of a North American Indian of 1768. When the American Revolution broke out in 1776 he remained ambivalent, and neither spoke out for or against the Revolutionary War in his land of birth.
West became known for his large scale history paintings, which use expressive figures, colours and compositional schemes to help the spectator to identify with the scene represented. West called this "epic representation". His 1778 work The Battle of the Boyne portrayed William of Orange's victory at the Battle of the Boyne in 1690, and strongly influenced subsequent images of William. In 1806 he produced The Death of Nelson, to commemorate Horatio Nelson's death at the Battle of Trafalgar.
Later religious painting
St Paul's Church, in the Jewellery Quarter, Birmingham, has an important enamelled stained glass east window made in 1791 by Francis Eginton, modelled on an altarpiece painted c. 1786 by West, now in the Dallas Museum of Art. It shows the Conversion of Paul. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1791.
West is also well known for his huge work in the Chapel of St Peter and St Paul which now forms part of the Old Royal Naval College in Greenwich, London. His work, The Preservation of St Paul after a Shipwreck at Malta, measures 25 ft by 14 ft and illustrates the Acts of the Apostles: 27 & 28. West also provided the designs for the other paintings executed by Biaggio Rebecca in the chapel.
Following a loss of royal patronage at the beginning of the 19th century, West began a series of large-scale religious works. The first, Christ Healing the Sick was originally intended as a gift to Pennsylvania Hospital in Philadelphia; instead he sold it to the British Institution for £3,000, which in turn presented it to the National Gallery. West then made a copy to send to Philadelphia. The success of the picture led him to paint a series of even larger works, including his Death on the Pale Horse, exhibited in 1817.
Though initially snubbed by Sir Joshua Reynolds, founding President of the Royal Academy, and by some other Academicians who felt he was over-ambitious, West was elected President of the Royal Academy on the death of Reynolds in 1792. He resigned in 1805, to be replaced by a fierce rival, architect James Wyatt. However West was again elected president the following year, and served until his death.
Many American artists studied under him in London, including Ralph Earl and later his son, Ralph Eleaser Whiteside Earl, Samuel Morse, Robert Fulton, Charles Willson Peale, Rembrandt Peale, Matthew Pratt, Gilbert Stuart, John Trumbull, Washington Allston, Thomas Sully, John Green, and Abraham Delanoy.
West died at his house in Newman Street, London, on March 11, 1820, and was buried in St Paul's Cathedral.He had been offered a knighthood by the British Crown, but declined it, believing that he should instead be made a peer.
Robert Monckton, 1762
Two Officers and a Groom in a Landscape, 1777, Princeton University Art Museum
Dr Richard Price, DD, FRS - Benjamin West.jpg
Welsh moral philosopher Richard Price, 1784
King Lear and Cordelia, 1793
Cupid and Psyche, 1808
John Eardley Wilmot, 1812
Benjamin West, The Battle of La Hogue, c. 1778, NGA 45885.jpg
The Battle of La Hogue, c. 1778, National Gallery of Art
- John Sedley, view
- Portrait of a Gentleman, view
- Presentation of the Queen of Sheba at the Court of King Solomon, view
- The Envoys Returning from the Promised Land, view
- "Introducing Benjamin West". Royal Academy of Art. Retrieved February 19, 2013.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Benjamin West Explore Pennsylvania
- Hughes, Robert (1997). American Visions: The Epic History of Art in America. Alfred A. Knopf. p. 70. ISBN 0-679-42627-2
- p. 176 of African-American Orators: A Bio-critical Sourcebook, by Richard W. Leeman, 1996.
- Allen Staley, "Benjamin West," in Benjamin West: American Painter at the English Court (Baltimore, 1989), 28. For more on this painting, see Scott Paul Gordon, "Martial Art: Benjamin West's Death of Socrates, Colonial Politics, and the Puzzles of Patronage," William and Mary Quarterly 65, 1 (2008): 65–100.
- Hughes (1997), American Visions, p. 71
- Lister, Raymond (1989). British Romantic Painting. Cambridge University Press. ISBN 978-0521356879.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Galt, vol. 2, p. 1
- Galt, vol. 2, p. 2
- Lieutenant-General The Honourable Robert Monckton
- Knight, Charles, ed. (1858). "West, Benjamin". The English Cyclopædia. Biography – Volume VI. London: Bradbury and Evans.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Galt, vol. 2, pp. 6–7
- Galt, vol. 2, p. 9
- Galt, p. 15
- Now in the collections of the Tate Gallery and the Fitzwilliam Museum respectively
- "Trinity College, University of Cambridge". BBC Your Paintings. Archived from the original on May 11, 2014. Retrieved February 12, 2018.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Shinn, Earl (1880). The World's Art from the International Exhibition. A.W. Lovering.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Galt, vol. 2, p. 20
- Galt, vol. 2, pp. 33–34
- Bell, Whitfield J., and Charles Greifenstein, Jr. Patriot-Improvers: Biographical Sketches of Members of the American Philosophical Society. 3 vols. Philadelphia: American Philosophical Society, 1997, 2:193–200.
- Angelo (1828), pp. 360–61.
- Birmingham Museum of Art (2010). Birmingham Museum of Art: Guide to the Collection. London: Giles. p. 104. ISBN 978-1-904832-77-5. Archived from the original on September 10, 2011. Retrieved July 19, 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Black, Jeremy (2007). Culture in Eighteenth-Century England: A Subject for Taste. London: Continuum. p. 36. ISBN 9781852855345.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "Dallas Museum of Art, accession number 1990.232". Collections.dallasmuseumofart.org. Archived from the original on October 25, 2012. Retrieved September 7, 2012.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "St Paul's website – Features of St Paul's Church". Saintpaulbrum.org. Archived from the original on November 15, 2012. Retrieved September 7, 2012.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "Book of Members, 1780–2010: Chapter W" (PDF). American Academy of Arts and Sciences. Retrieved July 28, 2014.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- This first version was transferred to the Tate Gallery where it was destroyed in a flood in 1928.
- "The Joseph Downs Collection". Winterthur Library. Retrieved March 24, 2008.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Richard H. Saunders; Ellen Gross Miles; National Portrait Gallery (Smithsonian Institution) (1987). American colonial portraits, 1700–1776. Published by the Smithsonian Institution Press for the National Portrait Gallery. ISBN 978-0-87474-695-2.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "Memorials of St Paul's Cathedral" Sinclair, W. p. 465: London; Chapman & Hall, Ltd; 1909.
- "Benjamin West PRA (1738 - 1820)", royalacademy.org.uk. Retrieved 31 December 2018.
- Angelo, Henry (1828). Reminiscences of Henry Angelo, with memoirs of his late father and friends, including numerous original anecdotes and curious traits of the most celebrated characters that have flourished during the last eighty years (Vol. 1). London: H. Colburn.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- John Galt: The life and studies of Benjamin West ... prior to his arrival in England; Publisher: Moses Thomas, Philadelphia (1816)
- John Galt: The Life, Studies, and Works of Benjamin West, Esq., President of the Royal Academy of London Publisher: Printed for T. Cadell and W. Davies. (1820)
- John Galt: The progress of genius : or authentic memoirs of the early life of Benjamin West. Abridged for the use of young persons. Publisher: Leonard C. Bowles. Boston (1832)
- John Galt, The Life and Studies of Benjamin West, Esq. (1816).
- Helmut von Erffa and Allen Staley, The Paintings of Benjamin West (New Haven, 1986).
- Ann Uhry Abrams, The Valiant Hero: Benjamin West and Grand-Style History Painting (Washington, 1985).
- James Thomas Flexner, "Benjamin West's American Neo-Classicism," New-York Historical Society Quarterly 36, 1 (1952), 5–41, rept. in America's Old Masters (New York, 1967), 315–40.
- Susan Rather. Benjamin West, John Galt, and the Biography of 1816. The Art Bulletin, Vol. 86, No. 2 (Jun. 2004), pp. 324–45
- Sherman, Frederic Fairchild, American Painters of Yesterday and Today, 1919, Priv. print in New York. Chapter: Benjamin West:https://archive.org/stream/americanpainters00sheriala#page/62/mode/2up
|Wikimedia Commons has media related to:|
|Wikisource has the text of the 1911 Encyclopædia Britannica article West, Benjamin.|
- Dictionary of National Biography. London: Smith, Elder & Co. 1885–1900.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> .
- Script error: The function "Canada" does not exist.
- The Winterthur Library Overview of an archival collection on Benjamin West.
- Royal Academy Collections website Loyd Grossman talking about West's work
- Union List of Artist Names, Getty Vocabularies. ULAN Full Record Display for Benjamin West. Getty Vocabulary Program, Getty Research Institute. Los Angeles, California.
- The Benjamin West Drawings Collection, including 33 of his drawings and sketches, is available for research use at the Historical Society of Pennsylvania.
- Documenting the Gilded Age: New York City Exhibitions at the Turn of the 20th Century. A New York Art Resources Consortium project. Annotations and a pencil sketch of a West painting in an exhibition catalog.
- Paintings by Benjamin West at the Art UK site
|President of the Royal Academy
|President of the Royal Academy
Sir Thomas Lawrence | <urn:uuid:6454c811-de03-43f1-89ee-6379af13aa82> | CC-MAIN-2022-33 | https://infogalactic.com/info/Benjamin_West | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572221.38/warc/CC-MAIN-20220816060335-20220816090335-00097.warc.gz | en | 0.923739 | 4,572 | 2.5625 | 3 |
The family Ochotona and 30 currently recognized species (Hoffman and Smith, 2005). There are more than 30 extinct genera that have been identified as far back as the Eocene, one of which, Prolagus, went extinct in the late 18th century (Dawson, 1969; Ge et al., 2012). Today, Ochotonidae represents approximately 1/3 of lagomorph diversity (Smith, 2008). Their range is primarily in Asia although there are two North American species, American pikas and collared pikas (Smith et al., 1990). They range in weight from 70 to 300 g and are usually less than 285 mm in length (Smith, 2008). There is no known sexual dimorphism (Vaughan et al., 2011). The main differences from leporids are their (i) small size, (ii) small, rounded ears, (iii) concealed tails, (iv) lack of supraorbital processes, and (v) 2, rather than 3, upper molars (Smith, 2008). There are two main ecotypes, one of which is associated with rocky habitats and the other with meadow, steppe, forest, and shrub habitats. Each ecotype is associated with specific life history traits as well as behavior. Most species fall within one of these ecotypes, although there are some species which exhibit intermediate characteristics (Smith, 2008). (Ge, et al., 2012; Hoffman and Smith, 2005; Smith, et al., 1990; Smith, 2008; Vaughan, et al., 2011)comprises the pikas, including one extant genus
Although the historic range of ochotonids included Asia, Europe, northern Africa, and North America, today ochotonids are found only in Asia and the high mountains of western North America. Their center of diversity is China, where 24 species are found (Smith, 2008). In Asia, pikas are found as far west as Iran, south into India and Myanmar, and into northern Russia. The two Nearctic species are found in the central Alaskan Range, the Canadian Rockies, and the Rockies, Sierra Nevadas, and Great Basin in the continental United States (IUCN, 2011). (IUCN, 2011)
Ochotonids are found in two distinct habitats: talus habitat or in meadow, steppe, forest, and shrub habitats. Talus-dwellers inhabit the crevices between rocks on mountain slopes. These species forage in the alpine meadows that abut the rocks or from the vegetation that grows between the rocks. They are found across a wide altitudinal gradient from below 90 to above 6000 m (Nowak and Wilson, 1991). Species that are typically found in talus habitats are alpine pikas, silver pikas, collared pikas, Chinese red pikas, Glover’s pikas, Himalayan pikas, northern pikas, Ili pikas, large-eared pikas, American pikas, Royle’s pikas, and Turkestan red pikas (Smith, 2008). (Nowak and Wilson, 1991; Smith, 2008)
Non-talus dwelling pikas are found in a variety of vegetated habitats where they forage and produce burrows. The meadows they occupy are also typically at high elevation. The meadow-burrowing pikas are all found in Asia and include Gansu pikas, black-lipped pikas, Daurian pikas, Kozlov’s pikas, Ladak pikas, Muli pikas, Nubra pikas, steppe pikas, Moupin pikas, and Thomas’s pikas (Smith, 2008). (Nowak and Wilson, 1991; Smith, 2008)
Some species, including Pallas's pikas and Afghan pikas are known to occur in both habitat types and are referred to as intermediate species (Smith, 2008). Although intermediate in habitat, these species exhibit the life-history traits and behavior of meadow-dwelling pikas. (Nowak and Wilson, 1991; Smith, 2008)
Lagomorpha (Huchon et al., 2002; Meredith et al. 2011). The other is Leporidae (rabbits and hares). Lagomorpha and Rodentia make up the clade Glires (Meng et al., 2003). Glires and Archonta make up the clade Euarchontaglires (Murphy et al. 2001). Ochotonidae was first described in 1897 by Oldfield Thomas. Synonyms include Lagomina Gray, 1825; Lagomyidae Lillijeborg, 1866; and Prolaginae Gureev, 1960 (Hoffman and Smith, 2005). (Hoffman and Smith, 2005; Huchon, et al., 2002; Lanier and Olson, 2009; Lissovsky, et al., 2007; Meng and Wyss, 2001; Meredith, et al., 2011; Murphy, et al., 2001; Niu, et al., 2004; Yu, et al., 2000)is one of two families in the order
The relationships within the family and within the genus Ochotona are less well understood. Recent molecular phylogenies include Yu et al. (2000), Niu et al. (2004), Lissovsky et al. (2007), and Lanier and Olson (2009). Current understanding is that there are three subgenera within Ochotona based on both morphological and molecular evidence (Yu et al., 2000). The relationships between these and the independence of some species is still highly debated (Hoffman and Smith, 2005). (Hoffman and Smith, 2005; Lanier and Olson, 2009; Lissovsky, et al., 2007; Niu, et al., 2004; Yu, et al., 2000)
Ochotonids exhibit little physical variation. They are generally small, ranging in body length from 125 to 300 mm and weighing 70 to 300 g (Nowak and Wilson, 1991; Smith, 2008). Unlike leporids, pikas lack a visible tail and have short rounded ears with large, valvular flaps and openings at the level of the skull (Vaughan et al. 2011). The ears are only weakly movable (Diersing, 1984) and their nostrils can be completely closed (Nowak and Wilson, 1991). They have short limbs with the hind limbs barely longer than the forelimbs (Nowak and Wilson, 1991). They have 5 front digits and 4 hind digits all with curved claws (Vaughan et al., 2011). The soles of the feet are covered by long hair but the distal pads are exposed (Diersing, 1984). They are digitigrade while running but plantigrade during slow movement (Vaughan et al., 2011). Ochotonids have 22 thoracolumbar vertebrae and lack a pubic symphysis (Diersing, 1984). (Diersing, 1984; MacArthur and Wang, 1973; Nowak and Wilson, 1991; Vaughan, et al., 2011)
The skull is generally similar to that of leporids. It is flattened, exhibits fenestration, and is constricted between the orbits (Vaughan et al., 2011). The ochotonid tooth formula is 2/1 0/0 3/2 2/3=26. The first incisors are ever-growing and completely enameled, while the second are small, peg-like, and directly behind the first. The cutting edge of the first incisor is v-shaped (Nowak and Wilson, 1991). They have a long post-incisor diastema and hypsodont, rootless cheek teeth. Occlusion is limited to one side at a time, with associated large masseter and pterygoideus muscles allowing for transverse movement while the cheekteeth have transverse ridges and basins (Vaughan et al., 2011). The zygomatic arch is slender and not vertically expanded. The jugal is long and projects more than halfway from the zygomatic root of the squamosal to the external auditory meatus (Diersing, 1984). Unlike leporids, pikas lack a supraorbital process. Their rostrum is short and narrow and the maxilla has a single large fenestra (Vaughan et al., 2011). The auditory bulla, which is fused with the petrosal, are spongiose and porous. The bony auditory meatus is laterally directed and not strongly tubular (Diersing, 1984). (Diersing, 1984; Nowak and Wilson, 1991; Vaughan, et al., 2011)
Pikas exhibit no sexual dimorphism (Nowak and Wilson, 1991). Males lack a scrotum and both sexes have a cloaca, which opens on a mobile apex supported by a rod of tail vertebrae (Diersing, 1984; Vaughan et al., 2011). Females have between 4 and 6 mammae, with one pair inguinal and one to two pairs pectoral (Nowak and Wilson, 1991). Ochotonid coats consist of long, dense, fine fur and are usually grayish brown, although they vary inter- and intra-specifically depending on habitat. Some ochotonids go through two molts, with darker fur during the summer and grayer pelage in the winter (Diersing, 1984). (Diersing, 1984; Nowak and Wilson, 1991; Vaughan, et al., 2011)
Physiologically, pikas have a high metabolic rate. They also have low thermal conductance and, even at moderately high temperatures, low ability to dissipate heat (MacArthur and Wang, 1973). (MacArthur and Wang, 1973)
Most talus-dwelling pika species are monogamous or polygynous (Gliwicz, Witczuk, and Pagacz, 2005; Smith, 2008). There are some notable exceptions, including documented cases of polygynandry in collared pikas (Zgurski and Hik, 2012). In contrast, meadow-dwelling pikas exhibit monogamous, polygynous, polyandrous, or polygynandrous mating systems, depending on the sex ratio at the beginning of the breeding season (Smith and Dobson, 2004). (Gliwicz, et al., 2005; Smith and Dobson, 2004; Smith, 2008; Zgurski and Hik, 2012)
The talus-dwelling species, such as American pikas, exhibit low annual production of offspring (Smith 1988). Typically, talus-dwelling pikas produce only one successfully weaned litter of 1 to 5 young a year. On average, approximately 2 young per mother are successfully weaned per year (Smith, 2008). Juveniles reach sexual maturity as yearlings (Smith et al., 1990). Some talus-dwelling species exhibit absentee maternal care typical of lagomorphs (Whitworth 1984). The gestation period of American pikas, for example, is 30.5 days (Smith, 1988) and their breeding season lasts between late April and the end of July (Markham and Whicker, 1973). In contrast, meadow-dwelling species have much higher potential reproductive output, but it varies depending on environmental conditions. They can produce litters that are twice as large as those of talus-dwellers up to every three weeks during the reproductive season. The reproductive season of O. curzoniae, a meadow-dwelling species, generally lasts from March to late August but can vary between years and sites (Yang et al., 2007). On average, multiple litters are produced each year and most young are successfully weaned (Smith, 2008). Further increasing their reproductive output, juveniles born early in the breeding season will reach sexual maturity and breed during the summer of their birth (Smith et al., 1990). (Diersing, 1984; Markham and Whicker, 1973; Smith, et al., 1990; Smith, 1988; Smith, 2008; Whitworth, 1984; Yang, et al., 2007)
Some talus-dwelling species exhibit absentee maternal care typical of lagomorphs (Whitworth 1984). Males and females of some meadow-dwelling species participate in affiliative behavior with juveniles as well as mate guarding and defending territories (e.g. Smith and Gao, 1991). Juveniles of meadow-dwelling species also continue to live on the parental territory through at least their first year (Smith, 2008). (Smith and Gao, 1991; Smith, 2008; Whitworth, 1984)
The average mortality of talus-dwelling species is low and many are long lived compared to most small mammals (Smith et al., 1990). American pikas live on average 3 to 4 years but have been known to live up to 7 years (Forsyth et al, 2005). Meadow-dwelling species experience high annual mortality and few individuals live more than two years (Smith, 1988). (Forsyth, et al., 2005; Smith, et al., 1990; Smith, 1988)
North American talus-dwelling pikas occupy and defend territories individually, particularly against members of the same sex. Except for when they come together to mate, these talus-dwelling pikas are relatively asocial (Smith et al., 1990). Dominance does not extend beyond an individual’s territory. Most social interactions are aggressive and chases and fights result from conspecific intrusion, and the theft of vegetation from the haypiles of conspecifics. Talus-dwelling ochotonids use vocalizations and scent-marking to demarcate their territories, which are relatively large and make up about ½ of their home range (Svendsen 1979; Smith, 2008). Territories are usually established near the edge of the talus/vegetation border and vary in size depending on species and the productivity of the adjoining vegetation (Smith, 2008). They are typically between 450 and 525 m^2 (Gliwicz, Witczuk, and Pagacz, 2005).
Some Asian talus-dwelling pika species defend territories as pairs. The pair uses the same main shelter and spend most of their time in the same area. They cooperate in hay-storage and communicate using vocalizations, but are asocial outside of the pair. Primarily the males demarcate the territory and defend it against intruders. These territories are typically larger than those of individual pikas, around 900 m^2 per pair, and these pikas live at much higher densities. (For a more complete discussion see Gliwicz, Witczuk, and Pagacz (2005).)
In contrast, the Asian meadow-dwelling species are considered to exhibit highly social family groups, consisting of adults as well as young of the year in communal burrows (Smith, 2008). These species live at much higher densities (more than 300/ha) than the talus-dwelling species and experience more variation in population density over seasons and between years (Nowak and Wilson, 1991). Meadow-dwelling pika exhibit both affiliative behaviors, such as allogrooming, nose rubbing, and various forms of contact, within family groups, as well as aggressive territorial behaviors toward non-family members. In addition, family members communicate with vocalizations, which can elicit affiliative contact (Smith, 2008). They also defend territories as a family unit and share communal hay piles (Smith et al., 1990). Their territories are also demarcated by scent-marking and vocalizations.
Both ecotypes are poor dispersers and typically do not range far from their natal territory. In talus-dwellers, an individual with control of a territory typically maintains it for life, and upon it’s death will be replaced by a juvenile born in a nearby territory and usually of the same sex (Smith, 1974; Smith, 2008). In meadow-dwellers, juveniles will stay in their home burrow for the first year and then less than half will disperse to nearby territories. Males are more likely to disperse, but even then typically move only a few territories away (Smith, 2008).
Pikas do not hibernate during the winter, but instead stay active in their burrows or rocky crevices. During this time they consume the food caches that they collected during the summer (Smith et al., 1990). Ochotonids are primarily diurnal, but can be active at all times of day as well as throughout the year (Nowak, 1991). They are frequently observed sunning themselves on rocks during warmer months (Diersing, 1984; Nowak, 1991). (Diersing, 1984; Gliwicz, et al., 2005; Nowak and Wilson, 1991; Smith, et al., 1990; Smith, 1974; Smith, 2008; Svedsen, 1979)
Most pika species vocalize both for predator alarms and territory defense (Smith et al., 1990; Nowak, 1991; Trefry and Hik, 2009). They produce a high-pitched 'eek' or 'kie' that is ventriloquial in character (Diersing, 1984). They have also been demonstrated to eavesdrop on the alarm calls of heterospecifics, such as marmots and ground squirrels (Trefry and Hik, 2009). Ochotonids can also communicate danger by drumming on the ground with their hind feet (Diersing, 1984). Meadow-dwelling, burrowing species produce multiple types of vocalizations, many of which are used in socializing with conspecifics (Smith, 2008). Low chattering and mewing noises have also been reported (Diersing, 1984). Both ecotypes also use scent-marking (Smith, 2008). (Diersing, 1984; Nowak and Wilson, 1991; Smith, et al., 1990; Smith, 2008; Trefry and Hik, 2009)
Pikas are generalist herbivores and typically collect caches of vegetation, which they live off of during the winter. They consume leaves and stems of forbs and shrubs as well as seeds and leaves of grasses; sometimes they also consume small amounts of animal matter (Diersing, 1984). Like most leporids, they produce two types of feces: soft caecotroph and hard pellets (Smith, 2008). During the summer, after the breeding season, pikas accumulate large stores of many different plants in their haypiles, which they then store for winter consumption. Their foraging patterns varies throughout the season in accordance with which plants are available, preferred, and/or have the highest nutritional content, selecting for higher caloric, lipid, water, and protein content (Smith and Weston, 1990). The foraging habits of pikas affect plant communities. Pikas alter which plants are collected while foraging as well as how far they go to forage, depending on whether they are being immediately consumed or are being added to a haypile. This variation results in a mosaic of plant community composition (Huntly, Smith and Ivins, 1986). This selective foraging has been demonstrated to stabilize plant community composition and slow the process of succession, as well as reduce the number of seeds in the soil (Huntly, Smith and Ivins, 1986; Khlebnikov and Shtilmark, 1965). (Huntly, et al., 1986; Khlebnikov and Shtilmark, 1965; Smith and Weston, 1990; Smith, 2008)
Pikas serve as an important food source to both birds and mammals in all of the habitats they occupy. Meadow-dwelling pikas, in particular, can be a preferred food or buffer species throughout the year, but are especially important prey in the winter as they are still active while similarly sized rodents hibernate (Smith et al., 1990). During high-density years, burrowing pikas can be the most important food source for Asian steppe predators, sometimes making up more than 80% of a predator’s diet (Sokolov, 1965). In addition to being prey for small to medium-sized carnivores, pikas are also often consumed by larger carnivores, including wolves and brown bears (Smith et al., 1990). (Smith, et al., 1990; Sokolov, 1965)
In addition to the important ecosystem roles that ochotonids serve as consumers and as prey, they also alter their environments through bioturbative ecosystem engineering. The burrowing of meadow-dwelling pikas improves soil quality and reduces erosion (Smith and Foggin, 1999). The accumulation and decomposition of leftover caches and the feces in burrow systems also helps increase the organic content of soil (Smith et al., 1990). In addition to their abiotic benefits, pika burrows are used by other mammals and birds and their caches are often consumed by other herbivores (Smith et al., 1990). The haypiles of talus-dwelling pikas also improve soil quality upon decomposition, thereby facilitating plant colonization of the talus (Smith et al., 1990). (Lai and Smith, 2003; Smith and Foggin, 1999; Smith, et al., 1990)
Traditionally, pikas were a valuable source of fur throughout Asia and in particular the Soviet Union (Smith et al., 1990). Additionally, some traditional herdsmen selectively graze their livestock in the winter on pika meadows where haypiles are exposed above the snow (Loukashkin, 1940). (Loukashkin, 1940; Smith, et al., 1990)
Some ochotonid species are considered pests in Asian countries, where they are believed to compete with livestock for forage, erode soil, and negatively affect agricultural crops such as apple trees and wheat (Smith et al., 1990). It has been demonstrated that pikas can harm agricultural crops (Smith et al., 1990) but no control studies have been conducted that support other claims. Pika foraging has been implicated in accelerating range deterioration but only in areas that were already overgrazed (Shi, 1983; Zhong, Zhou and Sun, 1985). Millions of hectares have been subject to poisoning in an effort to control pika numbers with mixed results, including extermination of non-target species (Smith et al., 1990). (Shi, 1983; Smith, et al., 1990; Zhong, et al., 1985)
Today, four ochotonid species (silver pikas, Hoffmann's pikas, Ili pikas, Kozlov's pikas) are classified as endangered or critically endangered due to habitat loss, poisoning, or climate change (Smith, 2008; IUCN, 2011). Additionally, many subspecies are threatened due to low vagility and its effects on stochastic metapopulation dynamics (Smith, 2008). Not enough is known about many species (10% are still considered data deficient by the IUCN) to truly assess their conservation status. Until the systematics of the family is better understood it will be hard to determine the outlook for many populations. Due to their low tolerance for high temperatures and low vagility, ochotonids are considered especially vulnerable to warming so the need for conservation efforts is expected to increase with climate change (Holtcamp, 2010). (Holtcamp, 2010; IUCN, 2011; Smith, 2008)
Aspen Reese (author), Yale University, Eric Sargis (editor), Yale University, Hayley Lanier (editor), University of Wyoming - Casper, Tanya Dewey (editor), University of Michigan-Ann Arbor.
living in the Nearctic biogeographic province, the northern part of the New World. This includes Greenland, the Canadian Arctic islands, and all of the North American as far south as the highlands of central Mexico.
living in the northern part of the Old World. In otherwords, Europe and Asia and northern Africa.
uses sound to communicate
young are born in a relatively underdeveloped state; they are unable to feed or care for themselves or locomote independently for a period of time after birth/hatching. In birds, naked and helpless after hatching.
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
uses smells or other chemicals to communicate
having markings, coloration, shapes, or other features that cause an animal to be camouflaged in its natural environment; being difficult to see or otherwise detect.
animals that use metabolically generated heat to regulate body temperature independently of ambient temperature. Endothermy is a synapomorphy of the Mammalia, although it may have arisen in a (now extinct) synapsid ancestor; the fossil record does not distinguish these possibilities. Convergent in birds.
parental care is carried out by females
an animal that mainly eats leaves.
forest biomes are dominated by trees, otherwise forest biomes can vary widely in amount of precipitation and seasonality.
An animal that eats mainly plants or parts of plants.
offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes).
a species whose presence or absence strongly affects populations of other species in that area such that the extirpation of the keystone species in an area will result in the ultimate extirpation of many more species in that area (Example: sea otter).
parental care is carried out by males
Having one mate at a time.
having the capacity to move from one place to another.
This terrestrial biome includes summits of high mountains, either without vegetation or covered by low, tundra-like vegetation.
the area in which the animal is naturally found, the region in which it is endemic.
Referring to a mating system in which a female mates with several males during one breeding season (compare polygynous).
the kind of polygamy in which a female pairs with several males, each of which also pairs with several different females.
having more than one female as a mate at one time
communicates by producing scents from special gland(s) and placing them on a surface whether others can smell or taste them
breeding is confined to a particular season
reproduction that includes combining the genetic contribution of two individuals, a male and a female
associates with others of its species; forms social groups.
digs and breaks up soil so air and water can get in
places a food item in a special place to be eaten later. Also called "hoarding"
uses touch to communicate
that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
Living on the ground.
defends an area within the home range, occupied by a single animals or group of animals of the same species and held through overt defense, display, or advertisement
A terrestrial biome. Savannas are grasslands with scattered individual trees that do not form a closed canopy. Extensive savannas are found in parts of subtropical and tropical Africa and South America, and in Australia.
A grassland with scattered trees or scattered clumps of trees, a type of community intermediate between grassland and forest. See also Tropical savanna and grassland biome.
A terrestrial biome found in temperate latitudes (>23.5° N or S latitude). Vegetation is made up mostly of grasses, the height and species diversity of which depend largely on the amount of moisture available. Fire and grazing are important in the long-term maintenance of grasslands.
movements of a hard surface that are produced by animals as signals to others
uses sight to communicate
Diersing, V. 1984. Lagomorphs. Pp. 241-248 in S Anderson, J Jones Jr., eds. Orders and Families of Recent Mammals of the World. New York: John Wiley & Sons.
Forsyth, N., F. Elder, J. Shay, W. Wright. 2005. Lagomorphs (rabbits, pikas and hares) do not use telomere-directed replicative aging in vitro. Mechanisms of Aging and Development, 126: 685-691.
Ge, D., Z. Zhang, L. Xia, Q. Zhang, Y. Ma, Q. Yang. 2012. Did the expansion of C4 plants drive extinction and massive range contraction of micromammals? Inferences from food preference and historical biogeography of pikas. Paleogeography, Paleoclimatology, Paleoecology, 326: 160-171.
Gliwicz, J., J. Witczuk, S. Pagacz. 2005. Spatial behaviour of the rock-dwelling pika (Ochotona hyperborea). Journal of Zoology, 267: 113-120.
Hoffman, R., A. Smith. 2005. Family Ochotonidae. Pp. 185-193 in D Wilson, D Reeder, eds. Mammal Species of the World: A Taxonomic and Geographic Reference. Baltimore: Johns Hopkins University Press.
Holtcamp, W. 2010. Silence of the pikas. Bioscience, 60: 8-12.
Huchon, D., O. Madsen, M. Sibbald, K. Ament, M. Stanhope, F. Catzeflis, W. de Jong, E. Douzery. 2002. Rodent phylogeny and a timescale for the evolution of Glires: Evidence from an extensive taxon sampling using three nuclear genes. Molecular Biology and Evolution, 19: 1053-1065.
Huntly, N., A. Smith, B. Ivins. 1986. Foraging behavior of the pika (Ochotona princeps), with comparisons of grazing versus haying. Journal of Mammalogy, 67: 139-148.
IUCN, 2011. "IUCN Red List of Threatened Species" (On-line). Accessed January 15, 2012 at http://www.iucnredlist.org.
Khlebnikov, A., F. Shtilmark. 1965. [Fauna of Siberian pine forests in Siberia and its use]. Moscow-Leningrad: Nauka.
Lai, C., A. Smith. 2003. Keystone status of plateau pikas (Ochotona curzoniae): effect of control on biodiversity of native birds. Biodiversity and Conservation, 12: 1901-1912.
Lanier, H., L. Olson. 2009. Inferring divergence times within pikas (Ochotona spp.) using mtDNA and relaxed molecular dating techniques. Molecular Phylogenetics and Evolution, 53: 1-12.
Lissovsky, A., N. Ivanova, A. Borisenko. 2007. Molecular phylogenetics and taxonomy of the subgenus Pika (Ochotona, Lagomorpha). Journal of Mammalogy, 88: 1195-1204.
Loukashkin, A. 1940. On the pikas of North Manchuria. Journal of Mammalogy, 21: 402-405.
MacArthur, R., L. Wang. 1973. Physiology of thermoregulation in pika, Ochotona princeps. Canadian Journal of Zoology, 51: 11-16.
Markham, O., F. Whicker. 1973. Seasonal data on reproduction and body weights of pikas (Ochotona princeps). Journal of Mammalogy, 54: 496-498.
Meng, J., A. Wyss. 2001. The Morphology of Tribosphenomys (Rodentiaformes, Mammalia): Phylogenetic Implications for Basal Glires. Journal of Mammalian Evolution, 8: 1-71.
Meredith, R., J. Janeck, J. Gatesy, O. Ryder, C. Fisher, E. Teeling, A. Goodbla, E. Eizirik, T. Simao, T. Stadler, D. Rabosky, R. Honeycutt, J. Flynn, C. Ingram, C. Steiner, T. Williams, T. Robinson, A. Burk-Herrick, M. Westerman, N. Ayoub, M. Springer, W. Murphy. 2011. Impacts of the Cretaceous Terrestrial Revolution and KPg Extinction on Mammal Diversification. Science, 334: 521-524.
Murphy, W., E. Eizirik, W. Johnson, Y. Zhang, O. Ryder, S. O'Brien. 2001. Molecular phylogenetics and the origins of placental mammals. Nature, 409: 614-618.
Niu, Y., F. Wei, M. Li, X. Liu, Z. Feng. 2004. Phylogeny of pikas (Lagomorpha, Ochotona) inferred from mitochondrial cytochrome b sequences. Folia Zoologica, 53: 141-155.
Nowak, R., D. Wilson. 1991. Walker’s Mammals of the World.. Baltimore: Johns Hopkins University Press.
Shi, Y. 1983. [On the influence of rangeland vegetation to the density of plateau pikas (Ochotona cuzoniae)]. Acta Theriologica Sinica, 3: 181-187.
Smith, A. 1988. Patterns of pika (genus Ochotona) life history variation. Pp. 233-256 in M Boyce, ed. Evolution of Life Histories of Mammals: Theory and Pattern. New Haven: Yale University Press.
Smith, A. 1974. The distribution and dispersal of pikas: influences of behavior and climate. Ecology, 55: 1368-1376.
Smith, A. 2008. The world of pikas. Pp. 89-102 in P Alves, N Ferrand, K Hackland, eds. Lagomorph Biology: Evolution, Ecology, and Conservation. Berlin: Springer-Verlag.
Smith, A., F. Dobson. 2004. Social dynamics in the plateau pika. Pp. 1016-1019 in M Bekoff, ed. Encyclopedia of Behavior, Vol. 3, 1 Edition. Westport, CT: Greenwood Publishing Group.
Smith, A., J. Foggin. 1999. The plateau pika (Ochotona curzoniae) is a keystone species for biodiversity on the Tibetan plateau.. Animal Conservation, 2: 235-240.
Smith, A., N. Formozov, R. Hoffmann, C. Zheng, M. Erbajeva. 1990. The pikas. Pp. 14-60 in J Chapman, J Flux, eds. Rabbits, Hares and Pikas: Status Survey and Conservation Action Plan. Gland, Switzerland: International Union for the Conservation of Nature.
Smith, A., W. Gao. 1991. Social relationships of adult Black-Lipped Pikas (Ochotona curzoniae). Journal of Mammalogy, 72: 231-247.
Smith, A., M. Weston. 1990. Ochotona Princeps. Mammalian Species, 352: 1-2.
Sokolov, V. 1965. [Fauna of Siberian pine forests and its use]. Moscow-Leningrad: Nauka.
Svedsen, G. 1979. Territoriality and behavior in a population of pikas (Ochotona princeps). Journal of Mammalogy, 60: 324-330.
Trefry, S., D. Hik. 2009. Eavesdropping on the neighborhood: collard pika (Ochotona collaris) responses to playback calls of conspecifics and heterospecifics. Ethology, 115: 928-938.
Vaughan, T., J. Ryan, N. Czaplewski. 2011. Mammalogy. Sudbury, MA: Jones and Bartlett Publishers.
Whitworth, M. 1984. Maternal care and behavioral development in pikas (Ochotona princeps). Animal Behavior, 32: 743-752.
Yang, S., B. Yin, Y. Cao, Y. Zhang, J. Wang, W. Wei. 2007. [Reproduction and behavior of plateau pikas (Ochotona curzoniae Hodgson) under predation risk: A field experiment]. Polish Journal of Ecology, 55: 127-138.
Yu, N., C. Zheng, Y. Zhang, W. Li. 2000. Molecular systematics of pikas (genus Ochotona) inferred from mitochondrial DNA sequences. Molecular Phylogenetics and Evolution, 16: 85-95.
Zgurski, J., D. Hik. 2012. Polygynandry and even-sexed dispersal in a population of collared pikas, Ochotona collaris. Animal Behavior, 83: 1075-1082.
Zhong, W., Q. Zhou, C. Sun. 1985. [The basic characteristics of the rodent pests on the pasture in Inner Mongolia and the ecological strategies of controlling]. Acta Theriologica Sinica, 5: 241-249. | <urn:uuid:405b77e7-a780-44f9-9168-3060a5ba34c4> | CC-MAIN-2022-33 | https://www.animaldiversity.org/accounts/Ochotonidae/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00094.warc.gz | en | 0.878749 | 7,995 | 3.65625 | 4 |
Ever heard of union workers and labor unions? You know those endless negotiations between groups of workers and management or government? That is the work of a labor union. This article focuses on the highest-paid union workers in the USA, along with general information on labor unions.
Table of contents
- What Is a Labor Union?
- What are the sections of workers that a union might be organized into?
- Who Constitutes Trade Unions and Workers?
- What Is a Union Job?
- Benefits of Labour Union
- Is A Union Job Worth It?
- Do Union Workers Make Good Money?
- How Much Do Union Workers Make?
- How To Become A Union Worker
- Highest Paid Union Workers
- 4. Doctors
- 5. Lawyers
- 6. Marine Service Technician
- 7. Airline Manager
- 8. Firefighter
- 9. Customer Service Representatives
- 10. Plumbing Technician
- 11. Police Officer
- 12. Truck Driver Trainer
- 13. Nuclear Power Reactor Operator
- 14. Nurse
- 15. Nurse Practitioner
- 16. Construction Managers
- 17. Producer
- 18. Machine Parts Process Lead
- 19. Powerplant Operators
- 20. Film or Television Director
- 21. Electric Project Manager
- FAQs on Highest Paid Union Workers
What Is a Labor Union?
A trade union is otherwise known as a labor union in American English and is often simply referred to as a union.
It is an organization of workers who have come together for the purpose of achieving common goals, such as protecting the integrity of their trade, improving safety standards, and attaining better wages, benefits (such as health care, vacation, and retirement), and working conditions.
It pursues these goals through the increased bargaining power that comes from solidarity among the workers who belong to the union.
Commonly, trade unions fund their formal organization, head office, and legal team through regular fees or union dues contributed by members.
Workplace volunteers make up the union's delegated representatives and are appointed by members in democratic elections.
A trade union's elected leadership sets up a bargaining committee that negotiates with the employer on behalf of union members (the rank and file) and carries out labor contract negotiations (collective bargaining).
What are the sections of workers that a union might be organized into?
The basic purpose of a trade union, or any such association of workers, is the “maintenance or improvement of the conditions of their employment”.
Unions may organize the following sections of workers:
- Craft unionism: a particular section of skilled workers
- General unionism: workers from a number of different trades
- Industrial unionism: all workers within a particular industry
The agreements reached by a union are binding on the following:
- rank-and-file members,
- the employer, and
- in some cases, other workers who are not union members.
Who Constitutes Trade Unions and Workers?
Since their inception, trade unions have had constitutions that lay out how their bargaining unit is governed; they are also subject to governance at various levels of government, depending on the industry, which makes their negotiations and functioning legally binding.
Trade unions originated in Great Britain and became popular in many countries during the Industrial Revolution.
Trade unions may consist of the following:
- individual workers
- past workers
- students
- apprentices
- unemployed individuals
Trade union density is the percentage of workers who belong to a trade union. Union density is highest in the Nordic countries.
What Is a Union Job?
A union job is a job where the worker belongs to a labor union along with other workers who are also members.
Labor unions unite workers, making room for the voices of individual workers to be heard and, where possible, adopted as goals of the union.
Unionized workers commonly elect representatives to bring members' concerns to the union's attention.
Benefits of Labour Union
These are the benefits of being a member of a labor union, including the chance to become one of the highest-paid union workers.
The benefits include:
1. Collective Bargaining
Collective bargaining is among the reasons why you should become a union member and is the heart and soul of the labor union.
This occurs when a group of individuals, such as the workforce at a company, comes together to increase its negotiating power.
For instance, a single worker might see the need for a new safety measure in his factory, but he might have limited power to get the company to install it.
If the entire workforce needs the new measure, the workers can come together to pressure the company, and the probability of the company complying with the request is much higher.
Labor unions band workers together, making room for the voices of individual workers to be heard and possibly made into a goal of the union.
2. Higher Wages
Among the top benefits of being a union worker is that you are placed on a better salary (higher wages) than your non-union counterparts.
Union workers earn about 20 percent more in wages (excluding benefits) than individuals in similar jobs that are not supported by a union, and they are also more likely to enjoy consistent pay raises on a regular basis.
This is possible because of collective bargaining between the trade union (on behalf of the employees) and the employer, which results in an agreement that sets out clear terms regarding pay and wages.
For non-union workers, the employer can set wages without any formal bargaining process or input from the employee.
3. Better Benefits
Workers who are members of a union are more likely to enjoy better benefits than non-union employees. These benefits include health insurance, retirement accounts, and paid sick leave.
According to the U.S. Department of Labor, 77 percent of union workers get pensions (guaranteed continued payments after retirement), compared to only 20 percent of non-union workers.
Again, representatives of a union work out these details as a part of the collective bargaining agreement with the employer.
4. Union Representatives
One of the reasons why you should be a union worker is that union representatives work on your behalf whenever you have a personal issue with your employer.
Non-union employees have to reach out to the company's human resources department for assistance in such cases. But it's important to keep in mind that the department is part of the company and might be able to do little or nothing.
To resolve the issue, a union representative will set up a meeting between you and the employer and talk things over.
Is A Union Job Worth It?
Yes, a union job is worth it. Union workers earn better wages and benefits than workers who aren’t union members.
Through collective bargaining, which increases workers' negotiating power, they can push for more favorable working conditions and other benefits.
Do Union Workers Make Good Money?
Yes, union workers make good money, thanks to the power of unionism.
They get about 20 percent more in wages (excluding benefits) than individuals in similar jobs that are not supported by a union, and they are also more likely to enjoy consistent pay raises on a regular basis.
How Much Do Union Workers Make?
According to BLS reports from 2019, union workers earned approximately $1,095 per week on average, while non-union workers earned closer to $892. Quite a gap, right?
Union workers are known for earning higher union wages compared to what non-union workers earn.
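For readers who want to see how those weekly figures translate into an annual premium, here is a quick back-of-the-envelope sketch in Python. It assumes only the two 2019 BLS weekly medians quoted above and a 52-week year; the actual premium varies by year, industry, and occupation.

```python
# Back-of-the-envelope comparison of the 2019 BLS weekly earnings quoted above.
# Inputs are approximate medians; everything else is simple arithmetic.
union_weekly = 1095      # union workers, approx. median weekly earnings (2019)
nonunion_weekly = 892    # non-union workers, approx. median weekly earnings (2019)

weekly_gap = union_weekly - nonunion_weekly
premium_pct = weekly_gap / nonunion_weekly * 100
yearly_gap = weekly_gap * 52   # assumes 52 paid weeks in a year

print(f"Weekly gap:   ${weekly_gap}")        # $203
print(f"Wage premium: {premium_pct:.1f}%")   # about 22.8%
print(f"Yearly gap:   ${yearly_gap:,}")      # about $10,556
```

That works out to a premium of a bit over 20 percent before benefits such as pensions and health coverage are even counted, which is consistent with the roughly 20 percent figure cited earlier.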
How To Become A Union Worker
Getting a union job is the same thing as becoming a union worker.
Many individuals pursue union jobs for the security they provide, among many other benefits.
To become a union worker, you are to follow these steps:
- Find a local labor union
- Sign up for an apprenticeship program
- Check job boards
- Connect with a trade union in your industry
- Visit the Union Jobs website
1. Find a local labor union
To find a local labor union in America, check the website of the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO).
Your local labor union can point you toward unionized employers and their websites with job postings. It’s advisable to reach out to union officials for assistance.
Many local union websites also give you the option to request that a union organizer contact you to help.
2. Sign up for an apprenticeship program
Why do you need to sign up for an apprenticeship program? As someone who wants to become a union worker, having experience and training in a trade will position you to be hired by many employers.
An apprenticeship program can also connect you with unions in the industry, or with unionized employers who are looking to hire skilled professionals.
There are a number of apprenticeship programs to choose from; visit https://www.apprenticeship.gov/ to find available apprenticeship opportunities near you.
3. Check job boards
While hunting for a union job, be sure to include union-related keywords in your search so you can find openings at employers that are affiliated with a union.
Some of these employers include information about their affiliated union in their job listings so that job seekers can see those details beforehand.
Aside from the ways mentioned above, you can also apply for jobs on an employer’s website. Visit the websites of places you want to work to find out whether they belong to a union.
If they do, they will most likely have that information clearly displayed, and you can apply to become an employee there.
4. Connect with a union in your industry
A number of unions exist for specific industries, and you can find them with a similar search.
Explore your options and reach out to these unions to find out whether they have any resources for you, or insight into which of their affiliated companies are hiring.
Highest Paid Union Workers
The Wealth Circle (TWC) brings you the list of 25 highest-paid union workers.
1. Actor
National average salary: $17,192 per year
Actors and actresses are among the highest-paid union workers and are professionals who convey the emotions, behaviors, and mannerisms of other individuals in creative settings.
They work in film, television, theater, and other productions.
Actors and actresses are known for having an understanding of human emotions with which they excite people, make them laugh, and elicit other strong emotions from their audience.
They are also known for belonging to a union whose membership is itself a sought-after achievement.
2. High School Teacher
Average salary: $48,736 per year
Primary duties: High school teachers are also known as secondary school teachers.
Part of their responsibility is to prepare lessons, map out assignments, and grade students’ work.
Among the training that high school teachers receive is in differentiation techniques, instruction delivery, and behavior management.
High school teachers are also responsible for offering academic support and advancement opportunities to their students.
3. Engineer
Median annual wage: $91,010, according to the U.S. Bureau of Labor Statistics (BLS).
The BLS also projects that the engineering field will add about 140,000 new jobs over the next decade.
The bottom line about being an engineer is that it is well worth the time and effort it takes.
An Engineer uses math and science to solve different technical problems. The main duties of an engineer include:
- Coming up with new products for companies or individuals to use,
- Maintaining current products to enhance their use, and
- Designing new machines to improve an organization’s efficiency.
4. Doctor
National average salary of doctors in the United States: $175,632
A doctor’s duties and responsibilities are usually specific to, and vary by, specialty.
Though this is the case, all doctors need to be able to perform the following duties:
- Assess symptoms
- Diagnose conditions
- Prescribe and administer treatment
- Provide follow-up care to patients, refer them to other providers in times when such is required, and interpret their laboratory results.
- Work together with physician assistants, nurse practitioners, registered nurses, and other health professionals
- Prescribe medication
- Stay updated with medical technology and research.
5. Lawyer
The median annual wage for lawyers: $126,930.
Some of the duties of a lawyer include the following:
- Interpreting laws, rulings, and regulations for individuals and businesses
- Presenting and summarizing cases to judges and juries
- Representing clients in court or before government agencies
6. Marine Service Technician
National average salary: $41,920 per year
Marine service technicians are among the highest-paid union workers and their job responsibility is to maintain and repair a wide variety of boats and other water vessels.
They also help ensure the safety of marine passengers by inspecting equipment, checking that navigational tools function properly, and making any necessary repairs.
On a daily basis they do the following:
- Running diagnostic tests
- Writing reports
- Submitting findings
7. Airline Manager
National average salary: $42,663 per year
Primary duties: An airline manager is a professional who works in the maintenance department of a commercial airline.
Their job responsibility is to oversee the operations and communications needed for ensuring safe air travel.
Airline managers are known for the following:
- Regularly testing equipment
- Engaging in conversations with airline personnel
- Maintaining proper documentation, and
- Overseeing maintenance repairs and installations.
8. Firefighter
National average salary: $44,313 per year
Primary duties: Firefighters are individuals in a community who respond to fire-related emergencies.
They drive fire trucks and other emergency vehicles, and, most importantly, they extinguish fires in homes, commercial buildings, cars, and natural areas.
Their training also equips them to rescue people and animals from emergency situations such as car accidents.
9. Customer Service Representatives
National average salary: $46,289 per year
Primary duties: Customer service representatives are individuals hired by a company to interact with its clients and customers, and they are among the highest-paid union workers.
On a regular basis, they process orders, deliver billing statements, collect payments, respond to customer complaints, and answer questions about products and delivery.
10. Plumbing Technician
National average salary: $47,820 per year
Primary duties: Plumbing technicians, more commonly known as plumbers, are skilled laborers who install, maintain, and repair pipes and other materials used to carry water or sewage.
This set of union workers regularly coordinates with contractors, electricians, and other professionals in the construction industry.
11. Police Officer
National average salary: $54,074 per year
Primary duties: A police officer is a law enforcement employee and is among the highest-paid union workers.
They work in teams to keep communities safe. They patrol areas, conduct investigations, respond to service calls, and enforce federal and state laws.
On a regular basis they observe, mediate, gather evidence, and report their findings.
12. Truck Driver Trainer
National average salary: $51,409 per year
Primary duties: The job responsibility of a truck driver trainer is to deliver instruction to people who want to become professional truck drivers.
They do the following:
- study state guidelines,
- administer practice tests,
- conduct practice driving sessions and
- complete forms.
13. Nuclear Power Reactor Operator
National average salary: $59,682 per year
Primary duties: Nuclear power reactor operators manage the control systems for nuclear reactors.
Their daily responsibilities include:
- dispatching orders,
- performing maintenance checks, and
- submitting reports.
14. Nurse
National average salary: $62,380 per year
Primary duties: Nurses are among the highest-paid union workers and are healthcare professionals who work in various settings to provide care to patients.
The duties that a nurse performs regularly include:
- checking vitals,
- administering medicine,
- performing patient screenings,
- recording diligent notes,
- writing reports, and
- communicating treatment plans to patients and their advocates.
15. Nurse Practitioner
National average salary: $73,300 per year, according to the Bureau of Labor Statistics
Primary duties: Nurse practitioners are among the highest paid union workers and are health care professionals who work in various settings to provide healthcare for patients.
They assess and diagnose patients alongside doctors and other medical professionals.
Wondering what the difference is between a nurse and a nurse practitioner?
Nurse practitioners differ from registered nurses in that they have advanced training and responsibility; the minimum educational requirement for a nurse practitioner is a master’s degree.
16. Construction Managers
National average salary: $80,303 per year
Primary duties: A construction manager is a skilled laborer who is in charge of overseeing the operations relating to construction sites.
They organize and plan project and site details, and they lead crew members and other personnel.
The duties of construction managers include the following:
- drafting plans,
- ensuring site and crew member safety and managing client needs,
- delegating responsibilities, and
- ensuring that projects are completed on time and under budget.
17. Producer
National average salary: $74,420 per year, according to the Bureau of Labor Statistics
Primary duties: Producers are creative professionals who work in film, theater, or television.
The duties of producers include
- overseeing film production,
- assisting directors and writers in developing a project, and
- reviewing scripts, analyzing budgets, procuring talent, and taking notes.
18. Machine Parts Process Lead
National average salary: $77,988 per year
Primary duties: A machine parts process lead is among the highest-paid union workers; the role involves overseeing part of the manufacturing process.
Machine parts process leads regularly perform quality control checks and communicate with assembly workers and machine operators.
19. Powerplant Operators
National average salary: $87,140 per year
Primary duties: Power plant operators are individuals who work with machines and equipment in power-generating plants.
These powerplant operators control systems and monitor boilers, generators, and turbines.
On a daily basis, the responsibilities of a powerplant operator include running safety checks, cleaning equipment, and submitting reports or logs.
20. Film or Television Director
National average salary: $96,050 per year
Primary duties: A television or film director is an individual who is in charge of overseeing all aspects of transforming a script or concept into a fully realized production for film, television, commercials, music videos, and beyond.
Via the use of creative vision, industry knowledge, and strong leadership skills, the film or television director coordinates all aspects of production.
The production aspects include the following:
- finalizing scripting,
- casting and locations,
- directing the camera and acting on location,
- overseeing sound and video editing in post-production.
21. Electric Project Manager
National average salary: $83,890 per year
Primary duties: Electric project managers are among the highest-paid union workers and professionals who work on construction sites.
The duties of an electric project manager include
- Preparation of reports,
- Presentation of documents
- Estimation and forecast of schedules, project costs, and overall performance.
FAQs on Highest Paid Union Workers
According to the BLS 2019 reports, on average, union workers earned roughly $1,095 weekly, while nonunion workers earned about $892.
Unions may organize the following categories of workers:
Craft unionism: a particular section of skilled workers
General unionism: workers from a number of different trades
Industrial unionism: all workers within a particular industry
The duties of a trade union may include the following:
negotiating wages, work rules, and occupational health and safety standards for workers
increasing cooperation and well-being among workers
securing facilities for workers
establishing avenues for workers and employers to meet
providing labor welfare services
The agreements negotiated by a union are binding on the following:
rank and file members,
the employer and
other non-member workers, in some cases.
A union job is a job where an individual is a part of a labor union with other workers who are members of the Union as well. Unions provide employees with support for their work conditions, benefits, wages, etc.
Trade union jobs are among the sought-after jobs in the world and present a lot of appealing benefits. So, you would do yourself good by getting rightly positioned to get a trade union job.
What vaccine is my child eligible for?
The Pfizer-BioNTech COVID-19 vaccine:
- Children 6 months – 4 years of age are eligible for three shots of the Pfizer-BioNTech vaccine. The initial two doses are administered 3-8 weeks apart, followed by a third dose administered at least 8 weeks after the second dose.
- Children 5 years of age and older are eligible for a two-dose series of the Pfizer-BioNTech vaccine given 3-8 weeks apart.
- Children 5 years of age and older who are moderately to severely immunocompromised should receive an additional dose of the Pfizer-BioNTech vaccine at least 28 days after the final dose (second dose) of their initial, primary vaccine series.
The Moderna COVID-19 vaccine:
- Children 6 months – 5 years of age are eligible for two shots of the Moderna vaccine, given 4-8 weeks apart.
- Children 6 months of age and older who are moderately to severely immunocompromised should receive an additional dose of the Moderna vaccine at least 4 weeks after the final dose (second dose) of their initial, primary vaccine series.
- Children 5 years of age and older should receive a booster dose of the Pfizer-BioNTech vaccine at least 5 months after completing their initial Pfizer-BioNTech vaccine series. Children who have received Moderna COVID-19 vaccine as their primary series are not eligible for a booster dose at this time.
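To make those intervals concrete, here is a minimal Python sketch that turns a first-dose date into the earliest (and, where the schedule above states one, latest) dates for the remaining primary-series doses. The interval values are taken from the schedule listed above; the dictionary keys and function name are illustrative only, and none of this replaces guidance from your child's health care provider.

```python
from datetime import date, timedelta

# Minimal sketch of the earliest next-dose dates implied by the schedule above.
# Intervals come from this FAQ: Pfizer-BioNTech, 6 months - 4 years -> dose 2 at
# 3-8 weeks, dose 3 at least 8 weeks later; Pfizer-BioNTech, 5 years and older ->
# dose 2 at 3-8 weeks; Moderna, 6 months - 5 years -> dose 2 at 4-8 weeks.
# Illustrative only -- confirm actual timing with your child's provider.

SCHEDULES = {
    "pfizer_6mo_4yr":  [(3, 8), (8, None)],  # (min_weeks, max_weeks) after the previous dose
    "pfizer_5plus":    [(3, 8)],
    "moderna_6mo_5yr": [(4, 8)],
}

def next_dose_windows(vaccine: str, dose1: date) -> list:
    windows, previous = [], dose1
    for min_wk, max_wk in SCHEDULES[vaccine]:
        earliest = previous + timedelta(weeks=min_wk)
        latest = previous + timedelta(weeks=max_wk) if max_wk else None
        windows.append((earliest, latest))
        previous = earliest  # assume each dose is given at the earliest opportunity
    return windows

for earliest, latest in next_dose_windows("pfizer_6mo_4yr", date(2022, 8, 1)):
    print("earliest:", earliest, "| latest:", latest or "no upper limit listed")
```

Running the sketch for a first dose on August 1, 2022, for example, shows dose 2 no earlier than August 22 and, if dose 2 is given at that three-week mark, dose 3 no earlier than October 17.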
Which vaccine should my child get?
All of the COVID-19 vaccines available for children 6 months and older are safe, effective, and recommended. Depending on your child’s age, they may be eligible for either the Pfizer-BioNTech or Moderna COVID-19 vaccine.
If your child is eligible for both the Pfizer-BioNTech and Moderna vaccine, parents and guardians can choose which vaccine they’d like their child to receive or consult with their child’s health care provider or vaccine administrator if they have questions. NYSDOH recommends parents and guardians get their child vaccinated with whichever vaccine is available.
Where can I find COVID-19 vaccine for my child?
COVID-19 vaccines for children 6 months of age and older are free and widely available statewide, including through pediatricians, family physicians, local county health departments, federally qualified health centers, and pharmacies enrolled in the Federal Retail Pharmacy Program.
Parents and guardians are encouraged to contact their child’s healthcare provider about scheduling a vaccine appointment for children under five years of age.
Parents and guardians can also visit vaccines.gov, text their ZIP code to 438829, or call 1-800-232-0233 to find nearby locations. Please note, due to federal regulation, some pharmacies are only able to vaccinate children three years and older. If you are scheduling a vaccine appointment at a pharmacy for your child three years and older, you may need an authorization code from your pediatrician to validate their age. Make sure the provider is administering the vaccine to children under five years of age.
Is the COVID-19 vaccine free?
All COVID-19 vaccines are free and available at no cost. There is also no charge for the injection or administration of the vaccine. This includes the COVID-19 vaccine for children. Health care providers who give COVID-19 vaccines must vaccinate everyone – whether or not they have health insurance.
What You Should Know
Can children really catch COVID-19?
Yes. Individuals of all ages, including babies, toddlers, and children and teens of all ages can contract the virus that causes COVID-19 as well as spread it to others.
What are the risks of my child being unvaccinated?
Those who are unvaccinated have the greatest risk of infection and severe disease from COVID-19, including hospitalization and death. This is true for children of all ages, including babies and toddlers. Children are also at risk of a dangerous inflammatory condition called MIS-C which can occur several weeks after COVID-19 infection.
Vaccination will help protect your little ones and reduce their risk of severe disease, hospitalizations, or developing long-term COVID-19 complications.
That’s why NYSDOH, CDC, and pediatricians across New York and around the country, including the American Academy of Pediatrics, recommend that all eligible babies, toddlers, and children 6 months and older stay up to date with their COVID-19 vaccines.
What is long COVID, and is my child at risk?
Children who contract COVID-19 may be at risk of long COVID. Symptoms associated with long COVID can vary widely, from cardiovascular symptoms like heart palpitations to difficulty breathing and excessive fatigue and can include difficulty concentrating or other psychological symptoms. Long COVID symptoms can occur even if the initial COVID illness is not severe and can last for months or even a year. Scientists are still working to understand long COVID.
What about immunocompromised children?
- Children 5 years of age and older who are moderately to severely immunocompromised should receive an additional dose of the Pfizer-BioNTech or Moderna vaccine at least 4 weeks after the second dose of their primary vaccine series.
- Children ages 6 months through 4 years old who receive Moderna vaccine should receive an additional dose of Moderna vaccine at least 4 weeks after the second dose of their primary vaccine series.
- Children ages 6 months through 4 years old who receive a 3-dose primary series of Pfizer-BioNTech vaccine are not eligible for an additional dose at this time.
What immunocompromising conditions currently qualify children 6 months – 11-years-old to be eligible for an additional dose of the COVID-19 vaccine?
Consistent with CDC's guidance, this includes moderately or severely immunocompromised due to a medical condition or receipt of immunosuppressive medications or treatments. Specifically, immunocompromising conditions may include:
- Been receiving active cancer treatment for tumors or cancers of the blood
- Received an organ transplant and are taking medicine to suppress the immune system
- Received a stem cell transplant within the last 2 years or are taking medicine to suppress the immune system
- Moderate or severe primary immunodeficiency (such as DiGeorge syndrome, Wiskott-Aldrich syndrome)
- Advanced or untreated HIV infection
- Active treatment with high-dose corticosteroids or other drugs that may suppress your immune response
Because of the risk of COVID-19 infection in this population, immunocompromised people should continue to be counseled regarding the potential for a reduced immune response after vaccination and the importance of additional protective measures, regardless of the decision to receive an additional dose of the COVID-19 vaccine. Prevention measures include wearing a well-fitting mask, staying six feet apart from others they don't live with, and avoiding crowds and poorly ventilated indoor spaces until advised otherwise by their healthcare provider particularly in areas of increased transmission. Close contacts of immunocompromised people should be strongly encouraged to be vaccinated against COVID-19.
Parents or guardians with questions are encouraged to consult with their child's health care provider.
Safety and Efficacy
Is the COVID-19 vaccine safe for children?
Yes. COVID-19 vaccines have undergone – and will continue to undergo – the most intensive safety monitoring in U.S. history. The U.S. Food and Drug Administration’s (FDA) evaluation and analysis of the safety, effectiveness and manufacturing data of these vaccines was rigorous and comprehensive, supporting the authorizations for administering the COVID-19 vaccine down to children 6 months of age.
The CDC Director and its Advisory Committee on Immunization Practices’ (ACIP) recommend that all children 6 months and older should receive a COVID-19 vaccine. Additional information can also be found on www.cdc.gov.
Is the vaccine effective for children?
Yes, the COVID-19 vaccines authorized for children down to 6 months of age are safe, effective, and the best way for you to protect your child from the virus. Parents and guardians can learn more about the safety and effectiveness of the COVID-19 vaccines for children on the FDA's website.
What are the side effects my child may experience after being vaccinated?
Your child may not notice any changes in how they feel after getting the vaccine. But it’s also possible to feel a little “under the weather.” This can happen after any vaccine. It’s also important to know that children 6 months – 11 years of age receive a smaller dose of the COVID-19 vaccine than adolescents and adults 12 and older.
After the COVID-19 vaccine, your child may have:
- A sore arm where they got the shot
- A headache
- Nausea and vomiting
These side effects are not dangerous and just a sign of your child’s immune system doing its job. Parents and guardians are encouraged to speak with their child’s pediatrician or primary health care provider if they have questions.
Will the vaccine give my child COVID?
No. None of the COVID-19 vaccines—including the mRNA Pfizer-BioNTech and Moderna vaccines authorized for children down to 6 months of age—can give your child COVID-19.
None of the vaccines are made up of materials that can cause disease. For example, the first vaccines authorized for emergency use by the FDA use a small, harmless part of the virus’ genetic material called ‘mRNA’. This is not the virus. mRNA vaccines teach your or your child’s body to create a virus protein. Your or your child’s immune system develops antibodies against these proteins to fight the virus that causes COVID-19 if you or your child are exposed to it. That is called an immune response.
If my child is 11, should I wait for them to turn 12 so they can receive a larger dose of the Pfizer-BioNTech COVID-19 vaccine?
No. Parents and guardians should get their children ages 6 months – 11 years vaccinated as soon as possible with the appropriate dosage, which is based on their age at the time each vaccine dose is administered. Our nation’s best medical and health experts have worked to ensure that the vaccine doses for 5 – 11-year-olds are safe and effective – offering our children excellent protection against COVID-19 and generating a strong immune response.
If my child’s weight is closer to a 12-year-old’s weight, would that qualify them for the higher dose of the COVID-19 vaccine? Or, if my teen is underweight, should they get the lower dose?
No. In fact, weight is not a factor in determining the right dosage amount for your child. Instead, the dosage amount is based on age because age is what reflects the maturity of your child’s immune system. That’s why eligible children ages 6 months – 11 years should get vaccinated as soon as possible with the appropriate dosage, which is based on their age at the time each vaccine dose is administered.
If my child turns 12 in between their first and second doses, what should they get for their second dose?
Eligible children ages 6 months – 11 years should get vaccinated as soon as possible with the appropriate dosage, which is based on their age at the time each vaccine dose is administered. This means if a child who is 11 turns 12 between their first and second vaccine dose, then that child should receive the dosage amount for a 12-year-old for their second dose.
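Because the correct formulation depends on the child's age on the day each dose is given, the rule can be illustrated with a short sketch. The age bands are the ones named in this FAQ for the Pfizer-BioNTech vaccine (6 months – 4 years, 5 – 11 years, 12 and older); the function names are illustrative, and the vaccinating provider always determines the correct product and dose.

```python
from datetime import date

# Sketch of the "age at the time each dose is administered" rule described above,
# using the Pfizer-BioNTech age bands mentioned in this FAQ. Illustrative only.

def age_in_years(birthdate: date, on: date) -> int:
    years = on.year - birthdate.year
    if (on.month, on.day) < (birthdate.month, birthdate.day):
        years -= 1  # birthday hasn't happened yet this year
    return years

def pfizer_dose_group(birthdate: date, dose_date: date) -> str:
    age = age_in_years(birthdate, dose_date)
    if age >= 12:
        return "12 and older formulation"
    if age >= 5:
        return "5 - 11 pediatric formulation"
    return "6 months - 4 years formulation"

# A child who is 11 at dose 1 but turns 12 before dose 2 gets the larger dose the second time.
birthday = date(2010, 9, 1)
print(pfizer_dose_group(birthday, date(2022, 8, 15)))   # 5 - 11 pediatric formulation
print(pfizer_dose_group(birthday, date(2022, 9, 15)))   # 12 and older formulation
```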
Do the Pfizer-BioNTech or Moderna COVID-19 vaccines contain animal-based ingredients?
No! The Pfizer-BioNTech and Moderna COVID-19 vaccines contain no human or animal products, preservatives, or adjuvants, and they utilize no ingredients of human or animal origin.
Do COVID-19 vaccines contain mercury/thimerosal?
No! There are no preservatives such as mercury/thimerosal in any of the currently available COVID-19 vaccines, including the Pfizer-BioNTech and Moderna vaccines available for children down to 6 months of age.
Can my children receive the COVID-19 vaccine at the same time they receive other vaccines?
Yes. According to the CDC, there is no recommendation that any spacing is needed for your child to receive the COVID-19 vaccines and other vaccines. This means your child can get the COVID-19 and other vaccines—such as their seasonal flu shot—at the same or any time. This includes together, before, or after other vaccines.
My child tested positive for COVID-19 and/or COVID-19 antibodies. Do they still need the vaccine?
Yes! The CDC recommends that individuals get vaccinated even if they have already had COVID-19, because they can be infected more than once. While your child may have some immunity after recovering from COVID-19, we don’t know how long this protection will last. Vaccination is safe, including in a child who has already been infected. Children who get COVID-19 are at risk of serious illnesses, and some have debilitating symptoms that persist for months.
Is it better for my child to get natural immunity to COVID-19, rather than immunity from a vaccine?
No! All children, including children who have already had COVID-19, should get vaccinated.
Children who get COVID-19 are at risk of serious illness, and some have debilitating symptoms that persist for months. While your child may have some immunity after recovering from COVID-19, we don’t know how long this protection lasts. Getting vaccinated against COVID-19 is safe and effective, and will protect your child against the virus.
Can my child get the COVID-19 vaccine if they are sick?
If your child is sick with COVID-19, they should wait to be vaccinated until they have recovered and are no longer isolated. If your child is sick with an illness other than COVID-19, you can check with your child’s pediatrician or primary health care provider for advice on when your child should be vaccinated.
Can my child attend preschool/school while they have side effects from getting the COVID-19 vaccine?
Children can attend school following COVID-19 vaccination if they feel well enough to attend school and do not have a fever or other COVID-19 symptoms. If your child experiences the following symptoms, they should not attend school. These symptoms would not be expected from the COVID-19 vaccine, but might be seen with COVID-19 illness or other viral illnesses:
- Runny nose
- Shortness of breath
- Sore throat
- Loss of taste
- Loss of smell
Side effects from the COVID-19 vaccine are not severe or dangerous, and it is not likely that they would cause a child to miss school.
What if my child is exposed to COVID-19 before vaccination or between doses?
If your child is exposed to COVID-19 before vaccination, they should complete their quarantine before starting the vaccination series. If your child is exposed to COVID-19 after receiving a vaccine dose but before finishing the series, they should complete their quarantine before getting their next COVID-19 dose. It’s okay to delay the next dose beyond the recommended interval for this reason.
Please let your child’s health care provider or vaccine administrator (e.g., the clinic where your child will be receiving the vaccine) know if you need to reschedule their vaccine appointment due to quarantine. Completing quarantine before getting the COVID-19 vaccine will help protect those around your child from infection.
What if my child is infected with the virus that causes COVID-19 before vaccination or between doses?
If your child has a COVID-19 infection, whether symptomatic or not, before any dose in the vaccination series, they should complete their isolation before getting the dose. Additionally, you should talk with your child’s healthcare provider about the possibility of delaying the dose for 3 months from the date your child’s symptoms started or (if your child didn’t have symptoms) the date of the positive test.
Please let your child’s health care provider or vaccine administrator (e.g., the clinic where your child will be receiving the vaccine) know if you need to reschedule their vaccine appointment due to isolation. Completing isolation before getting the COVID-19 vaccine will help protect those around your child from infection.
What’s in the Pfizer-BioNTech COVID-19 vaccine?
The Pfizer-BioNTech COVID-19 vaccine includes the following ingredients:
- mRNA: mRNA is not the virus itself. The mRNA vaccines (like the Pfizer-BioNTech vaccine) teach your body to create proteins. Your body recognizes these proteins and jumps into action, making antibodies that help you fight the virus, which is called an immune response. It reproduces the same immune response that happens in a natural infection without actually infecting your body.
- Lipids: fat-like substances that protect the mRNA and provide a bit of a greasy exterior that helps the mRNA slide inside cells. The following lipids are in the Pfizer-BioNTech COVID-19 vaccine: ((4-hydroxybutyl)azanediyl)bis(hexane-6,1-diyl)bis(2-hexyldecanoate), 2-[(polyethylene glycol)-2000]-N,N-ditetradecylacetamide, 1,2-distearoyl-sn-glycero-3-phosphocholine, and cholesterol.
- Salts: help balance the acidity in your body. The following salts are in the Pfizer-BioNTech COVID-19 vaccine: potassium chloride, monobasic potassium phosphate, sodium chloride, and dibasic sodium phosphate dihydrate.
- Sugar: helps the molecules keep their shape during freezing. The following sugars are in the Pfizer-BioNTech COVID-19 vaccine: sucrose (table sugar).
For a simple breakdown of the ingredients in the Pfizer-BioNTech COVID-19 vaccine, see this infographic here.
How were the vaccines developed so quickly?
There are many factors that combined to allow the COVID-19 vaccine to be developed quickly and safely.
- Researchers got a head start on developing a vaccine because the virus that causes COVID-19 is similar to other existing viruses.
- Research about the new virus was shared almost immediately with scientists all over the world, which allowed work to begin on a vaccine right away.
- Some researchers were able to run phase one and two trials at the same time.
- The COVID-19 vaccine studies included a larger number of people than other recent vaccine trials, enrolling more participants over a shorter period of time.
- The federal government allowed manufacturing of the most promising vaccines to begin while the studies were ongoing. That means that when it was authorized it could be offered to the public almost immediately.
This does not mean the COVID-19 vaccine is not safe. The COVID-19 vaccine is safe and effective and will protect your child against the virus.
What are mRNA COVID-19 vaccines, including the Pfizer-BioNTech and Moderna vaccines, that are authorized for children down to 6 months of age?
The Pfizer-BioNTech and Moderna mRNA vaccines help your child’s body protect itself against future infection. Your child’s body gains protection without getting seriously sick with COVID-19.
On the surface of the virus that causes COVID-19 is a “spike protein.” When your little one gets vaccinated, the mRNA vaccine instructs your cells to make a harmless piece of this protein. The “spike protein” is then displayed by some of your cells. The mRNA in the vaccine degrades quickly.
Your immune system will recognize that this protein does not belong. It will then make antibodies against it. This is similar to what happens if you get naturally infected with the virus that causes COVID-19. In a natural infection the virus itself forces your cells to make the spike protein along with other viral proteins.
What happens inside my child’s body when they get a vaccine, such as a COVID-19 vaccine?
Vaccines teach our cells how to make a protein. This protein, or piece of the protein, will trigger an immune response in your body. The process is sometimes called either a blueprint or instructions. The body uses this information to create a response to keep you safe from the virus. The vaccine itself then breaks down and falls apart in the body right away.
How can I be sure that the COVID-19 vaccine does not change my child’s DNA?
The COVID-19 vaccines do not change or interact with your or your child’s DNA in any way. Both mRNA (Pfizer-BioNTech and Moderna) and viral vector (Janssen/Johnson & Johnson) COVID-19 vaccines deliver instructions to our cells. However, the instructions never enter the nucleus of the cell, where DNA is located.
They tell our cells to start building protection against the virus that causes COVID-19. The vaccine itself breaks down and falls apart in the body right away.
Allergies and/or Reporting Adverse Events
Who should not get the Pfizer-BioNTech or Moderna COVID-19 vaccine?
According to the FDA, children should not get the Pfizer-BioNTech or Moderna COVID-19 vaccine if they:
- had a severe allergic reaction after a previous dose of the vaccine
- had a severe allergic reaction to any ingredient of the vaccine.
Is it possible for my child to have an allergic reaction?
There is a remote chance that the Pfizer-BioNTech COVID-19 or Moderna COVID-19 vaccine could cause an allergic reaction. People can have allergic reactions to any medication or biological product, including vaccines. Most allergic reactions occur shortly after a vaccine is administered, which is why the Centers for Disease Control and Prevention (CDC) recommends that persons with a history of anaphylaxis (due to any cause) are observed for 30 minutes after vaccination, while all other persons are observed for 15 minutes after vaccination. All vaccination sites must be equipped to ensure appropriate medical treatment is available in the event of an unlikely allergic reaction. The CDC recommends anyone with an allergy to "any component" of the vaccine not get the vaccine.
What are the signs of a severe allergic reaction to the Pfizer-BioNTech or Moderna COVID-19 vaccine?
The chance of a severe allergic reaction is remote. Severe allergic reactions usually occur within minutes after getting a dose of the Pfizer-BioNTech or Moderna COVID-19 vaccine. Signs of a severe allergic reaction can include:
- Difficulty breathing
- Swelling of your face and throat
- A fast heartbeat
- A bad rash all over your body
- Dizziness and weakness
What are the risks of my child having myocarditis and pericarditis from the Pfizer-BioNTech COVID-19 vaccine?
The chance of having either occur is very low. Cases of myocarditis (inflammation of the heart muscle) and pericarditis (inflammation of the lining outside the heart) have been reported both in adolescents and young adults who contracted the COVID-19 virus and in those receiving one of these two mRNA COVID-19 vaccines. These reports are rare, and the known and potential benefits of COVID-19 vaccination outweigh the known and potential risks, including the possible risk of myocarditis or pericarditis. The FDA advises that you tell the vaccination provider about your child’s medical conditions, including if your child has had myocarditis or pericarditis in the past. You should seek medical attention right away if your child has any of the following symptoms after receiving the Pfizer-BioNTech or Moderna COVID-19 Vaccine:
- Chest pain
- Shortness of breath
- Feelings of having a fast-beating, fluttering, or pounding heart
My child has allergies. Can they be vaccinated?
If your child has an allergy to any ingredient in the Pfizer-BioNTech or Moderna vaccine, or to a previous dose of the Pfizer-BioNTech or Moderna vaccine, they should not receive this vaccine. Here are ingredients for the Pfizer-BioNTech vaccine. Here are the ingredients for the Moderna vaccine.
If your child has had an immediate allergic reaction of any severity to another vaccine or injectable therapy, it is a precaution to getting a COVID-19 vaccine. This does not mean that your child cannot get the Pfizer-BioNTech vaccine. You should talk with your child’s health care provider about the risks and benefits of your child getting the Pfizer-BioNTech COVID-19 vaccine.
People with other allergies not related to a vaccine or other injectable therapy may get a COVID-19 vaccine. This includes allergies to food, pets, venom, environmental allergies, or allergies to medications taken by mouth. Children with these types of allergies may get the Pfizer-BioNTech COVID-19 vaccine. If you have any questions or concerns about the risks and benefits of the Pfizer-BioNTech COVID-19 vaccine for your child, you should speak with your child’s health care provider.
Can COVID-19 vaccines affect puberty and/or the future fertility of my child?
No. There is no evidence that any vaccine, including the COVID-19 vaccines, causes fertility side effects. Additionally, the vaccine does not affect puberty. For more information, visit ny.gov/getthevaxfacts.
Parents, guardians, and community members should all be aware of good sources of information from trusted, credible organizations regarding the COVID-19 vaccine and children. New York State recommends the following links for those seeking more information, as well as resources that can help explain and guide conversations with their children.
Where can I find more information about the COVID-19 vaccines?
It is very important to know that the sources of COVID-19 vaccine information that you use are trusted sources of accurate information – so you can make informed decisions about your health and the health of your child.
In addition to this dedicated website for New York parents and guardians of children 5 – 11 years-old, visit New York State’s #GetTheVaxFacts page for credible, accurate information New York parents and guardians can trust: ny.gov/getthevaxfacts.
The Centers for Disease Control and Prevention is one of these trusted sources. Information on COVID-19 vaccines can be found on this CDC webpage.
- Here's information on COVID-19 vaccines for children and teens.
- CDC’s “Frequently Asked Questions about COVID-19 Vaccination” webpage is also very helpful.
The Children’s Hospital of Philadelphia (CHOP) has a Vaccine Education Center. On this website you can find trusted information about COVID-19 and COVID-19 vaccines.
- The CHOP Vaccine Education Center also has a mobile app, “Vaccines on the Go: What You Should Know.” It offers helpful information about vaccines, including COVID-19 vaccines.
The American Academy of Pediatrics (AAP) is another source of trusted information. Pediatricians provide information on AAP’s COVID-19 webpage. They cover many topics related to COVID-19 and children. The following are some of the topics related to COVID-19 vaccines:
- “The Science Behind COVID-19 Vaccines: Parent FAQs"
- “Getting Your Child Ready for the COVID-19 Vaccine"
- “State report of pediatric cases, hospitalizations and deaths”
- The “healthychildren.org” website has a FAQ webpage about the COVID-19 vaccines. The answers are from the Academy of Pediatrics.
For general resources and information:
- KidsHealth's "Resources to Help Explain COVID-19 to Children"
For helpful videos that can help parents and guardians explain the COVID-19 vaccine to children:
- University of Michigan's "All About Coronavirus: A Video for Kids and Their Families"
- Unicef's "COVID-19 Vaccines Explained in 4 Levels of Difficulty"
What is a lip tie?
Lip frenulums (maxillary frenulums) are located between the upper jaw, or maxilla, and the inside of the upper lip. These lip frenulums can be broken down into 4 categories, but the most important aspect, in regard to breastfeeding, is how the frenulum impacts the ability of the lip to flange and function. The frenulum can vary in thickness, length, and connection point between the lip and upper jaw. As you can imagine, with so many variables, precisely diagnosing a lip tie can be complex.
When assessing an infant to determine whether a lip frenulum is negatively impacting nursing, a simple exam can be performed to assess the range and ease of lip motion. The infant’s upper lip should roll out and up towards the tip of the nose with little resistance and minimal to no blanching of the frenulum in the area where it connects to the maxilla. If the lip functions well, it will roll back, and the tip of the lip will be able to contact or come into very close contact with the tip of the nose. During nursing the infant must breathe through the nose, and the lip does not even need to flange back as far as the tip of the nose.
Both of these labial frenulums allow for the upper lip to flange up and back towards the nose with little to no resistance and do not blanch the gum tissue. These are normal and functional frenulums.
If, during the exam of the upper lip, the lip is unable to flange to just shy of the tip of the nose and blanching is seen on gently rolling the lip back, the lip frenulum may be restricted and may negatively impact the seal or the mouth’s ability to achieve a wide gape or opening.
These labial frenulums are tighter and more restricted as the lip is reflected up and back towards the nose. Notice the blanching or white area in the gums and inability to roll up and back with ease.
What should breastfeeding be like for the infant?
You will learn to understand your infant’s hunger cues and behaviors in the first weeks of life. The infant will typically eat on a very set schedule, and nursing sessions should be efficient and not take excessive amounts of time. Longer nursing sessions can burn valuable calories and cause irritation to the mother’s breasts. Latching may take a few tries to establish properly, but once established it should only require subtle adjustments.
The mouth should open wide and accept the nipple and areola with a wide gape or opening. The tongue will grasp, stabilize, and draw the nipple into the mouth and create a vacuum, which will elongate the nipple to the back of the infant’s mouth. Once the milk starts to flow, the tongue will continue the wave-like motion to maintain the vacuum and depth of latch. The infant will make a suck or two, and then an audible swallow should be heard as the milk is swallowed. The sounds of “gulping” and “clicking” can signify that a poor vacuum is in place and the child is swallowing more air than milk.
The infant’s hands should be open and relaxed and eye contact maintained with the mother. Frustration, fatigue, and quickly falling asleep at the breast are behaviors that are not common in an efficient and effective nursing infant. After feeds, the baby should be fairly easy to burp and be satiated and happy. Other common problems and concerns regarding the newborn are outlined below and discussed in more depth. The baby must have a properly functioning tongue and oral motor coordination to breastfeed efficiently. Once a lactation consultant has properly assessed you and your baby and a functional issue is suspected, you should consider looking further into a tongue and/or lip tie issue.
Why does my child make a clicking or gulping sound when nursing?
The tongue is needed to make the primary seal, and the lips help make a secondary seal when nursing. The inability of the tongue to groove and elevate around the nipple, and of the upper lip to properly flange out, does not allow the baby to make a good seal at the breast. When the upper lip is curled in and remains curled in, milk can leak out of the sides of the mouth, or air can be ingested and swallowed by the baby. You may notice small, darker triangles in the corners of the mouth if the lip is not fully flanged. The parent will typically need to flange and adjust the upper lip manually to position it properly. Even after a revision, the upper lip may still need to be manually repositioned until prior compensatory habits are unlearned and the facial musculature works less at the breast.
The tongue also plays a part in the maintenance of a seal because it pulls the nipple into the mouth and enables the baby to latch. The tongue needs to extend, groove and cup around the nipple to pull it into the mouth. If a tongue has limited ability to extend and elevate or cup around a nipple, or the finger when examined, this may also contribute to milk leakage and excessive air intake.
The clicking sound heard when the infant nurses can be the result of poor elevation of the tongue or a stronger letdown. As the tongue elevates to draw the nipple into the mouth and form a vacuum, the baby needs to maintain a wide-open mouth and allow the tongue to elevate. If the tongue is unable to maintain the elevation, each suck will make a click sound as the tongue drops and breaks the vacuum. The infant will gulp air and swallow it when the system is not closed. This clicking and gulping can lead to ingested air and, if not properly managed, to gassiness, excessive burping, and even symptoms of reflux. This is referred to as Aerophagia Induced Reflux (A.I.R.).
The best way to think about it is the mouth has to make a tube, or closed system to effectively draw milk from the breast or bottle. The roof of the mouth or hard palate forms the top half of the tube and the cupped and grooved tongue forms the bottom half of the tube. These two halves must come together or the tube is not formed and no seal is produced.
Why is my baby having excessive gas, hiccups, fussiness or reflux?
Mild degrees of reflux, hiccups, gas and spit up are all normal for a newborn or infant, but the cause can be for a host of reasons and should be explored. These issues may be due to gastrointestinal issues, normal variations in muscle development and tone of the GI system, food sensitivities associated with the mother’s diet or from excess air intake during bottle and/or breastfeeding. If excess air is ingested, it must exit the body either as gas or burping. If the air is burped up, it can bring up stomach acid and cause discomfort and mimic reflux. The excess air can also distend the stomach and cause fussiness and irritation with the child, too. The child’s stomach may be distended or appear fuller when filled with excessive air after a feeding and mimic colic-like symptoms. We refer to this phenomenon of reflux that is caused by excessive air intake during nursing or bottle feeding as Aerophagia Induced Reflux (A.I.R.).
An excessive amount of or very frequent hiccups can be the result of excess air intake while feeding, too. The air intake will distend the stomach and it pushes on the diaphragm, which is the muscle used to fill and empty the lungs. When the stomach places pressure on the diaphragm, its rhythmic cycle can be broken and lead to hiccups, especially after feeding.
How does the procedure work and how long will it take?
After a thorough review of the mother’s and infant’s feeding histories and the infant’s birth and medical history, the infant will be examined. After a full evaluation and a discussion of treatment options, a signed consent is obtained from the parent and the procedure can proceed. The infant is taken into a treatment room with the doctor and an assistant. The parents will wait in the exam room and review the post-operative guidelines and instructions that they are given at the appointment.
The infant will be swaddled in the treatment room and protective eyewear placed on the infant, provider and assistant in the room. Once all eyewear is in place, the laser is turned on. The assistant will help stabilize the infant and maintain the swaddle during the procedure. The type of laser we use is a Diode Laser and is able to precisely and quickly release the upper lip. The entire procedure to release the upper lip will last about 10 to 15 seconds.
Some infants will benefit from the release of the upper lip to allow for a wider gape and an improved seal while breastfeeding. When revising the upper lip, we grasp the upper lip and gently roll it up towards the nose to reveal the upper lip frenulum. The frenulum is then released from the upper jaw to allow for an improved range of motion of the upper lip.
Post-procedural bleeding is rare but may occur and is easily managed with light pressure.
Post Revision Wound Management
After the revision of the lip or tongue, active wound care and stretching are mandatory. The purpose of these stretching exercises is to keep the wound from excessively contracting and tightening as the area heals.
No technique or frequency of stretches has been agreed upon or found to be more or less effective.
If the wound site does not look or feel completely healed at Day 14, continue the stretches for one more week. Please see the post-revision picture sequence page to see how the wound may heal over the course of 2 weeks.
You will have to find the best times to do the stretches for your infant. Stretching between breasts, halfway through a bottle, or while changing a diaper are typically the three best options. The stretches should be spaced out evenly through the day.
Stretching of the lip revision site:
The lip stretch is somewhat easier to access than the tongue, but it may be slightly more sensitive when stretching. Please refer to the photo below for guidance on the lip exercises.
Once the tongue has been stretched, take the index finger and gently place it up under the lip into the area of letter A or B. In a windshield-wiper motion, wipe the index finger under the lip as you touch the wound site. It should feel smooth as you run your finger from side to side. Take the finger from letter A to letter B and back. Repeat this 4 times. To review: finger under the lip, from A to B and B to A, repeated 4 times.
Remove the finger from the mouth and place the infant’s head in both open palms. Take your thumbs and gently roll the lip up towards the nose, holding this position for 5 seconds. After rolling the lip back, you have completed the stretches.
Once you are comfortable with the stretches, it will take roughly 20-30 seconds to complete a full set of stretching.
Do lip frenulums impact speech?
Lip frenulums have very little, if any, impact on speech. The upper lip is involved in the “B” and “P” sounds, which are made as the lips come together in contact. In a normal resting position, the lips should be able to gently touch one another. The tongue can have a much greater impact on speech and jaw development, which affects speech and articulation more profoundly than any lip frenulum.
One of the most common questions about lip frenulum revolves around a space or gap between the front teeth and the frenulum. Spacing in infant and children’s teeth is extremely beneficial and ideal. These baby teeth that are spread out and have spaces are easier to clean and the space between the baby teeth will be later occupied by the much wider adult teeth. Genetics play a large part in spacing between the front two teeth and is referred to as a diastema. If the child’s parent or grandparent has a prominent space between the two front teeth or the gap was corrected through braces or cosmetic dentistry, the infant will likely have a diastema as well later on in life. Revising or fixing the frenulum as an infant will NOT resolve the genetic cause of this diastema.
How does the lip frenulum affect the teeth and hygiene?
Lip frenulums can present as thicker and shorter and can extend over the maxilla and onto the hard palate. At times this presentation can make it very challenging for the parent to brush the upper teeth and can possibly impact the esthetics of the child’s smile. It is extremely difficult to predict how the presentation of a labial frenulum will impact future hygiene and the smile. If the lip is difficult to reflect back to access the teeth so they can be brushed, the risk of plaque buildup can increase. Lip ties do NOT cause dental decay, but the longer plaque sits on those teeth, and the more carbohydrates the bacteria in the plaque have access to, the greater the risk of demineralization (white, chalky lines on the teeth) and dental caries, or a cavity. When plaque sits on the teeth for too long, the bacteria in the plaque use these carbohydrates and produce acid. This acid will break down and weaken the organic structure of the enamel and, over a period of time, lead to a white, chalky line under the plaque and then decay. Proper hygiene and diet are extremely important at a young age to help minimize or avoid these problems. Starting to brush once the first tooth erupts is a good practice, and seeing a pediatric dentist at or around the 1st birthday is another great way to help monitor and avoid preventable dental issues.
The picture on the left shows an upper lip that is held tight to the upper jaw; when smiling, the inside of the upper lip is seen and the smile shows very little gingiva. The picture on the right shows the same patient’s thick frenulum and dental decay on the front teeth.
The same patient immediately post-frenulectomy, showing a level lip position and a uniform display of the gingiva. The right side shows the revision site immediately after the laser frenulectomy. Notice there is no blanching between the front teeth, and fillings have been placed on the front teeth.
What is a tongue tie?
The tongue is an extremely important, complex, and still not fully understood muscle that forms the first part of the gastrointestinal system. It plays a major part in feeding, oral hygiene, speech, and craniofacial growth and development. The tongue is made up of 8 muscles that each function in a unique manner and collectively act together as one unit. Under the tongue, a piece of tissue called a frenulum exists in virtually all humans. This tissue is a remnant of the embryologic development of the tongue, and it is normal for all individuals to have some degree of frenulum present.
Tongue frenulums can be broken down into upwards of 5 degrees or categories, but the most important aspect is functional impairment, or the impact on tongue mobility and overall function. Terms like tongue tie or tethered oral tissue (TOT) are commonly used to describe these tight or restrictive pieces of tissue. The term slight or small tongue tie is a misnomer and does not depict how well or poorly the tongue is able to function. For the purposes of simplicity, these can be broken down into functional and dysfunctional tongue frenulums.
A functional lingual frenulum will allow for proper movement and range of motion of the tongue. It will not restrict or negatively impact surrounding structures and may or may not be visually evident. If a tongue frenulum is seen, that does not mean it is necessarily a “tie” or restricting of the tongue motion or range. On the other hand, a tongue that is able to extend out, does not necessarily mean it is functional.
Dysfunctional Frenulums or “Ties”:
Many times the symptoms being experienced by the parent and/or the child and the actual feel of the frenulum are key in helping determine if the tongue frenulum is truly tied and impacting function. More anterior ties, or that attach closer to the tip of the tongue are easy to visually diagnose, but more posterior or submucosal frenulums are not always visually evident alone.
Some individuals will have a visually evident tongue tie or restriction, but may not presently experience any symptoms or problems. These cases are still important to assess and address due to longterm issues that may impact the child, which are discussed later. A complete assessment by a well-trained lactation consultant or medical professional should be conducted to ensure compensatory mechanisms are not masking underlying problems and putting the dyad at risk for future problems.
These pictures show a variety of lingual frenulums or tongue ties in infants. Notice the varied thickness, degree of webbing, connection points with the tongue and into lower jaw and restriction to tongue elevation.
Won’t the tie just stretch with time?
The frenulum is made up of fibrous tissue (Type 1 Collagen) that is equivalent to a rope. This tissue will stretch only about 3% and it is NOT a rubber band or elastic. The tongue will grow, gain more strength and mass as it is used after birth, but a restricted tongue will not spontaneously resolve in the important time period for nursing. Each frenulum will have varied lengths to it and a longer frenulum can allow a tongue to partially function, but a short and thicker frenulum, especially the submucosal variety, can have a detrimental effect on tongue mobility and function.
Often many questions arise about what exactly is a tongue tie or lip tie is and why it occurs, how it impacts feeding in the infant and growth and development of a person over the course of their life and how can it be resolved. A major goal of this website is to help educate and allow individuals to more fully understand the implications of proper development from a very early age (as a newborn) and how proper development, growth guidance and usage of the tongue and other orofacial muscles can have a lifelong positive impact on a person. On the contrary, the non-ideal positioning and usage of the tongue and other orofacial muscles can negatively impact growth and development of the head and neck and impact the entire body.
The maxillary frenulum or lip tie rarely is a cause for nursing difficulties ALONE. The lip tie is very easy to see and diagnose, but that does not mean it is the causative factor for the nursing discomfort or problems. The tongue must groove, extend out and draw the nipple into the mouth. Once in the mouth the tongue must elevate to form the bottom half of a tube and the roof of the mouth or palate from the top of the tube. When this tube is formed a closed system is in place and the tongue should make a wave motion to propel milk out of the elongated nipple. If the infant is unable to elevate the tongue, they will close the mouth down so the tongue comes in closer proximity to the roof of the mouth. With the closing of the mouth, the initial wide gape or opening will close and the baby will slide down the areola and towards the end of the nipple. This is ineffective in milk transfer, painful for the mother and will cause the mouth to purse down to maintain an external seal. This external seal is secondary to the internal or primary seal made by the tongue. As the depth of the latch deteriorates and becomes shallower the infant will tighten the facial muscles to hold a seal and these muscle will contract. This lip contraction on the areola and nipple can lead to very sore and irritated nipples and sucking blisters and callous on the babies lips. These are not necessarily caused by a lip tie, but they are a result of a poor latch from compromised tongue function. The lip should have the ability to flange or roll back close to the nares or nostril openings with minimal resistance. The lip DOES NOT need to flange the entire way back to the tip of the nose, because the infant must be able to breathe during breastfeeding.
The above images show the normal progression of an infant drawing the nipple into the mouth and elongating the nipple. It also shows the wave motion that the tongue needs to produce to actively express milk from the mother. The tongue needs to elevate toward the roof of the mouth while the infant maintains a wide open gape or mouth opening. If the tongue is unable to elevate AND the mouth stay open wide, the infant will close the mouth to bring the tongue closer to the roof of the mouth. This will lead to a shallow latch, nipple compression, pain and increasing frustrations during nursing.
Stretching of the tongue revision site:
Refer to the picture of the tongue below to help outline how to do the stretches.
Gently work your finger into the mouth and under the tongue. Push into the center of the diamond until you feel some resistance or give to the tissue. Hold that depth with the finger and that move the finger from the left (Letter A) to the right (Letter B) and back in a pendulum motion. Complete 4 full pendulum swings under the tongue and over the wound site, while contacting the area under the tongue. So to review: under the tongue, A to B then B to A (repeat 3 more times). Do not remove the finger from under the tongue, but instead push the finger back and up towards letter C and hold the tongue in place for 5 seconds. This will complete the tongue stretching.
What is a tongue tie?
The tongue is an extremely important, complex, and still not fully understood muscle that is the first part of the gastrointestinal system. It plays a major part in feeding, oral hygiene, speech, and craniofacial growth and development. The tongue is made up of 8 muscles that each function in a unique manner, and collectively act together as one unit. Under the tongue, a piece of tissue exists in virtually all humans, and is referred to as a frenulum. This piece of tissue is a remnant from the embryologic development of the tongue, and is normal for all individuals to have some type of frenulum present.
Tongue frenulums can be broken down into two categories: Functional, and Dysfunctional.
A functional lingual frenulum will allow for proper movement and range of motion of the tongue. It will not restrict or negatively impact surrounding structures and may or may not be visually evident. If a tongue frenulum is seen, that does not mean it is necessarily a “tie” or restricting the tongue’s motion or range. On the other hand, a tongue that is able to extend out, does not necessarily mean it is functional.
Dysfunctional Frenulums or “Ties”:
Many times, the symptoms being experienced by the patient, and the actual feel of the frenulum are key in helping determine if the tongue frenulum is truly tied, and impacting function. More anterior ties, or that attach closer to the tip of the tongue are easy to visually diagnose, but more posterior or submucosal frenulums are not always visually evident.
Some individuals will have a visually evident tongue tie or restriction but may not presently experience any symptoms or problems. These cases are still important to assess and address due to long-term issues that may impact the patient, which are discussed later. A complete assessment by a well-trained lactation consultant or medical professional should be conducted to ensure compensatory mechanisms are not masking underlying problems and putting the patient at risk for future problems.
These pictures show a variety of lingual frenulums or tongue ties in children. Notice the varied thickness, degree of webbing, connection points with the tongue and into lower jaw and restriction to tongue elevation.
Won’t the tie just stretch with time?
The frenulum is made up of fibrous tissue (Type 1 Collagen) that is equivalent to a rope. This tissue will stretch only about 3% and it is NOT a rubber band or elastic. The tongue will grow, gain more strength and mass as it is used after birth, but a restricted tongue will not spontaneously resolve in the important time period for nursing. Each frenulum will have varied lengths to it, and a longer frenulum can allow a tongue to partially function, but a short and thicker frenulum, especially the submucosal variety, can have a detrimental effect on tongue mobility and function which can affect speech, growth and development, and breathing.
How does the procedure work and how long will it take?
After a thorough review of the patient’s health history, the patient will be examined. After a full evaluation and discussing treatment options, a signed consent is obtained from the parent and the procedure can proceed. The patient is taken into a treatment room with the doctor, and an assistant. The parents will wait in the exam room, and review post-operative guidelines and instructions that they are given at the appointment. The patient, doctor, and assistant will be given protective eyewear. Once all eyewear is in place, the laser is turned on. The patient’s tongue is gently elevated with a small surgical tool called a groove tongue director. It allows for the tongue to be safely elevated, and isolates the frenulum under the tongue. The type of laser we use is a Diode laser and is able to precisely and quickly release the excessive or restrictive tissue under the tongue. The entire procedure to release the tongue is completed very quickly. Post procedural bleeding is typically very rare, but may occur, and easily managed with light pressure. | <urn:uuid:a8e8966f-dd2c-4316-a4fa-89b030f9cf92> | CC-MAIN-2022-33 | https://www.fantaseapediatricdental.com/lip-and-tongue-tie | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573197.34/warc/CC-MAIN-20220818124424-20220818154424-00696.warc.gz | en | 0.93143 | 5,346 | 2.640625 | 3 |
Whenever we talk about oral health, usually, our primary focus is teeth. In this short article, we have decided to put something else to the spotlight – The gums! For the people who have never really thought about it, gums are made from soft tissue that is protected by a coating of the oral mucous.
Often neglected; however, gums play an essential role in keeping our mouth healthy. The mucous membrane (protective layer) can stop harmful bacteria from sneaking into and leading to gum problems.
If you’re one of those individuals who believe that the dental consultants are telling you to brush twice a day and start flossing at least once per day, it’s more of a recommendation than a strict guideline; the tooth might be in danger of falling into the dangerous territory of gum recession.
What Is Recession??
However, if the gums are not looked after appropriately, they’re vulnerable to gum recession. This may cause the teeth to look longer and ugly, just like spikes.
In addition to aesthetic issues, your dental health will be affected due to exposed gum roots due to receding gums. It is a lot easier for bacteria to get inside the mouth and cause dental cavities and microbe infections. Some other signs or symptoms of gum recession are sensitive, irritated and bleeding gums.
What Can Cause Gum Recession?
Dr. Bakuri Explains Gingival Recession, “Gum Receding is the exposure of the teeth roots that are due to the withdrawal of gum tissues.”
Gingival recession is mostly caused by:
- Family Genes
- Tooth that is out of alignment
- Tooth grinding
- Injury on the teeth
- Too Aggressive flossing or brushing
- Severe gum disease
The Problems With Gum Recession
As the exposed roots, the gums are a lot more vulnerable and sensitive to cold, hot and even sweets. A lot more revealed roots weaken the tooth. First of all, let’s check what’s the real cause of receding gums.
Unhealthy Microbes Because Of Inappropriate Oral Hygiene Routine
Before treating the gingival recession, we will have to win a war against all of them. All these harmful bacteria start to harm the soft gum tissues, which are sturdily connected with the teeth bone. To put it briefly, gum disease caused by bad bacteria is among the most common causes of shrinking gums. And a receding gum line due to some gum diseases can easily be reversed using a solution given in this post.
“Don’t forget that the bad bacteria create lots of gum issues and various other health issues, and according to the ADHC, these types of harmful bacteria are a life-threat.”
Many of us brush at least two times a day; however, 83% of the United States people have gum diseases. This simply means we will need to adjust the way you thought about toothpaste. To fight against the continuous bacterial onslaught and to win fighting against these harmful microbes resulting gum infections, you should think about both these steps.
The First Step
The majority of us have been conditioned to believe that STORE BRAND mouthwash and toothpaste are the best they can use for gum problems. However, maybe you’ve ever read the cautions written on your favourite tube?
>> The Warnings
“Do You Want To Put A Thing Such As This Inside Your Mouth Having A Warning From Poison Control Centre?”
Here Are Some Information About Fluoride;
Fluoride is disallowed in different countries like Austria, Germany, France, and China. It’s not confirmed yet if it will help in stopping the cavities. Fluoride damages the connective tissue of gums; this shows that it may detach the connective tissue of gums from the tooth bone – Big problem if you are looking to recover your gum line.
Fluoride can be associated with bone tissue cancer, early aging, cancerous cell progress, infertility, brain malocclusions, and a lot more. Ingredients in-store brand tooth paste generally are sodium monododecyl sulphate and triclosan.
Why Commercial Mouth Rinse And Tooth Paste aren’t The Perfect Choice For Already Shrinking Gums?
Alcohol-based oral rinses are effective in removing damaging bacteria, but the problem is – it may create a dry mouth condition that accelerates the growth of harmful microbes.
We Need Precisely The Contrary, We Need
We require saliva to win the battle against bad bacteria; commercial mouth rinse isn’t the solution for all gum diseases.
Use Nature’s Smile™ gum balm; it can remove the destructive bacteria 24 hours a day, everyday. The ingredients naturally have anti-fungal, anti-bacterial and anti-inflammatory attributes. It’s made from all-natural ingredients with healing properties to start improving the gum recuperation as soon as you start using it. This product is made of all-natural and organic ingredients which have healing properties to start gum growth once you start using it.
There is nothing like Nature’s Smile™ available in the market- it is simple to use, natural, and safe and creates a fresh sensation inside your mouth.
Therefore, Stop Using Chemical Based Products!
Why To Use Nature’s Smile™ Product?
The dental practitioner suggests Nature’s Smile™ as a good treatment method, that’s why the patients around the globe love it. This liquid miracle is an effective decoration of natural and herbal extracts combined to reverse shrinking gums. Whenever you use Nature’s Smile™, its ingredients begin to assault the microorganisms, which harm the gum line. Nature’s Smile™ offers 2 important advantages;
1) A remedy of shrinking gums,
2) An excellent refreshing flavor to eliminate halitosis.
Nature’s Smile™ has the ability to destroy the 22 types of infections that cause 99% of Gum related issues. Benefits, benefits and benefits;
This basically means the herbal extracts absorb deep into the gum tissues, and they are not easily washed away. The Lipid based ingredients give advanced protection to the gums and teeth against harmful bacteria and bad breath.
The components in this special herbal extract are approved by a number of scientific researches to have effective anti-bacterial and fungal properties. The selected active ingredients combined with anti-oxidants, emollients and vitamins will show you the results that you want to see.
It just takes a few min’s. Put one or two drops on the brush and brush for 2 min’s just like you do daily with regular tooth-paste and get rid of gum recession.
Saves You Lots Of Money On Unnecessary Gum Procedures
Nature’s Smile™ is so effective you can save a massive amount of your hard-earned money every year on unnecessary surgeries, deep cleaning as well as gum treatments.
The extracts in Nature’s Smile™ not just scientifically proved to destroy the unwanted organisms, but eliminate the specific pathogens that create teeth and breathe issues.
It’s Completely Natural
You will no more need to place harsh and cancer-causing chemical substances in your mouth (most of which can make the issue even worse. No, need to worry about putting harsh chemicals found in commercially made tooth-paste and oral rinse.
100% all-natural so Nature’s Smile™ is completely secure. Rapid and efficient solution for Gums, Teeth and Breathe issues. Nature’s Smile™ consists of an exclusive decoration of 20-25 herbs and plant extractions to fix gum recession, tooth decay and unpleasant mouth odor..
Fast And Efficient
Mostly, results start to appear in the first few days.
In the latest research, while using the ingredients of Nature’s Smile™; all the Periodontopathic bacterial strains have been killed within just Half a minute. 100 % Natural Concentrate. One bottle is enough for one month.
100% Money-Back Guarantee
This product includes 100% money-back guarantee, So buy with full confidence because there is absolutely no financial risk what-so-ever, you can avail, “No question asked Cash Back Guarantee”: if not happy with the effects.
Nature’s Smile™ Manufactured In USA
It is Manufactured fresh every day and delivered “from Nature’s Smile™, a company with above two decades in business and excellent BBB ratings.
Good Customer Service Team
The team at the Nature’s Smile™ is ready to answer any sort of questions throughout the usage. Buy it Today, it will be shipped via airmail.
***Scientific Research And References ***
1) Department of Microbiology, Tokyo dental college: Takahashi n, takarada k, kimizuka r, Kato t, Honma k, Okuda k, A comparison of the antimicrobial efficacies of essential oils against harmful bacteria. Japan. Oral Microbiol dental microbial Immunol. 2004 Feb; 19(1): 61-4
(2) The Uni of Switzerland, Zurich. “oral Microbiol Immunol”… Shapiro s, Meier a, Guggenheim b. The anti-microbial activity of essential oils and essential oil ingredients towards oral microbes. 1994 Aug; 9(4): 202-82. The University of Zurich, Switzerland. “oral Microbiol Immunol”… Guggenheim b, Shapiro s, Meier a. The anti-bacterial activity of essential oils and essential oil components towards dental bacteria. 1994 August; 9(4): 202-8
Viana gs. J Cordeiro ln, Punica granatum from herb pharmacothermenezes sm (pomegranate) draw out is active against oral plaque.. 2006; 6(2): 76-92
(4) J Initial research. Int acad periodontol: sastravaha g, sangtherapitikul p. bouncing p, Adjunctive gum treatment with Punica granatum and Centella Asiatica ingredients. 2003 oct; 5(4): 106-15
(5) J Punica granatum extractions in supporting gum therapy some study by int acad periodontol: Gassmann g, sastravaha g, grim wd, sangtherapitikul p. an adjunctive periodontal remedy with Centella Asiatica and so. 2005
6)J Ethnopharmacol: Varani J, Lansky EP, Aslam MN. Pomegranate extract as a cosmeceutical source: pomegranate extract fractions boost growth and procollagen synthesis and protect against matrix metalloproteinase-1 production throughout human tissue cells. Ethnopharmacol: Lansky EP, Aslam MN, Varani J.: J Ethnopharmacol. 2006 February 20; 103(3): 311-8. J Ethnopharmacol. 2006 February 20; 103(3): 311-8.
7) Higher Education of Texas medical: Walter j. Loesche, the study of dental decay and gum problems, medical microbiology. Fourth version, phase 99; 1996. department at Galveston
(8) “sodium dodecyl sulphate and triclosan have negative effects research study by Babich h and Babich jp. in vitro cytotoxicity scientific research with gingival tissues” (May 16, 1997) 91(3): 189-196
9) helichrysum italicum, j Ethnopharmacol: from conventional apply to scientific data. 2014; 151(1): 54-65. epub The year 2013 Nov14
(10) Sci Rep. myrrh and Frankincense: Irritation via regulation by myrrh and Sci Rep. Frankincense, via regulation of the metabolic profiling and the MAPK signalling pathway 2015 September 2; 5: 13668. DOI: Ten. 1038/srep13668
(11) Fragrance and Flavour Journal 25, 13-19: Manuel Yolanda Ruiz Navajas, Viuda-Martos, Juana Fernández-López y José A, Elena Sánchez Zapata. Pérez-Álvarez. “Antioxidant activity of essential oils of four to five spice plants popular in a Mediterranean sea diet”. enero “Antioxidant activity of important oils of five spice plants widely used on a Mediterranean diet”. Fragrance and Flavour Journal 25, 13-19 – February 2010.: Pérez-Álvarez, Yolanda Ruiz Navajas, Manuel Viuda-Martos, Juana Fernández-López y José A, Elena Sánchez Zapata.
Receding gums are a common problem. As the decorative issue may be to just tackle the discomfort associated with bad breath, for the significant sufferer with this problem the root cause is a whole lot more serious and needs to be addressed immediately to avoid a gum illness from turning out to be a substantially more severe medical problem.
Gingivitis, or the common condition, affects approximately 3-5 million Americans and is among the significant health problems affecting adults now. In some instances, Receding gums are only a symptom of another disease. Other times the cause is very different. Read more about Can You Fix Receding Gums?
If you guess your receding gums have a more profound underlying cause, simply take your case into your medical care provider. If you’re taking oral medications your doctor has prescribed, then your dentist should have the ability to recommend alternative medicines which are more appropriate for your problem.
How To Fix Receding Gums At Home?
Treatment for this problem could be difficult because the illness moves across your mouth’s oral cavity, that isn’t always easy to access by simple brushing and flossing. If there is an obvious underlying illness such as diabetes, an ailment such as plaque buildup can certainly exacerbate the problem, and there’s a increased risk of developing gingivitis in people for this condition.
Symptoms can also be confused with an oral tumor. Even though cyst removal can be a common surgical procedure, receding gums usually do not need the procedure because they won’t cause any pain, and some patients have been embarrassed by their situation.
It’s also important to note that numerous symptoms may frequently be caused by different conditions that are similar to what is causing the receding gums. Chronic hepatitis, for instance, could wreak havoc around the teeth and gum , and also a swelling outline could be another symptom of this illness. Knowing this will help you figure out the actual reason for your symptoms, and will allow one to start treatment for your condition right away in case you believe that it may be sinusitis associated with.
How To Fix Receding Gums Without Surgery?
Some instances of chronic stress or anxiety could also lead to gum diseases. Many individuals who suffer from chronic pain and neurological damage due to surgery or alternative health treatments, or mental conditions, may also suffer from this condition.
As these are so predominant in our society, so it is not surprising that over 3 million people in the U.S. suffer with gum disease, and there are many methods to take care of it. It’s very important to know what you are working with so that you can get the treatment you will need to avoid serious complications.
When there are many treatment options designed for the receding gums, no one treatment may cure the illness altogether. Most of the time, once the condition has been treated, the disease will probably resolve it self. There is a higher risk of gum disease in adults over 60 years old, and though most of those cases suffer from deficiency of regular oral hygiene, in the event the problem has persisted for a time period, it could turn into a far more significant problem.
There are additional reasons why gum disease might grow, such as a personal accident to the gum or even an overgrowth of bacteria. Gum disease symptoms are not always consistent, and a individual can experience symptoms at the early hours, before eating, and suddenly in the afternoon.
How To Fix Receding Gums?
Some people are able to grab the condition and cure it immediately, but the others might never have the ability to find the condition, much less cure it. In any instance, choosing the proper treatment for receding gums would be crucial to your health. Even though this illness might just change you for a brief time, it’s important to see that when left untreated, it can become much more serious.
Once you learn you’ve got gum disease, ask your physician as soon as possible.
People suffering from Receding Gums do not need to be suffering. And they certainly won’t need to go through years of pain and discomfort. You can find dental hygiene products available today that can help.
Receding gums may cause more pain. The treatment for these states is to make sure you get a fantastic diet plan and wash teeth. The ideal types of dental treatments can also be very helpful.
Only a while spent and it really is all well and good, but it will be a day of advantages.
Natural Ways To Fix Receding Gums
You also need to make sure you’re employing a sterile treatment. Whenever you’re infected with plaque when there is just a buildup of tartar you will observe that the yellowish and white tartar which insures the teeth becomes noticeable. Your dentist will let you know just what has to be done to take it off and keep it from reoccurring.
The dentist should be able to supply you with the very ideal treatment that will stop the infection and relieve the sensitivity.
For those who have a severe periodontal disease, then there is a larger prospect of you getting infections or suffering from tooth loss. The tooth loss might be due to trauma to your mouth. You may be experiencing hard times in regards to brushing your teeth.You should have the ability to find some helpful tips here on how best to enhance the state of your teeth.
Many people can attest to the fact that no one would like to visit your dentist’s clinic. However, it is a necessary measure in any oral hygiene treatments. Receding gums treatment does not need to become painful. It is possible to get yourself a excellent treatment in your home. You do not need to invest hours at your dentist’s clinic.
Ways To Fix Receding Gums
You do not have to fret about what the next receding gums treatment will be. That you do not need to see the dentist after every treatment. It’s very simple to maintain your treatment routine. You don’t have to be concerned about the simple fact your teeth might have affected with gum disease. You will not need to think about owning a pit.
Your dentist will be able to help you with a receding gums treatment. You only need to know what you are dealing with and exactly what steps you will need to take to continue to keep the problem from happening again.
You could not see a difference on your gums from 1 day to the next. When thinking about the way to grow back receding gums, it’s essential to first find out what’s causing your own gums to recede. You may possibly have noticed your gums might also bleed profusely when brushing or flossing. You might also observe some pain or your gums are notably tender. Besides a mount, receding gums can likewise be revived by way of a periodontist. The most ideal way to stop from receding gums is by means of good oral hygiene habits, especially brushing 2 times every day, flossing at least one time each time, and having a professional cleaning and check up twice each year.
How Fix Receding Gums?
Because the gums play such an essential role in your mouth, ensuring that they’re healthy is an essential portion of maintaining overall oral health. Eventually, receding gums could also be treated with an flap surgery. They may frequently be treated with a duvet.
When you’re pregnant, then the gums are more at risk of bacteria. The gums are also described as the gingivae. Receding gums are often rather bemused. They’re not something you need to discount, and they won’t disappear on their own. They’re an indicator of poor dental health and fitness and when due care is not given, it might also result in tooth loss. They have been typical and usually unnoticed from an early stage. What’s more, it is going to help alleviate inflamed gums and eliminates some bacteria that may lead to periodontal difficulties.
If it, also it can keep you from being able to reseat it upon your own tooth properly. Fixing your teeth may also cause grayness. Interestingly , teeth can also trigger sinus infections. Only because someone has a tooth that’s loose does not mean they can eliminate the tooth.
How To Fix Receding Gums From Brushing Too Hard?
Otherwise, if it’s a tooth loosened from blunt-force injury (like being a punch or a auto accident)then maybe you ought to check into taking away the tooth and receiving dentures or perhaps a dental implant to repair your missing tooth challenge. At the event the wounded tooth is only marginally loose, it will most likely tense up by it self. | <urn:uuid:a1f97982-fa2c-4244-84e9-0c7390454c91> | CC-MAIN-2022-33 | https://dietfoodmeals.net/2020/02/25/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572021.17/warc/CC-MAIN-20220814083156-20220814113156-00096.warc.gz | en | 0.922589 | 4,552 | 3.28125 | 3 |
What Does That Mean? — Glossary
The ABC’s of breast cancer — A glossary of the breast cancer terms you have questions about. We’re always updating this list with more decoded terms, so be sure to check back!
If there’s something you want to see on here, reach out to email@example.com
Adjuvant therapy is a secondary method of treatment that is performed after the primary treatment method to lower the risk of the cancer returning. Examples include: chemotherapy and radiation therapy.
Advanced Breast Cancer
Advanced breast cancer is another term used to describe metastatic breast cancer (see definition). It is not a different type of breast cancer. Rather, it is the most advanced stage (stage IV) of breast cancer.
Advanced Care Planning
Advanced care planning is the process of deciding ahead of time the kind of health and personal care you want in the future if you were to reach the point where you can no longer speak for yourself.
Adverse events are also known as side effects, an adverse event is an unexpected or dangerous reaction to a treatment or medication. They are captured in the clinical trial and should be reported as part of the safety profile of the treatment.
Axillary Lymph Node Dissection
Lymph nodes are small glands located in the underarm area. When someone’s lymph nodes are removed it’s called an axillary lymph node dissection. This surgery is usually combined with a mastectomy (see “mastectomy” definition).
Benign is an adjective used to describe a tumour that is non-cancerous.
Also known as a molecular marker, a biomarker can be proteins, gene mutations, gene rearrangements, extra copies of genes, missing genes or other molecules that can affect how cancer cells grow, multiply, die and respond to other compounds in the body. Biomarkers can be found in the blood, tissue and other body fluids and can be used to help identify to which treatment the cancer may respond.
A biopsy is the removal of tissues and/or cells to be examined by a pathologist (see “pathology” definition). This is usually the only way to tell for certain if a lump in your breast is benign or cancerous.
A blinded study is a type of study in which it is not known what type of treatment the participant is given. In a single-blinded study, the patients do not know. In a double-blinded study, both patients and physicians do not know. (An open-label study is the opposite of a blinded study)
There are two BRCA genes: BRCA 1 and BRCA 2. Everyone has them and they protect us from getting breast and ovarian cancers. However, BRCA gene mutations can be inherited, which actually increase a person’s risk of developing these cancers, as well as prostate cancer. Women and those assigned female at birth with BRCA gene mutations have a 40%-85% chance of developing breast cancer.
Breast cancer is cancer (the overproduction of cells) produced in the breast tissue. The majority of people diagnosed with breast cancer have random mutations arising during DNA replication in normal, noncancerous stem cells.
A “breastie” is a term used in the breast cancer community to describe your breast cancer best friend, combining breast cancer + best friend = breastie!
A carcinoma is a type of cancer that begins in skin cells or in the tissue lining organs like the liver or kidneys.
A CDK (cyclin-dependent kinase) inhibitor is any chemical that inhibits the function of CDKs. They are used to treat cancers by preventing the over proliferation of cancer cells. They are also called AT7519M. Right now there are many breakthrough breast cancer CDK inhibitors coming to market.
Chemotherapy (or chemo for short) is a type of cancer treatment that that stops cancer cells from dividing or kills them altogether. Chemo has many forms, like: an injection, a pill, or infusion.
Clinical trials are a way to test new medical advancements (treatment, prevention, screening, diagnosis, etc.) in humans before making them readily available to all. They are an experiment or research study to test a new idea. It could evaluate the safety of a new treatment or how well it works. Also, cancer screening, prevention and diagnosis are all studied in clinical trials.
Complete response refers to the disappearance of all signs of cancer in response to treatment. It may not mean the cancer has been cured. Also knows as complete remission.
DCIS (ductal carcinoma in situ), occurs when abnormal cells are found in the lining of the breast duct, but have not spread to other parts of the breast tissue (non-invasive).
De Novo is a term, often associated with metastatic breast cancer (see definition), to describe a person’s first occurrence of cancer.
EBC (Early Breast Cancer)
EBC is breast cancer that is contained in the breast. It has been detected before it’s spread to the lymph nodes or the armpit.
Event Free Survival (EFS)
Event-free survival (EFS) is a potential “surrogate” endpoint in clinical trials. It refers to the time from randomization in a clinical trial to any occurrence that would end the patient’s participation in the study.
An endpoint is an event or outcome that can be objectively measured in a clinical trial. Examples include survival, improvement in quality of life, relief of symptoms and disappearance of the tumour.
End of Life Care
End of life care is treatment that focuses on improving a patient’s quality life after their illness has become terminal and/or incurable.
Estrogen Receptor Positive
Estrogen Receptor Positive (ER+) is used to describe breast cancer cells that may receive signals from the hormone estrogen to promote their growth.
FEC is a chemotherapy treatment that combines three chemotherapy drugs: 5 fluorouracil, epirubicin, and cyclophosphamide – which help stop cancer cells from growing or kill them altogether.
Genes hold a person’s DNA and are the basic function of hereditary.
Genetic counselling is a series of conversations between a trained health professional and a person who is concerned about their risk of inheriting a disease.
Germline testing (also known as genetic testing), usually occurs after genetic counseling to determine a persons familial risk of cancer.
Genetic testing examines cells or tissue for inherited changes in the genes, chromosomes or proteins that may have harmful, beneficial, neutral or no effect on a person’s health. Genetic tests are designed to detects a single gene mutation (such as BRCA1), while genomic testing looks at all of the genes and how they interact with each other.
Genomics is when a person’s (or organism’s) complete set of DNA, including all of its genes, is studied. This helps to understand how the genes interact with each other and how diseases, including cancer, form and can lead to new ways to diagnose, treat and prevent disease.
Genomic testing examines the complete set of DNA, including all of its genes, how they interact with each other and how it affects a person’s health. Genomic testing can sometimes identify treatments that target the specific type of cancer and prevent normal cells from being harmed.
A germline mutation is a gene change or variant in the body’s cells – egg cells in those assigned female at birth and sperm cells in those assigned male at birth – that are incorporated into all the DNA cells when they are passed from parents to their children at the time of conception. Cancers caused by germline mutations are called inherited or hereditary cancers.
If you’re HER2-positive, your cancer cells make an excess amount of the HER2 protein. Originally made to control a breast cell’s growth, when the HER2 protein doesn’t work properly, breast cells can overproduce. This breast cancer tends to be aggressive, but there have been important breakthroughs in treatment.
High risk refers to the certain factors that increase a person’s risk of developing breast cancer – more so than just the average person. They include: genetic, familial, and personal factors.
Hormone Therapy (HT) uses drugs to block the production of estrogen and other female hormones that promote the growth of certain kinds of cancer cells after surgery.
A program that gives special care to people who are near the end of life and have stopped treatment to cure or control their disease. Hospice offers physical, emotional, social, and spiritual support for patients and their families. The main goal of hospice care is to control pain and other symptoms of illness so patients can be as comfortable and alert as possible. It is usually given at home, but may also be given in a hospice center, hospital, or nursing home.
IDC (Invasive Ductal Carcinoma)
IDC (or invasive ductal carcinoma) is a type of breast cancer that starts in the milk ducts of the breast and has spread to surrounding breast tissues. It is the most common type of breast cancer.
ILC (Invasive Lobular Carcinoma)
ILC (or invasive lobular carcinoma) is a type of breast cancer that starts in the milk-producing lobules of the breast and has spread to surrounding breast tissue. It is the second most common type of breast cancer, next to IDC (invasive ductal carcinoma).
Immunotherapy is a form of cancer treatment the body’s immune system to fight cancer by boosting it with treatments and substances that improve the body’s natural response to illness.
Invasive Breast Cancer
Breast cancer becomes invasive when it has moved from where it originally started to now affect surrounding normal tissues. The most common form is invasive ductal carcinoma (see definition).
Investigational refers to a drug or procedure which has been approved to be studied in human subjects in a clinical trial. It can include a new drug, dose, combination or a way to administer.
Locally advanced is a term to describe cancer that has spread from its original location to surrounding tissue or lymph nodes.
A lumpectomy is a breast cancer surgery that removes the cancer and part of the abnormal surrounding tissue, but not the entire breast.
Lymphedema is a common side effect of cancer treatment condition where the lymph nodes produce excess fluid, causing them to swell.
Malignant is an adjective used to describe a tumour that is cancerous. They can destroy and spread to surrounding tissues.
A mammogram is a form of breast cancer screening where an x-ray is taken of the breast.
A mastectomy is a surgery done to remove part or all of the breast that has cancer.
Menopause occurs when a menstruating person stops having menstrual periods because their ovaries no longer produce hormones. Natural menopause usually occurs around the age of 50. Sometimes, a side effect of cancer treatment for young people is early menopause.
When cancer cells metastasize, they spread to other parts of the body.
Metastatic Breast Cancer
Metastatic breast cancer (MBC) is cancer that’s spread to other parts of the body. When breast cancer is found outside the breast, it is still made up of breast cancer cells and still considered breast cancer.
NED (“No Evidence of Disease”)
NED (or “no evidence of disease”) is a term used when tests show no presence of cancer cells in someone who was previously being treated for cancer. NED has replaced the term remission because it is more accurate.
Neo-adjuvant therapy usually occurs before the primary cancer treatment (ie/ surgery). It includes chemotherapy, radiation therapy, and hormone therapy used to shrink the cancerous tumour.
Chemotherapy treatment can often kill non-cancerous cells in the body in addition to the cancerous ones. Neutropenia is often a side effect that occurs when the body’s white blood cell count is too low because of this.
An observational study is a type of study in which researchers observe certain outcomes in individuals. There is no effort to shape the results (e.g. no treatment is given).
An oncologist is a doctor who specializes in diagnosing and treating cancer with chemotherapy or in some cases immunotherapy.
An oophorectomy is surgery to remove one or both of a woman’s ovaries.
Overall survival (OS) represents the length of time from either the date of diagnosis or the start of treatment for a disease, such as breast cancer, that patients are still alive. In clinical trial data, OS is reported as the participants’ average length of time.
Palliative care given specifically to people suffering from a life-long or life-threatening illness. This kind of care focuses on improving patients’ quality of life (see definition), focusing on all aspects of having an illness: physical, psychological, spiritual, etc.
Partial response refers to a decrease in the tumour size or the extent of cancer in the body. Also known as partial remission.
Pathology is the study of diseases by examining tissues under a microscope.
Peer support connects people living with cancer or people with other illnesses to others who have gone through it. Peer support is not based on psychiatric models and diagnostic criteria.
Phase refers to the stage of the clinical trial. Pre-clinical phase trials test potential treatments before it is tested in humans. Phase 1 trials study the potential and how safe the new treatment is in a small group of people. Phase 2 trials test the treatment in larger groups of people to show the benefit and optimal dose. Phase 3 trials test the new treatment against the current standard of care. Phase 4, also known as real-word evidence, is the data collected once the treatment is approved and patients are using it.
Placebo is a “pretend” treatment that does not contain any active medication. It is given as a control to evaluate a new treatment in a clinical trial. It is often in the form of a tablet, injection or infusion that contains harmless ingredients.
Progesterone is a hormone that plays a role in the menstrual cycle and pregnancy. If the breast cancer is Progesterone receptor-positive (PR+), that means its cells may receive signals from progesterone that promote their growth.
A prognosis is typically given by a doctor. It indicates the likely course that a disease or illness will take, including the chances of it recurring.
Progression-free survival (PFS) represents the length of time during and after the treatment of a disease such as breast cancer that patients live with the disease, but it does not get worse or progress. In clinical trial data, PFS is reported as the participants’ average length of time.
Quality of Life (QOL)
Radiation Therapy uses high-energy X-rays to destroy any cancer cells that may remain in the breast after surgery. This reduces the chance of recurrence.
Randomized Clinical Trial
Randomized clinical trial is a way to fairly compare the effects of different treatments by dividing clinical trial participants by chance. At the time of the trial, it is not known which treatment is best.
Real-world evidence (RWE) is the clinical evidence related to the real-world data collected once the drug or treatment is approved, marketed and used by patients outside of a controlled clinical trial setting. It can assess the potential benefits and risks of a treatment. Real-world data can come from a variety of sources, including electronic medical record, disease or product registries and patient-reported data.
Response rate refers to the percentage of people in a clinical trial whose cancer shrinks or disappears in response to the treatment.
Rethink Young Women’s Network (RYWN) is a safe, private community of young women that have personal experience with breast cancer at any stage. It is a place to get support, have your questions answered, connect with other breasties and engage in meaningful conversations with others who get what you’re going through.
Situ is used to describe a tumour, cancer, etc. which is confined to where it first started. In other words, it hasn’t spread. For example, ductal carcinoma in situ, is a breast cancer found in the milk ducts which has not spread to other tissues or parts of the body.
A somatic mutation is a change in a person’s DNA, most often caused by tobacco use, exposure to ultraviolet light or radiation, viruses, chemicals exposure and aging. They are the most common cause of cancer (but don’t always cause cancer). The mutation can occur in any of the body’s cells excepts the germ cells (egg cells in those assigned female at birth and sperm cells in those assigned male at birth) and therefore are not passed onto children.
Stage is a term used to diagnose how advanced breast cancer is. It’s determined by tumour size, the number of lymph nodes affected, and whether it has spread to other tissues and/or parts of the body.
Statistically significant is a mathematical term that signifies when a difference is greater than what would be expected to happen by chance alone.
The state of being a survivor. Survivorship refers to a community of individuals who are living their best lives post-cancer and the resources and tools they can use to do so.
Tamoxifen is a cancer drug used to treat and/or prevent certain types of breast cancer. It’s often used to treat ductal carcinoma in situ (see definition) or to prevent breast cancer in those who are high risk (see definition) for developing the disease.
The term “thriver” is used in the metastatic breast cancer (MBC) community to differentiate from Survivor. Survivor can imply that cancer is cured and they are no longer living with the disease. Those with MBC live with their cancer but can be thriving.
A tumour is a group of abnormal cells that forms when cells divide and multiply too quickly or don’t die when they should. Tumours can be benign (non-cancerous) or malignant (cancerous).
An ultrasound is a breast cancer screening method that uses sound waves to examine the tissues. It tends to be a better screening method for young women or those assigned female at birth under 40, who’s breast density prevents tissues from showing up properly on a mammogram.
Values (patient values)
Patient values refers to what patients value when it comes to cancer treatment and how they measure quality of life (see definition).
Washout period is a term used to describe the process where a patient in a clinical trial (see definition) is taken off a drug in order to give it time to leave their system.
Xeloda is a cancer treatment often used for stage III colon cancer patients. However, it’s also often used to treat metastatic breast cancer (see definition) in those whose situation hasn’t improved with the use of any other anti-cancer drugs.
The Rethink definition of young refers to people at a specific stage of life when there are demands on family, friends, careers, education and fertility. Most of these people are pre-menopausal. Rethink believes that the advocacy work we do with young people helps to improve access and create change for all.
Zoladex is a hormone therapy drug used to treat estrogen receptor-positive breast cancer (see definition) by halting the production of estrogen in the ovaries. | <urn:uuid:437c331d-f390-42a8-ba4a-94b2f7978172> | CC-MAIN-2022-33 | https://rethinkbreastcancer.com/glossary/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00497.warc.gz | en | 0.943969 | 4,383 | 2.921875 | 3 |
All human beings experience anger. Anger is a normal, natural emotion which helps us recognise that we, or people and things we care about, are being treated badly. It is a hostility which we can feel towards people, but also towards animals and inert objects.
Anger can be an urgent feeling, which arises quickly and feels as though it demands us to act, or a slow burn which constantly affects our thoughts. It is often physically as well as emotionally uncomfortable, as it has physical as well as psychological components.
Anger can be good if it helps you right wrongs, deal with problems and express negative feelings. However, it can also be bad, as it can be harmful both to you and to others, damaging relationships and affecting your ability to succeed as you hope.
The way we manage anger is something learned through life, and is affected by our experiences. However, human beings are constantly capable of learning better strategies to deal with anger, to use anger more positively and to both recognise and avoid, its possible harmful effects.
This leaflet describes some anger management strategies. However, if you feel your anger is, or is at risk of, harming you or others, then consider seeking help through anger management counselling, which will help you understand the source of your anger and to put these, and other, strategies into practice.
What causes anger?
Human emotions are not just caused by circulating levels of hormones like adrenaline. Adrenaline levels are raised in anger because anger causes physical and mental arousal (rather than sexual arousal, although for some people this can sometimes happen too). Adrenaline is the dominant hormone of all kinds of arousal. Known as the fight or flight hormone, it is involved in excitement as well as fear, happiness and desire as well as anger and stress.
We don't react to raised adrenaline levels in the same way every time. Our physical bodies may react in similar ways - with a thudding heart, sweating, fast breathing and so on - but our perception of whether we feel this as anger (or as another emotion) is affected by the thinking, processing and feeling parts of our brains, by our memories, by our moods and by our personalities.
Some of these processes can be consciously changed, some of them are very deeply ingrained, even automatic. They all flavour the way we experience high levels of adrenaline.
Why do some people get more angry than others?
Anger is something we feel at all ages, from early childhood to old age. How we deal with anger depends on how much it overwhelms our normal thinking and planning, on how we have learned to respond, and also on what we choose to do. Sometimes we act before we choose.
What are 'issues with anger'?
People sometimes talk about having 'issues with anger', meaning that either you or others are uncomfortable with or worried about your anger, or that you are seen as being angry more often than is 'normal'.
Issues with anger include:
- Feeling angry a lot of the time.
- Feeling stressed, tired and even physically unwell because of your anger.
- Having a 'short fuse' - reacting with anger quickly or disproportionately to things that distress or challenge you.
- Directing your anger the wrong way - for instance, at the wrong person, or at things rather than people.
- Displaying verbal or physical aggression, which may intimidate others.
- Being unable to express your anger, which can leave you feeling both physically and psychologically unwell. Symptoms like poor sleep, waking early, feeling agitated, nausea or heartburn, and a thudding heart (palpitations) are common.
Why won't my anger go away?
If someone deliberately treats you unfairly it is normal to feel angry. Often this kind of anger dissipates quickly, and you calm down.
Sometimes, however, the trigger for your anger isn't something that just happened, but something more general in your life or circumstances, or a past experience which is still causing you distress. When this is the case, you may seem to become suddenly angry about very small things, but the real cause of your anger is something deeper, and 'slow-burning'.
This kind of lasting anger can be difficult to deal with alone. It usually means you have not been able to resolve or come to terms with the cause of your anger. That might be because you have been treated unjustly, and it may seem that there is nothing that you can do to fix this. When this is the case it makes sense to get help. Counselling and talking therapies can help you understand your anger and the causes of your anger.
What makes anger worse?
Anger can be made more powerful by:
- Things which decrease inhibitions, like alcohol and recreational drugs.
- Things which affect overall mood, like fluctuating hormones, existing stress, anxiety and depression, disappointment and grief.
- Things that affect general well-being, such as tiredness and physical illness.
- Things that stop us expending physical energy, such as being 'trapped' at a desk all day.
- Repeated frustrations, when things keep going wrong.
- Feeling helpless. The urge to change things we cannot change can turn into anger, as can being unable to change things which are unfair.
- Things which make life uncertain, risky or frightening, such as grief, fear, war, domestic violence, post-traumatic stress disorder, relationship breakdown and worries about financial security.
- Anger about previous experiences or harms, which can be 're-ignited' when something happens to remind you.
- Repeated provocation by others or by 'things', such as the car that fails to start or the computer that crashes.
- Depersonalisation of others - when you stop seeing the person you are angry with as another human being. This is seen in 'road rage' when drivers threaten other drivers in a way that they would not if they were not inside a car. It is also seen when people form gangs and start seeing the other gang as less 'human'. This is called tribalism. If we don't see others as human, this can remove the inhibitions which tend to stop us expressing anger towards other humans.
Dealing with anger can be made more challenging by:
- Lack of experience at managing anger.
- Negative experiences regarding anger, particularly childhood experiences.
- Learning in the past that anger is managed with violence.
- Being angry with things we can't change, like bereavement or physical illness.
- Feeling helpless to change things.
- If you have experienced violence before, you may have learned to fight back without thinking.
- Other mental health issues.
- Difficulties in communication.
- Lack of trusted individuals to talk to.
What is anger management for?
Managing anger involves learning techniques to put you back in control of your actions, so that anger does not control you. They include:
- Recognising anger.
- Learning to defuse anger.
- Learning to think before you act.
- Understanding and addressing the causes of your anger.
- Learning to use anger constructively.
This leaflet lists (below) some approaches to tackling these points.
How do I recognise anger?
The first steps in anger management involve recognising the symptoms of anger.
- Feeling enraged. This seems obvious - but you may also experience anger mainly as something else, such as hurt, sadness, or feeling threatened, anxious or afraid.
- Physical symptoms: your heart beats faster and you breathe more quickly. You might notice tension in your shoulders, jaw or neck, or clenching your fists. You may be unable to keep still, and feel an urge to punch or kick something.
- Trembling or shivering.
- A powerful urge to do something, particularly to frighten or intimidate someone, or to make a loud angry or frightening noise.
- Sweating, together with feeling suddenly hot or cold.
Anger is a stressful emotion, and many of the symptoms of anger are also symptoms of stress. Both involve high circulating levels of adrenaline.
How do I defuse anger?
Anger management techniques involve helping you manage and disperse your anger when it takes hold of you and might otherwise make you act rashly or harmfully. There are many techniques.
- Some are aimed at helping you to stop and think before you act.
- Some are aimed at using and therefore dispersing the surge of adrenaline that goes with your anger.
- Some are aimed particularly at young people and children, others work for all ages.
Different techniques will work better for different people.
Counting gives you time to cool down, so you can think more clearly and let your first impulse to react pass. Impulses are urges to act without thinking. Sometimes - for instance if you are a combat soldier - you need to be trained to act without thinking first. If you know your life is at risk then there may not be time to think. However, in civilian life there is usually time to think, and the end result of most things is better if you think before you react.
Take even breaths. Breathe out for longer than you breathe in, and relax as you breathe out. You automatically breathe in more than out when you're feeling angry, and the trick is to breathe out more than in, which will calm you down.
Sometimes, anger can lead to hyperventilation. This is the very opposite of calming breathing - ie when we hyperventilate, we breathe too deeply and too much and, as a result, feel increasingly anxious and unwell. For more information on hyperventilation, see the separate leaflet called Dealing with Breathing Problems.
Attending classes involving learned techniques like yoga and meditation can also help your ability to use breathing techniques to calm you down.
A few moments of quiet time might help you feel better prepared to find solutions. If you are involved in an argument and you feel anger taking over, suggest that you both take five minutes, perhaps have a glass of water or a cup of tea, and then talk.
During the time out, step back from the situation. Is the argument over something trivial or something huge? If you are on two completely opposite sides can you imagine any middle ground you can accept?
Do you want to stay angry with this person. If not, be prepared to tell them that you don't like feeling angry and would like to find a solution if they would too. It doesn't mean you have to give in - you may still have to agree to differ, but without anger.
When you are angry you are full of adrenaline. Physical activity can help disperse this, and will reduce the stress that can cause you to become angry.
If you feel your anger building up, go for a brisk walk or even a run or a swim. Maybe the person with whom you are angry could do the same.
If you are involved in an argument consider taking tame out and using that for a short walk or run.
Generally, increasing your exercise levels on a regular basis will tend to defuse the adrenaline that keeps you angry, and will help you feel less angry in the long term. Some successful athletes say that they took up sport to help channel their anger as teenagers.
How do I express my anger better?
Express your anger calmly
Once you have thought, express your anger in a calm non-confrontational way, ie say clearly and directly what it is that concerns you, without trying to tell others what they must or must not do.
Stick with 'I' statements rather than telling others what they have done wrong or blaming them, as this is then less likely to make THEM react in a way that increases tension and anger in you both.
Be respectful and be specific
For example: 'I am upset that you arrived home late when I was expecting to go out,' instead of 'you're always late and you don't care what I want.'
For example: 'I am tired and feeling overwhelmed by housework. I feel I am doing more than my share.'
Avoid accusations and try not to back the other person into a corner or make them defensive:
- Criticise behaviours not persons, so say 'you didn't tidy up', not 'you're lazy'.
- Try to avoid absolute words like always and never. For example: 'You never help with the housework.' 'You always answer back.'
- Try to avoid telling others what they should or must do.
- Don't say: 'It's not fair.'
- Try to avoid forcing them into saying what you want them to say. Try not to tell them you dislike, don't love, or hate them. If they are angry too, then they won't respond to these statements in the way you feel they should.
- Try not to tell them what you suppose their excuse is. For example: 'I suppose you're going to tell me you're too tired.' Try not to ask questions that are actually an accusation. For example: 'Why are you so lazy?'
- If you feel you need to make a demand, make it a demand that you try to solve the problem together, and set a time frame. For example: 'Later today, when we've both calmed down'.
Focus on trying to find solutions
Anger means that something needs to be resolved. Instead of focusing on what is upsetting, or what is wrong, focus on solutions. If you need to talk about the causes of the problems you can do that later, when nobody is angry.
Suggest that you are both angry and need to talk when you are calm. Take time out. Suggest a cup of tea and talking in ten minutes' time. Going for a walk can help disperse energy and make things less tense.
Don't hold a grudge
Forgiveness helps solve confrontations. So use apologies, if they are merited. You can apologise for losing your temper without apologising for being angry. It's fine to be angry, if the situation deserves it, but losing your temper is probably unhelpful.
If you can forgive someone who angered you, you can both learn from the situation and improve your relationship.
Humour is a fantastic reliever of tension - that's probably why humour exists in the human race. Avoid sarcasm, which can be hurtful, or 'friendly' insults, which may be misinterpreted if the other person is still angry.
How do I stop anger from controlling me?
You can't always prevent anger, but you can prevent it from becoming overwhelming, and stop it from controlling you.
Learning how to control your anger involves learning to manage it when it flares, using the kind of techniques described above. You can also practise relaxation skills, so that they come more easily to you in times of stress.
Practise relaxation skills
Relaxation skills can be learned and you can use them when you feel anger building. They include breathing exercises, playing music, imagining relaxing places, meditation and hypnosis, and simply repeating a calming word to yourself.
In the long term you will be less angry if you:
- Increase physical exercise levels. Consider contacting your local gym and attending a class, or taking up a sport.
- Take up and continue a regular relaxing physical and mental discipline such as yoga, meditation or mindfulness.
- Use relaxation techniques and anger management techniques so much that they become automatic.
- Identify and address the kind of things that make you angry. If these are particular situations, you may be able to avoid them, or recognise them in advance and plan better ways of dealing with them.
- If your anger is linked to use of alcohol, consider seeking support. You can read more about services offering support for difficulties with alcohol in the separate leaflet called Alcohol and Sensible Drinking. Consider a self-assessment tool to reflect on your use of alcohol. A self-assessment test regarding problems with alcohol may be useful.
- Find a better outlet for your anger. This means using your energy some other way, either physically or in using your mental energy to try to change your life.
- If you are angry at an injustice then you might find relief by channelling your energies into finding a way to put things right. Could you find a way of preventing that injustice affecting others?
- If you are angry at something you can't change, like the loss of someone dear, it may help to talk to someone neutral about this, someone to whom you can reveal your real feelings. Grief can make us very angry.
Where can I find help with my anger?
Learning to control anger is a challenge for everyone at times. However, you should seek help for anger issues if your anger seems out of control, causes you to do things you regret or hurts those around you.
The first port of call is usually your GP. They will want to try to find out what is making you angry, if there is an underlying reason for it, and you are able to identify this.
Your GP will want to talk with you to discover why you are angry, but also whether there are other factors contributing to your anger which also need to be addressed in order to help you get better. These other issues include mental health conditions like depression, anxiety and post-traumatic stress disorder. They include understanding whether drugs or alcohol are affecting your reactions.
Counselling and talking therapies can help you in managing your anger. There is limited counselling available on the NHS these days, and there may be a wait for this. There are various types of counselling, including psychotherapy, Gestalt therapy, family therapy. Some therapists offer therapy aimed at helping you manage past experiences such as counselling for survivors of child sexual abuse or sexual violence.
Cognitive behavioural therapy (CBT) is a particular kind of talking therapy which focusses on how your thoughts and attitudes affect your feelings. CBT can be helpful in managing anger. See the separate leaflet called Cognitive Behavioural Therapy (CBT).
Anger management therapy
This is a specific sort of counselling aimed at helping you change the way you react to the situations that make you angry.
Anger management is often done one-to-one or in small groups. It can involve counselling and cognitive behavioural therapy. Some anger management classes are run over one day or a weekend; others involve regular meetings over a month or so. Your GP will know what is available in your area, but you can also contact private therapists for help and advice.
If your anger is always directed at the same person then this suggests that the interaction between you may be generating the anger. Relationship counselling or couples therapy may be helpful in order to help you understand why you are directing your anger at each other, and whether you are in fact not angry with each other but with something else.
If anger in your relationship is making you scared, or you are or have experienced domestic violence, then consider seeking help. Organisations such as Refuge, Women's Aid or the Alternatives to Violence Project may be able to help you.
There are many organisations, such as Childline, Mind, Moodjuice and YoungMinds, who provide advice on managing anger. See Further reading links at the end of this leaflet.
What is anger in teenagers and children like?
Anger in childhood and adolescence can cause difficulties in families, as the anger may be expressed in ways that parents find difficult.
This may include 'acting up' and oppositional behaviour, pushing boundaries and school difficulties. It can also involve withdrawal, isolation and self-harm.
Some young people struggle more than others to manage anger. Parents and families can help young people develop coping strategies.
Finding the cause
If your child or teenager seems angry, sullen or withdrawn, try to find out how they feel. If anger is there it often simmers just beneath the surface, and you may see it expressed if you ask carefully. Try to work out what is making them angry, together.
- If they are not ready to talk to you, give them space but be ready to listen when they are ready. Consider whether there is anyone else they could talk to. Is there another trusted adult in their life? Would they speak to the school counsellor?
- Are they afraid? Does something in their life feel out of control? Young people may become angry because they are afraid. Anger is common in those experiencing bullying, in drug and alcohol use, and where there is peer pressure to do unwanted or frightening things.
- Whilst most children and teenagers who are angry do not have mental health difficulties, for a few - as in adults - anger can be part of serious mental health problems like anxiety, depression, panic attacks and self-harm.
- Consider the possibility of abuse. Most young people who are angry are not experiencing abuse, but anger can be a symptom of abuse in children and young people. Give them space to talk. If they don't want to talk to you, is there anyone else they could talk to? Make sure they are aware of Childline. If you are concerned that there is a possibility that your child has experienced abuse then it is crucial to seek advice.
Make it clear that you have noticed their unhappiness and are ready to help but give them time and space to talk.
Help them to work out ways of channelling their anger. Consider the techniques above used in adults, particularly trying sport, relaxation techniques and creative time.
Consider counselling. This is usually provided through the school counselling service, at least initially, for those of school age.
How do I manage an angry child or teenager?
When your child becomes angry you are likely to feel distressed and rejected.
Try to set your feelings aside and to focus on them, your child, caught up in an emotion they can't handle well. They need your help.
- Respond to the anger, not the child or teenager. Be clear when you react, that it is your child's behaviour, not your child, that you don't like. This may seem obvious to you but it may not be obvious to them.
- Stay calm. Keep your body language relaxed. Don't shout.
- Acknowledge the anger: 'I can see you're really angry.'
- Consider using time out to give them a chance to calm down and then discuss things.
- Don't lecture.
- Don't patronise or tell them they're too young to know anything.
- Be ready to listen, and tell them they can say anything they need to say.
- Having given them that permission, don't take it personally.
- Verbal or physical abuse or violence from your child can be very difficult:
- If you can do so safely, remove yourself from the room.
- If not, and you feel that you or anyone else are at immediate risk of harm, warn the child that if the aggression does not stop you will need to ask the police to come and help you keep everyone safe. Whilst this is a very tough thing to do, it may be needed to keep everyone safe.
- Don't give in to angry demands. Be consistent. Keep your boundaries. If your child is angry with those, it doesn't mean they're wrong. Be ready to listen if they want to make a case for a different boundary, but unless you think they're right, stick to your rules.
Learning difficulties and anger
Children who have difficulties with speech and language, communication problems or other developmental difficulties may have particular difficulties in expressing their anger and may need specialist help. Your GP will be able to advise on this.
Oppositional defiance disorder and conduct disorder
These terms are used for severe behavioural difficulties in children and young people, that may not respond to simple measures above, and that can harm children's prospects in life if they are not addressed.
If your child's angry and difficult behaviour has taken over their life, is severe and persistent, or is leading them into difficulties with the police, ask your GP about a referral to the Child and Adolescent Mental Health Service (CAMHS). Support and treatment should be available for you at home, in school and in the community. | <urn:uuid:7f7252eb-7949-445d-adae-8c9cb45877d0> | CC-MAIN-2022-33 | https://patient.info/mental-health/anger-management | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00096.warc.gz | en | 0.96016 | 5,054 | 3.609375 | 4 |
- Open Access
Kismeth: Analyzer of plant methylation states through bisulfite sequencing
BMC Bioinformatics volume 9, Article number: 371 (2008)
There is great interest in probing the temporal and spatial patterns of cytosine methylation states in genomes of a variety of organisms. It is hoped that this will shed light on the biological roles of DNA methylation in the epigenetic control of gene expression. Bisulfite sequencing refers to the treatment of isolated DNA with sodium bisulfite to convert unmethylated cytosine to uracil, with PCR converting the uracil to thymidine followed by sequencing of the resultant DNA to detect DNA methylation. For the study of DNA methylation, plants provide an excellent model system, since they can tolerate major changes in their DNA methylation patterns and have long been studied for the effects of DNA methylation on transposons and epimutations. However, in contrast to the situation in animals, there aren't many tools that analyze bisulfite data in plants, which can exhibit methylation of cytosines in a variety of sequence contexts (CG, CHG, and CHH).
Kismeth http://katahdin.mssm.edu/kismeth is a web-based tool for bisulfite sequencing analysis. Kismeth was designed to be used with plants, since it considers potential cytosine methylation in any sequence context (CG, CHG, and CHH). It provides a tool for the design of bisulfite primers as well as several tools for the analysis of the bisulfite sequencing results. Kismeth is not limited to data from plants, as it can be used with data from any species.
Kismeth simplifies bisulfite sequencing analysis. It is the only publicly available tool for the design of bisulfite primers for plants, and one of the few tools for the analysis of methylation patterns in plants. It facilitates analysis at both global and local scales, demonstrated in the examples cited in the text, allowing dissection of the genetic pathways involved in DNA methylation. Kismeth can also be used to study methylation states in different tissues and disease cells compared to a reference sequence.
DNA methylation involves the conversion of cytosine to 5-methylcytosine, which results from the action of DNA methyltransferases (DNMTs) . DNA methylation occurs in different sequence contexts in different organisms. In H. sapiens and other mammals, it is believed that DNA methylation occurs mainly in the cytosines of CG dinucleotides .
In plants, DNA methylation is critical for parental imprinting, the regulation of embryogenesis, transposon silencing and for seed viability [3–5]. It has been shown that different pathways are involved in the methylation of cytosines in three different contexts; CG, CHG (C followed by a non-G followed by a G) and CHH (C followed by two non-Gs) . Plants share some of the key elements of the DNA methylation machinery with mammals, but additionally contain plant-specific pathways as well. Plants can tolerate mutations in the DNA methylation pathways, that are embryonic lethal in mammals (e.g. DNMT1), and therefore provide a powerful model system for the study of methylation.
The ability to measure DNA methylation efficiently and accurately is essential for understanding the mechanisms of the processes that lead to DNA methylation. Various techniques have been developed to detect and quantify DNA methylation. Bisulfite sequencing is becoming the gold standard in methylation studies, since it provides both high resolution in sequence and a quantitative measure of DNA methylation at specific loci [6, 7].
Bisulfite sequencing involves bisulfite treatment of single stranded DNA that converts unmethylated cytosines (C's) into uracil while methylated C's remain unconverted. After treatment, the region of interest is PCR amplified, and the PCR product is cloned and sequenced. The PCR amplification of the converted C (to uracil) will result in the replacement of uracil with thymine. By comparing the sequence of the bisulfite-treated DNA with that of untreated DNA, the methylation profile is determined: conversion of a C to T indicates non-methylated C's; in contrast, the absence of C to T conversion indicates protection by the methyl moiety of the C and hence methylation. In a standard bisulfite treatment, thus, several sequencing runs/clones are sampled per sequence. This makes the analysis of the data complex.
Most of the extant web-based tools are designed specifically for mammals, and are, therefore, unable to detect methylation outside the CG context. Currently, the only available tool for the analysis of bisulfite converted DNA in plants is CyMATE . Although this tool provides ample analyses, Kismeth provides additional useful features such as the ability to design primers for PCR amplification of bisulfite-treated DNA, analysis of individual sequenced reads and facilitates the bulk analysis of the many sequences associated with bisulfite-treated methylation detection.
Results and discussion
The C at any particular position may not be completely methylated in any given tissue, which is a measure of the intrinsic variability. In addition, bisulfite treatment can lead to incomplete conversion, which is the extrinsic noise introduced by the act of measurement. Thus, in order to get a measure of the DNA methylation level, a large number of individual clones of PCR products from multiple biological replicates need to be analyzed. Kismeth is one of a few web-based programs that can perform such an analysis, especially for plants.
In this section, we describe Kismeth, then its use in two pilot studies and conclude with a comparison to other tools that can be used to analyze bisulfite sequencing.
We describe here the two tools included in Kismeth, the analyser of bisulfite sequencing data and the designer of bisulfite sequencing primers.
Bisulfite sequencing analyzer
Kismeth requires two fasta-format files, one file containing the reference sequence and the other containing the results of bisulfite sequencing. The reference sequence should be the minimal sequence (not including the PCR primers) between the flanking PCR primers. There are no restrictions on the lengths of sequences other that the limits placed by the sequencing technologies, the software works well for hundreds of sequences, but very large numbers can lead to the website stalling [6, 7].
Both files are uploaded on the front page shown in figure 1. Example files from the pilot study described in the application section are available through a link on the front page of Kismeth. The question mark on the front page provides a manual (or answers to questions) for the tool. Kismeth will perform the analysis and return a synopsis table and graph, shown in figures 2 and 3, summarizing the statistics for the sequence as a whole. The graph shows the fraction of methylation at each cytosine position along the reference sequence, allowing a quick estimate of the rates of methylation in different regions (Figure 3). The data underlying the graph, the methylation states of various kinds of cytosines in the sequence, is also available either for browsing on the web (the View links) or as downloadable comma separated value (csv) files (the download links) which can be imported into spreadsheet programs.
In addition, two kinds of detailed reports, on a sequence-by-sequence basis, are accessible through the Matches and dot plot links on the synopsis page (shown in figure 2). The detailed matches view highlights the various kinds of cytosines in the sequence and the result of the bisulfite treatment (figure 4) allowing the user to study individual alignments to estimate the quality of the sequencing effort that can lead to mismatches (besides the C to T conversions). The dot plot shows only the cytosines as circles, colored according to the type of cytosine (red for CG, blue for CHG and green for CHH), with filled circles representing methylated cytosines and empty circles representing un-methylated cytosines (figure 5). The program parameters, described in the algorithm section, can be changed on the front page of Kismeth. Kismeth also generates postscript files for various figures, which can be downloaded for use in publications.
Bisulfite Primer Design
Kismeth also provides the option to design primers for methylation analysis of a particular region. The link for the primer design program on the front page (Figure 1) leads to the primer design front page (Figure 6). Here, the user can upload a reference sequence file, specify the length of the PCR product and the desired Tm (approximate), and Kismeth will provide a list of optional primers based on their predicted efficiency. The user can also choose to design primers for the reverse complement of the input sequence, and thus interrogate both DNA strands. The results are presented as a table (figure 7).
Application of Kismeth
We used Kismeth to analyze data from two experiments in Arabidopsis thaliana. The first study demonstrates the loss of DNA methylation of an AtMu1 transposon in a mutant background, leading to epigenetic reactivation . The second study is of the global effect of ARGONAUTE-4, which is necessary for CHG DNA methylation in A. thaliana . Our aim in these pilot studies is to demonstrate Kismeth's ability to analyze such data in meaningful ways. The biology relevant to these pilot studies is described more extensively in other publications.
DNA methylation of an AtMu1 transposable element
Our first pilot study was the use of Kismeth to study methylation data for an AtMu1 locus of A. thaliana (At4g08680). This transposon is epigenetically silenced by DNA methylation . A decrease in DNA methylation1 (ddm1) mutant background induces a genome-wide decrease in DNA methylation . The AtMu1 locus shows a decrease of DNA methylation in the ddm1 mutant background .
We generated bisulfite data from wild type A. thaliana plants (Columbia-0, Col-0), and the ddm1 mutant for the AtMu1 5' terminal inverted repeat using PCR primers generated by the primer design tool in Kismeth. As can be seen in Figure 8, changes are apparent in the overall methylation between cytosines of all sequence contexts in the ddm1 mutant.
Using the Matches link we can see that even though some of our clones had one or two mismatches, the overall quality was satisfactory. As can be seen in Figure 9, the dot plot shows that although there is an overall reduction of methylation in the ddm1 mutant, there are two clones in the ddm1 background that show wt-like levels of DNA methylation.
The overall decrease in methylation that is evident in Figure 8 can be the result of either a general reduction in methylation levels in each plant or cell, or a complete loss of methylation in some plants/cells and retention of WT levels in others. Figure 9 clearly shows that the latter is correct.
To asses our background non-conversion level we used At2g20610, a gene that is known to be unmethylated in WT. We have provided the data in the example data sets that can be downloaded from the website, it does show that the effect we are seeing is not an experimental artefact.
Role of Argonaute-4 in DNA methylation
As a second pilot study, to study global effects of the DNA methylation pathway, we prepared genomic DNA from A. thaliana wild type (Landsberg erecta, La-er) and an RNAi mutant ago4-1. Treated DNA was then used for PCR amplification of MEA-ISR, a repetitive element .
In our experiment, La-er and ago4-1 had 35 and 36 clones sequenced, respectively. These two sets of sequences were then analyzed using Kismeth. The analysis generated by Kismeth was in full agreement with what has been shown previously .
In the wild type plant, high levels of methylated Cs in CG, CHG, and CHH contexts were observed (the example datasets called Laer that can be downloaded from the Kismeth website); whereas those in ago4-1 (example dataset available on the Kismeth website, labeled ago4-1) have a decrease in CHG and CHH methylation, with CG methylation unchanged (Figure 10). The dot plots shown in Figure 11 agree with the observations from Figure 10, that there is a reduction in CHG and CHH methylation by comparing the graphs for the two datasets. Thus, Kismeth allows for a quick evaluation of biologically-relevant, global methylation changes.
Advantages of Kismeth
There are several programs designed to analyze bisulfite sequencing data, CyMATE is the web-based program that comes closest to Kismeth in terms of functionality and applicability to plant data. We list a few user-friendly features in which Kismeth differs from CyMATE.
Preparation of Input sequences. Kismeth does not require pre-alignments and the reference sequence must be uploaded separately from the clone sequences. CyMATE takes as input the alignment from ClustalW of the vector-trimmed sequences, with the reference sequence always being the first one.
Interactive use of the browser. Kismeth presents the results on the browser and alerts users to problems, while CyMATE sends the results via email, some errors, such as incorrect data formats, are indicated via the website.
Organization of reports and graphs. Kismeth provides graphical output for various aggregate measures as well as raw data files in the form of downloadable, spreadsheet-compatible files whereas CyMATE provides only the dot plot and leaves some of the tables in log files.
Analysis of individual reads. Kismeth provides a custom viewer (through the Matches function) for the study of alignments of the clones against the reference sequence, finding non C/T mismatches and scoring the quality of each read separately, no such facility exists in CyMATE.
Design of primers. Kismeth is the only tool for bisulfite primer design for plants.
In animals, DNA methylation is involved in various developmental processes and its dysregulation can cause developmental abnormality and diseases including cancer . In plants, it is critical for parental imprinting, the regulation of embryogenesis, transposon silencing and for seed viability . Detection and measurement of DNA methylation has become an essential component for studying the biology of these processes. Kismeth is a convenient tool for processing data from bisulfite sequencing, the most commonly used method to examine DNA methylation. In all cases, appropriate controls that are not methylated need to also be studied to ensure that there are no systematic biases in the experiments.
Though high-throughput techniques are being developed for the detection of DNA methylation, their validation, for the most part, still relies on traditional bisulfite sequencing . Therefore, tools like Kismeth are still essential for the study of DNA methylation.
We describe here the software underlying Kismeth, as well as the algorithms. We also provide details on the experimental methods used in our pilot studies. The use of the tool is described in the Results and Discussion section.
Kismeth Algorithms and software
The central analysis in Kismeth is the alignment of the bisulfite-treated reads against the reference sequence. This requires that C's in the reference sequence be allowed to align against T's in the bisulfite-treated reads, without a penalty. If a standard, BLASTn-type alignment were used, then regions with a large number of unmethylated C's would not align with the reference sequence, since they would be converted to T's under bisulfite treatment.
One possiblity is to use protein alignment programs, with a custom scoring matrix. We used a banded smith-waterman based alignment program, cross_match , by modifying the scoring matrix, so that it allows alignment of C's from the reference sequence against T's from the treated sequences.
The sequenced read, as well as its reverse complement, is aligned against the reference sequence, only one of them will align properly, unless the read is of poor quality. Poor alignments, either in terms of the length of match (lengths less than 50 percent of the reference sequence length), or quality of match (less than 80% positive match in the alignment) are not considered for the analysis. These parameters (called min. fraction of length and min. fraction of positive matches) can be modified on the Kismeth website.
The portion of reference sequence used for analysis can be modified using the start of match and end of match variables. Sequence ends might have poorer sampling, since the quality of the reads at the ends is usually lower than in the middle, thus care must be taken in inferring position-dependent methylation. The program first identifies the various kinds of C's on the reference sequence (CG, CHG, and CHH). The output of cross_match is parsed by the program and a report is generated, that holds a synopsis for each alignment that is accepted, the alignments, as well as the identities of the various C's in the alignment. This is the central report file that is used to generate various reports and graphs.
The first reports are the gross analysis for the three types of C's in the sequence, which is compiled from the central report file and is shown on the results page. This allows a quick appraisal to see if there is any particular bias in the kinds of C's that get methylated.
A special display program, using the central report file, generates a browseable view of the individual matches. This is available through the Matches link on the results page. Various kinds of C's are highlighted using appropriate colors, and the number of mismatches is listed at the top to indicate the overall quality of the match. The start and end of the match on the reference sequence is also shown. This allows access to individual alignments.
Another special display program, again using the central report file, generates a dot plot. The dot plot shows only the C's in the reference sequence using appropriately colored circles. Each row represents a read. Open circles are used to represent unmethylated C's while filled circles represent methylated C's. The central report file is also used to generate data files that are used for the plot shown in the figure. The plot is generated using gnuplot, an open source program http://www.gnuplot.info. We have devised a program that allows zooming into the graph to study details that might not be apparent in the large-scale view of the graph. Separate programs use the same files to generate the table and excel view for the detailed reports on a site-by-site basis.
Bisulfite Primer Design
The melting temperature (Tm) of the primers is calculated using a crude approximation,Tm = 64.9 + 41 * (number of G's and C's - 16.4)/N
where N is the length of the primer.
The primers are designed using bisulfite limitations. The software minimizes the C's in the forward primer and the G's in the reverse primer, and does not allow any C in the five bases at the 3' end of the forward primer (or G's in the five bases at the 3' end of the reverse primer).
The forward primers are ranked by the number of C's, with the highest ranking primer being the one with the least number of C's. In addition, the number of C's has to be less than three and not occur in the five bases at the 3' end. If two primers have the same number of C's then the one with the higher Tm is ranked higher. For the reverse primers, the number of G's are limited and they are ranked in a similar manner to the forward primers. The C's in the forward primers are replaced with Y (C/T) and the G's in the reverse primers are replace with R (A/G). Pairs of primers are then chosen by picking one from each set, such that they lie within the product length ranges entered by the user.
For the pilot study on the AtMu1 transposable element, the QIAGEN EpiTech bisulfite kit was used according to manufacturer directions. The primer sequences were designed using Kismeth; the forward read primer pair is AATTTTATGGAATGAAGTTATATG and TTCTCATACARTRRCTTCAATTT, while for the other strand, the primer pair is ATAYAGTGGYTTYAATTTGGGTT and RAAAAATATTTRAAAATAACAAAATAAT. The amplified sequences were cloned into a vector using the TOPO TA cloning kit from Invitrogen.
For the pilot study on the effect of Argonaute-4 on DNA methylation, the EZ DNA Methylation-Gold kit was used for bisulfite treatment of genomic DNA according to the manufacturer's instructions (ZYMO Research). We used published primer sequences for MEA-ISR, JP1026 AAAGTGGTTGTAGTTTATGAAAGGTTTTAT and JP1027 CTTAAAAAATTTTCAACTCATTTTTTTTAAAAAA . The PCR products were cloned into pGEM-T easy vector (Promega).
The primers for the control gene (At2g20610) used in the AtMu1 study are GTTGYTGATTATATGAAYYGAGATYTT (forward) and TTAATTACAACCATARCCACARTRTTCTC (reverse).
Availability and requirements
Chan SW, Henderson IR, Jacobsen SE: Gardening the genome: DNA methylation in Arabidopsis thaliana. Nat Rev Genet 2005, 5: 351–60. 10.1038/nrg1601
Bird A: DNA methylation patterns and epigenetic memory. Genes Dev 2002, 16(1):6–21. 10.1101/gad.947102
Jullien P, Kinoshita T, Ohad N, Berger F: Maintenance of DNA methylation during the Arabidopsis life cycle is essential for parental imprinting. Plant Cell 2006, 18: 1360–1372. 10.1105/tpc.106.041178
Xiao W, Custard K, Brown RC, Lemmon BE, Harada JJ, Goldberg RB, Fischer RL: DNA methylation is critical for Arabidopsis embryogenesis and seed viability. Plant Cell 2006, 18: 805–814. 10.1105/tpc.105.038836
Miura A, Yonebayashi S, Watanabe K, Toyama T, Shimada H, Kakutani T: Mobilization of transposons by a mutation abolishing full DNA methylation in Arabidopsis. Nature 2001, 411(6834):212–4. 10.1038/35075612
Frommer M, McDonald L, Millar D, Collis C, Watt F, Grigg G, Molloy P, Paul C: A genomic sequencing protocol that yields a positive display of 5-methylcytosine residues in individual DNA strands. Proc Natl Acad Sci USA 1992, 89: 1827–1831. 10.1073/pnas.89.5.1827
Clark S, Harrison J, Paul C, Frommer M: High sensitivity mapping of methylated cytosines. Nucleic Acids Res 1994, 22: 2990–2997. 10.1093/nar/22.15.2990
Hetzl J, Foerster AM, Raidl G, Mittelsten SO: CyMATE: a new tool for methylation analysis of plant genomic DNA after bisulphite sequencing. Plant J 2007, 51(3):526–36. 10.1111/j.1365-313X.2007.03152.x
Singer T, Yordan C, Martienssen RA: Robertson's Mutator transposons in A. thaliana are regulated by the chromatin-remodeling gene Decrease in DNA Methylation (DDM1). Genes Dev 2001, 15(5):591–602. 10.1101/gad.193701
Zilberman D, Cao X, Jacobsen SE: ARGONAUTE4 control of locus-specific siRNA accumulation and DNA and histone methylation. Science 2003, 299: 716–719. 10.1126/science.1079695
Lippman Z, Gendrel AV, Vaughn MBM, Dedhia N, McCombie WR, Lavine K, Mittal V, May B, Kasschau KD, Carrington JC, Doerge RW, Colot V, Martienssen R: Role of transposable elements in heterochromatin and epigenetic control. Nature 2004, 430(6998):471–6. 10.1038/nature02651
Cao X, Jacobsen SE: Locus-specific control of asymmetric and CpNpG methylation by the DRM and CMT3 methyltransferase genes. Proc Natl Acad Sci USA 2002, 99: 16491–16498. 10.1073/pnas.162371599
Jones PA, Baylin SB: The fundamental role of epigenetic events in cancer. Nature Reviews Genetics 2002, 3: 415–428. 10.1038/nrg962
Bender J: DNA methylation and epigenetics. Annu Rev Plant Biol 2004, 55: 41–68. 10.1146/annurev.arplant.55.031903.141641
Taylor KH, Kramer RS, Davis JW, Guo J, Duff DJ, Xu D, Caldwell CW, Shi H: Ultradeep Bisulfite Sequencing Analysis of DNA Methylation Patterns in Multiple Gene Promoters by 454 Sequencing. Cancer Research 2007, 67: 8511–8518. 10.1158/0008-5472.CAN-07-1016
Ewing B, Hillier L, Wendl M, Green P: Basecalling of automated sequencer traces using phred. I. Accuracy assessment. Genome Research 1998, 8: 175–185.
Milos Tanurdzic, Vladimir Grubor, Julius Brennecke and Rob Lucito helped improve the paper in a variety of ways. The anonymous reviewers helped improve the tool and the paper substantially.
YQ suggested the need for a plant-specific tool and helped with the initial design, ago4-1 data, testing and writing. EG suggested the need for a dot plot, helped fixed many problems, improved the tool and the writing substantially. RKS edited the manuscript, suggested the primer design program as well as several improvements to the tool in addition to sequencing the AtMu1 5' TIR in ddm1 and WT. RAM helped with the manuscript and gave the initial impetus for a bisulfite analysis tool. TR designed several aspects of the website. RS designed the tool, created the software and wrote the manuscript. All authors read and approved the final manuscript.
Eyal Gruntman, Yijun Qi, R Keith Slotkin contributed equally to this work.
Authors’ original submitted files for images
Below are the links to the authors’ original submitted files for images.
About this article
Cite this article
Gruntman, E., Qi, Y., Slotkin, R.K. et al. Kismeth: Analyzer of plant methylation states through bisulfite sequencing. BMC Bioinformatics 9, 371 (2008). https://doi.org/10.1186/1471-2105-9-371
- Reference Sequence
- Bisulfite Sequencing
- Front Page | <urn:uuid:0a5c8c5d-ac98-49ad-8c24-286adf5bf781> | CC-MAIN-2022-33 | https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-9-371 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00296.warc.gz | en | 0.907461 | 6,128 | 2.703125 | 3 |
Section I Use of English
Read the following text. Choose the best word(s) for each numbered blank and mark A, B, C or D on the ANSWER SHEET. (10 points)
People are, on the whole, poor at considering background information when making individual decisions. At first glance this might seem like a strength that _1_ the ability to make judgments which are unbiased by _2_ factors. But Dr Uri Simonsohn speculated that an inability to consider the big _3_ was leading decision-makers to be biased by the daily samples of information they were working with. _4_, he theorised that a judge _5_ of appearing too soft _6_ crime might be more likely to send someone to prison _7_ he had already sentenced five or six other defendants only to forced community service on that day.

To _8_ this idea, he turned to the university-admissions process. In theory, the _9_ of an applicant should not depend on the few others _10_ randomly for interview during the same day, but Dr Simonsohn suspected the truth was _11_. He studied the results of 9,323 MBA interviews _12_ by 31 admissions officers. The interviewers had _13_ applicants on a scale of one to five. This scale _14_ numerous factors into consideration. The scores were _15_ used in conjunction with an applicant's score on the Graduate Management Admission Test, or GMAT, a standardised exam which is _16_ out of 800 points, to make a decision on whether to accept him or her.

Dr Simonsohn found if the score of the previous candidate in a daily series of interviewees was 0.75 points or more higher than that of the one _17_ that, then the score for the next applicant would _18_ by an average of 0.075 points. This might sound small, but to _19_ the effects of such a decrease a candidate would need 30 more GMAT points than would otherwise have been _20_.
1. [A] grants [B] submits [C] transmits [D] delivers
2. [A] minor [B] external [C] crucial [D] objective
3. [A] issue [B] vision [C] picture [D] moment
4. [A] Above all [B] On average [C] In principle [D] For example
5. [A] fond [B] fearful [C] capable [D] thoughtless
6. [A] in [B] for [C] to [D] on
7. [A] if [B] until [C] though [D] unless
8. [A] test [B] emphasize [C] share [D] promote
9. [A] decision [B] quality [C] status [D] success
10. [A] found [B] studied [C] chosen [D] identified
11. [A] otherwise [B] defensible [C] replaceable [D] exceptional
12. [A] inspired [B] expressed [C] conducted [D] secured
13. [A] assigned [B] rated [C] matched [D] arranged
14. [A] put [B] got [C] took [D] gave
15. [A] instead [B] then [C] ever [D] rather
16. [A] selected [B] passed [C] marked [D] introduced
17. [A] below [B] after [C] above [D] before
18. [A] jump [B] float [C] fluctuate [D] drop
19. [A] achieve [B] undo [C] maintain [D] disregard
20. [A] necessary [B] possible [C] promising [D] helpful
Section II Reading Comprehension
Read the following four texts. Answer the questions below each text by choosing A, B, C or D. Mark your answers on the ANSWER SHEET. (40 points)
In the 2006 film version of The Devil Wears Prada, Miranda Priestly, played by Meryl Streep, scolds her unattractive assistant for imagining that high fashion doesn't affect her. Priestly explains how the deep blue color of the assistant's sweater descended over the years from fashion shows to department stores and to the bargain bin in which the poor girl doubtless found her garment.

This top-down conception of the fashion business couldn't be more out of date or at odds with the feverish world described in Overdressed, Elizabeth Cline's three-year indictment of "fast fashion". In the last decade or so, advances in technology have allowed mass-market labels such as Zara, H&M, and Uniqlo to react to trends more quickly and anticipate demand more precisely. Quicker turnarounds mean less wasted inventory, more frequent releases, and more profit. These labels encourage style-conscious consumers to see clothes as disposable – meant to last only a wash or two, although they don't advertise that – and to renew their wardrobe every few weeks. By offering on-trend items at dirt-cheap prices, Cline argues, these brands have hijacked fashion cycles, shaking an industry long accustomed to a seasonal pace.

The victims of this revolution, of course, are not limited to designers. For H&M to offer a $5.95 knit miniskirt in all its 2,300-plus stores around the world, it must rely on low-wage overseas labor, order in volumes that strain natural resources, and use massive amounts of harmful chemicals.

Overdressed is the fashion world's answer to consumer-activist bestsellers like Michael Pollan's The Omnivore's Dilemma. "Mass-produced clothing, like fast food, fills a hunger and need, yet is non-durable and wasteful," Cline argues. Americans, she finds, buy roughly 20 billion garments a year – about 64 items per person – and no matter how much they give away, this excess leads to waste.

Towards the end of Overdressed, Cline introduced her ideal, a Brooklyn woman named Sarah Kate Beaumont, who since 2008 has made all of her own clothes – and beautifully. But as Cline is the first to note, it took Beaumont decades to perfect her craft; her example can't be knocked off.

Though several fast-fashion companies have made efforts to curb their impact on labor and the environment – including H&M, with its green Conscious Collection line – Cline believes lasting change can only be effected by the customer. She exhibits the idealism common to many advocates of sustainability, be it in food or in energy. Vanity is a constant; people will only start shopping more sustainably when they can't afford not to.
21. Priestly criticizes her assistant for her [A] insensitivity to fashion. [B] obsession with high fashion. [C] poor bargaining skill. [D] lack of imagination.
22. According to Cline, mass-market labels urge consumers to [A] combat unnecessary waste. [B] shop for their garments more frequently. [C] resist the influence of advertisements. [D] shut out the feverish fashion world.
23. The word "indictment" (Line 3, Para. 2) is closest in meaning to [A] tolerance. [B] indifference. [C] enthusiasm. [D] accusation.
24. Which of the following can be inferred from the last paragraph? [A] Vanity has more often been found in idealists. [B] The fast-fashion industry ignores sustainability. [C] Pricing is vital to environment-friendly purchasing. [D] People are more interested in unaffordable garments.
25. What is the subject of the text? [A] Satire on an extravagant lifestyle. [B] Challenge to a high-fashion myth. [C] Criticism of the fast-fashion industry. [D] Exposure of a mass-market secret.
An old saying has it that half of all advertising budgets are wasted – the trouble is, no one knows which half. In the internet age, at least in theory, this fraction can be much reduced. By watching what people search for, click on and say online, companies can aim "behavioural" ads at those most likely to buy.

In the past couple of weeks a quarrel has illustrated the value to advertisers of such fine-grained information: Should advertisers assume that people are happy to be tracked and sent behavioural ads? Or should they have explicit permission?

In December 2010 America's Federal Trade Commission (FTC) proposed adding a "do not track" (DNT) option to internet browsers, so that users could tell advertisers that they did not want to be followed. Microsoft's Internet Explorer and Apple's Safari both offer DNT; Google's Chrome is due to do so this year. In February the FTC and the Digital Advertising Alliance (DAA) agreed that the industry would get cracking on responding to DNT requests.

On May 31st Microsoft set off the row. It said that Internet Explorer 10, the version due to appear with Windows 8, would have DNT as a default.

Advertisers are horrified. Human nature being what it is, most people stick with default settings. Few switch DNT on now, but if tracking is off it will stay off. Bob Liodice, the chief executive of the Association of National Advertisers, says consumers will be worse off if the industry cannot collect information about their preferences. People will not get fewer ads, he says. "They'll get less meaningful, less targeted ads."

It is not yet clear how advertisers will respond. Getting a DNT signal does not oblige anyone to stop tracking, although some companies have promised to do so. Unable to tell whether someone really objects to behavioural ads or whether they are sticking with Microsoft's default, some may ignore a DNT signal and press on anyway.

Also unclear is why Microsoft has gone it alone. After all, it has an ad business too, which it says will comply with DNT requests, though it is still working out how. If it is trying to upset Google, which relies almost wholly on advertising, it has chosen an indirect method: there is no guarantee that DNT by default will become the norm. DNT does not seem an obviously huge selling point for Windows 8 – though the firm has compared some of its other products favourably with Google's on that count before. Brendon Lynch, Microsoft's chief privacy officer, blogged: "We believe consumers should have more control." Could it really be that simple?
26. It is suggested in Paragraph 1 that "behavioural" ads help advertisers to [A] provide better online services. [B] ease competition among themselves. [C] avoid complaints from consumers. [D] lower their operational costs.
27. "the industry" (Line 5, Para. 3) refers to [A] internet browser developers. [B] digital information analysts. [C] e-commerce conductors. [D] online advertisers.
28. Bob Liodice holds that setting DNT as a default [A] may cut the number of junk ads. [B] fails to affect the ad industry. [C] will not benefit consumers. [D] goes against human nature.
29. Which of the following is true according to Paragraph 6? [A] Advertisers are willing to implement DNT. [B] DNT may not serve its intended purpose. [C] DNT is losing its popularity among consumers. [D] Advertisers are obliged to offer behavioural ads.
30. The author's attitude towards what Brendon Lynch said in his blog is one of [A] indulgence. [B] understanding. [C] appreciation. [D] skepticism.
Up until a few decades ago, our visions of the future were largely – though by no means uniformly – glowingly positive. Science and technology would cure all the ills of humanity, leading to lives of fulfilment and opportunity for all.

Now utopia has grown unfashionable, as we have gained a deeper appreciation of the range of threats facing us, from asteroid strike to epidemic flu and to climate change. You might even be tempted to assume that humanity has little future to look forward to.

But such gloominess is misplaced. The fossil record shows that many species have endured for millions of years – so why shouldn't we? Take a broader look at our species' place in the universe, and it becomes clear that we have an excellent chance of surviving for tens, if not hundreds, of thousands of years. Look up Homo sapiens in the "Red List" of threatened species of the International Union for the Conservation of Nature (IUCN) and you will read: "Listed as Least Concern as the species is very widely distributed, adaptable, currently increasing, and there are no major threats resulting in an overall population decline."

So what does our deep future hold? A growing number of researchers and organisations are now thinking seriously about that question. For example, the Long Now Foundation has as its flagship project a mechanical clock that is designed to still be marking time thousands of years hence.

Perhaps willfully, it may be easier to think about such lengthy timescales than about the more immediate future. The potential evolution of today's technology, and its social consequences, is dazzlingly complicated, and it's perhaps best left to science fiction writers and futurologists to explore the many possibilities we can envisage. That's one reason why we have launched Arc, a new publication dedicated to the near future.

But take a longer view and there is a surprising amount that we can say with considerable assurance. As so often, the past holds the key to the future: we have now identified enough of the long-term patterns shaping the history of the planet, and our species, to make evidence-based forecasts about the situations in which our descendants will find themselves.

This long perspective makes the pessimistic view of our prospects seem more likely to be a passing fad. To be sure, the future is not all rosy. But we are now knowledgeable enough to reduce many of the risks that threatened the existence of earlier humans, and to improve the lot of those to come.
31. Our vision of the future used to be inspired by [A] our desire for lives of fulfillment. [B] our faith in science and technology. [C] our awareness of potential risks. [D] our belief in equal opportunity.
32. The IUCN's "Red List" suggests that human beings are [A] a sustained species. [B] the world's dominant power. [C] a threat to the environment. [D] a misplaced race.
33. Which of the following is true according to Paragraph 5? [A] The interest in science fiction is on the rise. [B] Arc helps limit the scope of futurological studies. [C] Technology offers solutions to social problems. [D] Our immediate future is hard to conceive.
34. To ensure the future of mankind, it is crucial to [A] adopt an optimistic view of the world. [B] draw on our experience from the past. [C] explore our planet's abundant resources. [D] curb our ambition to reshape history.
35. Which of the following would be the best title for the text? [A] The Ever-bright Prospects of Mankind. [B] Science, Technology and Humanity. [C] Evolution of the Human Species. [D] Uncertainty about Our Future.
On a five to three vote, the Supreme Court knocked out much of Arizona's immigration law Monday – a modest policy victory for the Obama Administration. But on the more important matter of the Constitution, the decision was an 8-0 defeat for the Administration's effort to upset the balance of power between the federal government and the states.

In Arizona v. United States, the majority overturned three of the four contested provisions of Arizona's controversial plan to have state and local police enforce federal immigration law. The Constitutional principles that Washington alone has the power to "establish a uniform Rule of Naturalization" and that federal laws precede state laws are noncontroversial. Arizona had attempted to fashion state policies that ran parallel to the existing federal ones.

Justice Anthony Kennedy, joined by Chief Justice John Roberts and the Court's liberals, ruled that the state flew too close to the federal sun. On the overturned provisions the majority held Congress had deliberately "occupied the field" and Arizona had thus intruded on the federal's privileged powers.

However, the Justices said that Arizona police would be allowed to verify the legal status of people who come in contact with law enforcement. That's because Congress has always envisioned joint federal-state immigration enforcement and explicitly encourages state officers to share information and cooperate with federal colleagues.

Two of the three objecting Justices – Samuel Alito and Clarence Thomas – agreed with this Constitutional logic but disagreed about which Arizona rules conflicted with the federal statute. The only major objection came from Justice Antonin Scalia, who offered an even more robust defense of state privileges going back to the Alien and Sedition Acts.

The 8-0 objection to President Obama turns on what Justice Samuel Alito describes in his objection as "a shocking assertion of federal executive power". The White House argued that Arizona's laws conflicted with its enforcement priorities, even if state laws complied with federal statutes to the letter. In effect, the White House claimed that it could invalidate any otherwise legitimate state law that it disagrees with.

Some powers do belong exclusively to the federal government, and control of citizenship and the borders is among them. But if Congress wanted to prevent states from using their own resources to check immigration status, it could. It never did so. The Administration was in essence asserting that because it didn't want to carry out Congress's immigration wishes, no state should be allowed to do so either. Every Justice rightly rejected this remarkable claim.
36. Three provisions of Arizona's plan were overturned because they [A] disturbed the power balance between different states. [B] overstepped the authority of federal immigration law. [C] deprived the federal police of Constitutional powers. [D] contradicted both the federal and state policies.
37. On which of the following did the Justices agree, according to Paragraph 4? [A] Congress's intervention in immigration enforcement. [B] Federal officers' duty to withhold immigrants' information. [C] States' legitimate role in immigration enforcement. [D] States' independence from federal immigration law.
38. It can be inferred from Paragraph 5 that the Alien and Sedition Acts [A] stood in favor of the states. [B] supported the federal statute. [C] undermined the states' interests. [D] violated the Constitution.
39. The White House claims that its power of enforcement [A] is dependent on the states' support. [B] is established by federal statutes. [C] outweighs that held by the states. [D] rarely goes against state laws.
40. What can be learned from the last paragraph? [A] Immigration issues are usually decided by Congress. [B] The Administration is dominant over immigration issues. [C] Justices wanted to strengthen its coordination with Congress. [D] Justices intended to check the power of the Administration.
In the following text, some sentences have been removed. For Questions 41-45, choose the most suitable one from the list A-G to fit into each of the numbered blanks. There are two extra choices, which do not fit in any of the blanks. Mark your answers on ANSWER SHEET. (10 points)
The social sciences are flourishing. As of 2005, there were almost half a million professional social scientists from all fields in the world, working both inside and outside academia. According to the World Social Science Report 2010, the number of social-science students worldwide has swollen by about 11% every year since 2000.

Yet this enormous resource is not contributing enough to today's global challenges, including climate change, security, sustainable development and health. (41) ______________ Humanity has the necessary agro-technological tools to eradicate hunger, from genetically engineered crops to artificial fertilizers. Here, too, the problems are social: the organization and distribution of food, wealth and prosperity.

(42) ______________ This is a shame – the community should be grasping the opportunity to raise its influence in the real world. To paraphrase the great social scientist Joseph Schumpeter: there is no radical innovation without creative destruction.

Today, the social sciences are largely focused on disciplinary problems and internal scholarly debates, rather than on topics with external impact. Analyses reveal that the number of papers including the keywords "environmental change" or "climate change" have increased rapidly since 2004. (43) ______________

When social scientists do tackle practical issues, their scope is often local: Belgium is interested mainly in the effects of poverty on Belgium, for example. And whether the community's work contributes much to an overall accumulation of knowledge is doubtful.

The problem is not necessarily the amount of available funding. (44) ______________ This is an adequate amount so long as it is aimed in the right direction. Social scientists who complain about a lack of funding should not expect more in today's economic climate.
The trick is to direct these funds better. The European Union Framework funding programs have long had a category specifically targeted at social scientists. This year, it was proposed that the system be changed: Horizon 2020, a new program to be enacted in 2014, would not have such a category. This has resulted in protests from social scientists. But the intention is not to neglect social science; rather, the complete opposite. (45) ______________ That should create more collaborative endeavors and help to develop projects aimed directly at solving global problems.
[A] The idea is to force social scientists to integrate their work with other categories, including health and demographic change; food security; marine research and the bio-economy; clean, efficient energy; and inclusive, innovative and secure societies.
[B] The solution is to change the mindset of the academic community, and what it considers to be its main goal. Global challenges and social innovation ought to receive much more attention from scientists, especially the young ones.
[C] It could be that we are evolving two communities of social scientists: one that is discipline-oriented and publishing in highly specialized journals, and one that is problem-oriented and publishing elsewhere, such as policy briefs.
[D] However, the numbers are still small: in 2010, about 1,600 of the 100,000 social-sciences papers published globally included one of these keywords.
[E] These issues all have root causes in human behavior: all require behavioral change and social innovations, as well as technological development. Stemming climate change, for example, is as much about changing consumption patterns and promoting tax acceptance as it is about developing clean energy.
[F] Despite these factors, many social scientists seem reluctant to tackle such problems. And in Europe, some are up in arms over a proposal to drop a specific funding category for social-science research and to integrate it within crosscutting topics of sustainable development.
[G] During the late 1990s, national spending on social sciences and the humanities as a percentage of all research and development funds including government, higher education, non-profit and corporate varied from around 4% to 25%; in most European nations, it is about 15%.
Read the following text carefully and then translate the underlined segments into Chinese. Your translation should be written neatly on the ANSWER SHEET. (10 points)
It is speculated that gardens arise from a basic human need in the individuals who made them: the need for creative expression. There is no doubt that gardens evidence an irrepressible urge to create, express, fashion, and beautify and that self-expression is a basic human urge; (46) yet when one looks at the photographs of the gardens created by the homeless, it strikes one that, for all their diversity of styles, these gardens speak of various other fundamental urges, beyond that of decoration and creative expression.

One of these urges has to do with creating a state of peace in the midst of turbulence, a "still point of the turning world," to borrow a phrase from T. S. Eliot. (47) A sacred place of peace, however crude it may be, is a distinctly human need, as opposed to shelter, which is a distinctly animal need. This distinction is so much so that where the latter is lacking, as it is for these unlikely gardeners, the former becomes all the more urgent. Composure is a state of mind made possible by the structuring of one's relation to one's environment. (48) The gardens of the homeless, which are in effect homeless gardens, introduce form into an urban environment where it either didn't exist or was not discernible as such. In so doing they give composure to a segment of the inarticulate environment in which they take their stand.

Another urge or need that these gardens appear to respond to, or to arise from, is so intrinsic that we are barely ever conscious of its abiding claims on us. When we are deprived of green, of plants, of trees, (49) most of us give in to a demoralization of spirit which we usually blame on some psychological conditions, until one day we find ourselves in a garden and feel the oppression vanish as if by magic. In most of the homeless gardens of New York City the actual cultivation of plants is unfeasible, yet even so the compositions often seem to represent attempts to call forth the spirit of plant and animal life, if only symbolically, through a clumplike arrangement of materials, an introduction of colors, small pools of water, and a frequent presence of petals or leaves as well as of stuffed animals. On display here are various fantasy elements whose reference, at some basic level, seems to be the natural world. (50) It is this implicit or explicit reference to nature that fully justifies the use of the word garden, though in a "liberated" sense, to describe these synthetic constructions. In them we can see biophilia – a yearning for contact with nonhuman life – assuming uncanny representational forms.
Write an e-mail of about 100 words to a foreign teacher in your college, inviting him/her to be a judge for the upcoming English speech contest.
You should include the details you think necessary.
You should write neatly on ANSWER SHEET.
Do not sign your own name at the end of the e-mail. Use “Li Ming” instead.
Do not write the address. (10 points)
Write an essay of 160-200 words based on the following drawing. In your essay, you should
1) describe the drawing briefly,
2) interpret its intended meaning, and
3) give your comments.
You should write neatly on ANSWER SHEET. (20 points)
Section I: Use of English (10 points)
Section II: Reading Comprehension (60 points) Part A (40 points)
Part B (10 points)
Part C (10 points)
46.然而, 当我们看到这样的照片,看到那些无家可归者创造的花园时,感到了 深深的震撼:尽管它们的风格多样,但这些花园道出了其他的根本需求,而非停留在装饰美化或是创造性表达上。
New Town, Edinburgh
UNESCO World Heritage Site
Part of: Old and New Towns of Edinburgh
Inscription: 1995 (19th Session)
The New Town is a central area of Edinburgh, the capital of Scotland. It was built in stages between 1767 and around 1850, and retains much of its original neo-classical and Georgian period architecture. Its best known street is Princes Street, facing Edinburgh Castle and the Old Town across the geological depression of the former Nor Loch. Together with the West End, the New Town was designated a UNESCO World Heritage Site alongside the Old Town in 1995. The area is also famed for the New Town Gardens, a heritage designation since March 2001.
Proposal and planning
The idea of a New Town was first suggested in the late 17th century when the Duke of Albany and York (later King James VII and II), when resident Royal Commissioner at Holyrood Palace, encouraged the idea of having an extended regality to the north of the city and a North Bridge. He gave the city a grant:
That, when they should have occasion to enlarge their city by purchasing ground without the town, or to build bridges or arches for the accomplishing of the same, not only were the proprietors of such lands obliged to part with the same on reasonable terms, but when in possession thereof, they are to be erected into a regality in favour of the citizens.
It is possible that, with such patronage, the New Town may have been built many years earlier than it was but, in 1682, the Duke left the city and became King in 1685, only to lose the throne in 1688.
The decision to construct a New Town was taken by the city fathers, after overcrowding inside the walls of the Old Town reached breaking point and to prevent an exodus of wealthy citizens from the city to London. The Age of Enlightenment had arrived in Edinburgh, and the outdated city fabric did not suit the professional and merchant classes who lived there. Lord Provost George Drummond succeeded in extending the boundary of the Royal Burgh to encompass the fields to the north of the Nor Loch, the heavily polluted body of water which occupied the valley immediately north of the city. A scheme to drain the Loch was put into action, although the process was not fully completed until 1817. Crossing points were built to access the new land; the North Bridge in 1772, and the Earthen Mound, which began as a tip for material excavated during construction of the New Town. The Mound, as it is known today, reached its present proportions in the 1830s.
As the successive stages of the New Town were developed, the rich moved northwards from cramped tenements in narrow closes into grand Georgian homes on wide roads. However, the poor remained in the Old Town.
The First New Town
A design competition was held in January 1766 to find a suitably modern layout for the new suburb. It was won by 26-year-old James Craig, who, following the natural contours of the land, proposed a simple axial grid, with a principal thoroughfare along the ridge linking two garden squares. Two other main roads were located downhill to the north and south with two minor streets between. Several mews off the minor streets provided stable lanes for the large homes. Completing the grid are three north-south cross streets.
Craig's original plan has not survived but it has been suggested that it is indicated on a map published by John Laurie in 1766. This map shows a diagonal layout with a central square reflecting a new era of civic Hanoverian British patriotism by echoing the design of the Union Flag. Both Princes Street and Queen Street are shown as double sided. A simpler revised design reflected the same spirit in the names of its streets and civic spaces.
The intended principal street was named George Street, after the king at the time, George III. Queen Street was to be located to the north, named after his wife, and St. Giles Street to the south, after the city's patron saint. St Andrew Square and St. George's Square were the names chosen to represent the union of Scotland and England. The idea was continued with the smaller Thistle Street (for Scotland's national emblem) between George Street and Queen Street, and Rose Street (for England's emblem) between George Street and Princes Street.
King George rejected the name St. Giles Street, St Giles being the patron saint of lepers and also the name of a slum area or 'rookery' on the edge of the City of London. It was therefore renamed Prince's Street after his eldest son, the Prince of Wales. The name of St. George's Square was changed to Charlotte Square, after the Queen, to avoid confusion with the existing George Square on the South Side of the Old Town. The westernmost blocks of Thistle Street were renamed Hill Street and Young Street, making Thistle Street half the length of Rose Street. The three streets completing the grid, Castle, Frederick and Hanover Streets, were named for the view of the castle, King George's second son Prince Frederick, and the House of Hanover respectively.
Craig's proposals hit further problems when development began. Initially the exposed new site was unpopular, leading to a £20 premium being offered to the first builder on site. This was received by John Young who built Thistle Court, the oldest remaining buildings in the New Town, at the east end of Thistle Street in 1767. Instead of building as a terrace as envisaged, he built a small courtyard. Doubts were overcome soon enough, and further construction started in the east with St. Andrew Square.
Craig had intended that the view along George Street be terminated by two large churches, situated at the outer edge of each square, on axis with George Street. Whilst the western church on Charlotte Square was built, at St Andrew Square the land behind the proposed church site was owned by Sir Lawrence Dundas. He decided to build a town mansion here and commissioned a design from Sir William Chambers. The resulting Palladian mansion, known as Dundas House, was completed in 1774. In 1825 it was acquired by the Royal Bank of Scotland and today is the registered office of the bank. The forecourt of the building, with the equestrian monument to John Hope, 4th Earl of Hopetoun, occupies the proposed church site. St. Andrew's Church had to be built on a site on George Street. The lack of a visual termination at the end of this street was remedied in 1823 with William Burn's monument to Henry Dundas.
The first New Town was mainly completed by 1820, with the completion of Charlotte Square. This was built to a design by Robert Adam, and was the only architecturally unified section of the New Town. Adam also produced a design for St. George's Church, although his design was superseded by that of Robert Reid. The building, now known as West Register House, now houses part of the National Archives of Scotland. The north side of Charlotte Square features Bute House, formerly the official residence of the Secretary of State for Scotland and, since the introduction of devolution in Scotland, the official residence of the First Minister of Scotland.
A few small sections remained undeveloped at the time. In 1885 an unbuilt section of Queen Street (an open garden until that time), north of St Andrew Square, provided the site for the Scottish National Portrait Gallery. To the north-west, north of Charlotte Square, the land was part of the Earl of Moray's estate, and a long-running boundary dispute with the Moray Estate caused delay in development. A section of Glenfinlas Street at the north-west corner of Charlotte Square was not completed until 1990, while the western end of Queen Street, north of Charlotte Square, has never been developed.
The New Town was envisaged as a mainly residential suburb with a number of professional offices of domestic layout. It had few planned retail ground floors; however, it did not take long for the commercial potential of the site to be realised. Shops were soon opened on Princes Street, and during the 19th century the majority of the townhouses on that street were replaced with larger commercial buildings. Occasional piecemeal redevelopment continues to this day, though most of Queen Street and Thistle Street, and large sections of George Street, Hanover, Frederick and Castle Streets, are still lined with their original late 18th century buildings.
Northern, or Second, New Town and extensions
After 1800, the success of the first New Town led to grander schemes. The 'Northern New Town' (now usually called the Second New Town) aimed to extend Edinburgh from the north of Queen Street Gardens towards the Water of Leith, with extensions to the east and west. These developments took place mostly between 1800–1830. Initial designs by William Sibbald followed the original grid orientation of Craig’s First New Town, with entire streets being built as one construction. Building continued on an extended Hanover Street, called Dundas Street and, beyond Great King Street, Pitt Street (later renamed to Dundas Street in the 1960s), almost 1 km north towards the Water of Leith at Canonmills, where Bellevue Crescent would eventually mark the most northern extent of the New Town project. Streets were laid out either side with Great King Street the central avenue terminated by Drummond Place to the east and Royal Circus to the west. Northumberland Street and Cumberland Street were lesser streets to the south and north respectively. Heriot Row and Abercromby Place, both one-sided streets at the southern limit of the development, enjoyed open aspects to Queen Street Gardens. The builder for large sections of the Second New Town was George Winton.
Very large sections of the Second New Town, built from the early 19th century are also still exactly as built. Townhouses generally occupied the east-west streets, with blocks of flats (called tenements in Scotland) along the north-south streets. Shops were originally generally restricted to the lower floors of the wider north-south streets. The larger houses had service mews running behind and parallel to their terraces.
The Picardy Place extension (including Broughton Street, Union Street and East London Street) was mostly finished by 1809. To the west of the original New Town, Shandwick Place, an extension of Princes Street, was started in 1805. Development of Melville Street and the area north of Shandwick Place followed in 1825. The Gayfield Estate (Gayfield Square) extension was designed in 1807 and from around 1813 the New Town gradually replaced and developed the older village of Stockbridge. The painter Henry Raeburn bought the Deanhough estate in the northwest of the New Town and started development in 1813 with Ann Street named after his wife.
In 1822, the Earl of Moray had plans drawn up by James Gillespie Graham to develop his Drumsheugh estate, between Charlotte Square and the Water of Leith. This was popular amongst the Scots nobility and wealthy lawyers. The bulk of the estate was complete by 1835, but many of the corner blocks were not finally added until the 1850s. The estate is now usually called the Moray Estate. It remains one of the city's most affluent areas and one of its most exclusive sets of addresses. Gillespie Graham would continue the westward expansion of the New Town into the estate of Lord Alva, forming the West End Village.
Eastern, or Third, New Town
In order to extend the New Town eastwards, the Lord Provost, Sir John Marjoribanks, succeeded in getting the elegant Regent Bridge built. It was completed in 1819. The bridge spanned a deep ravine with narrow inconvenient streets and made access to Calton Hill much easier and agreeable from Princes Street.
Even before the bridge had been built, Edinburgh Town Council were making preparations for building the Eastern New Town, which would stretch from the slopes of Calton Hill, north to Leith, between Leith Walk and Easter Road. The Lord Provost made an agreement with the main landowners in 1811, some initial surveying was done, and there was a competition for architectural plans for the development, closing on 1 January 1813, the results of which were inconclusive. A number of prominent architects were then asked for their opinions: William Stark, James Gillespie, Robert Burn and his son William Burn, John Paterson and Robert Reid and others.
Stark's observations were particularly valued and he went on to expand them in a "Report to the Lord Provost, Magistrates and Council of Edinburgh on the Plans for Laying out the Grounds for Buildings between Edinburgh and Leith". Stark died on 9 October 1813, and his report was published posthumously in 1814.
The commissioners decided to turn to Stark's pupil William Henry Playfair. He was appointed in February 1818, and produced a plan in April 1819 that closely followed Stark's recommendations. Playfair's designs were intended to create a New Town even more magnificent than Craig's.
Regent Terrace, Carlton Terrace and Royal Terrace on Calton Hill were built, also Hillside Crescent and some adjoining streets, but the development further north in the direction of Leith was never completed. On the south side of Calton Hill various monuments were erected as well as the Royal High School, designed in Greek revival style by Thomas Hamilton.
For the history and development of the West End see: West End, Edinburgh.
A few modest developments in Canonmills were started in the 1820s but none were completed at that time. For several decades the operations of the tannery at Silvermills inhibited development in the immediate vicinity. From the 1830s onward, development slowed but following the completion in 1831 of Thomas Telford’s Dean Bridge, the Dean Estate had some developments built. These included the Dean Orphanage (now the Dean Gallery), Daniel Stewart's College, streets to the Northeast of Queensferry Street (in the 1850s), Buckingham Terrace (in 1860) and Learmonth Terrace (in 1873).
In the 19th century Edinburgh's second railway, the Edinburgh, Leith and Newhaven Railway, built a tunnel under the New Town to link Scotland Street with Canal Street (later absorbed into Waverley Station). After its closure, the tunnel was used to grow mushrooms, and during World War 2 as an air raid shelter.
An attempt to build an elevated walkway along the length of Princes Street involved the planned demolition of the entire street in a radical plan published in the 1960s. The plan was unpopular but before it was abandoned in 1982, seven buildings were removed. The old Boots building at 102 Princes Street, with its series of statues of William Wallace, Robert Burns, Sir Walter Scott and Robert the Bruce, was demolished in 1965. The North British & Mercantile Insurance Company building at number 64 followed. The New Club, designed by William Burn and extended by David Bryce, and the adjacent Life Association of Scotland building by David Rhind and Sir Charles Barry also came down.
Lost streets include those in the St James Square area, demolished in the 1960s to make way for the St James Shopping Centre and offices for the Scottish Office. This mainly tenemental area, reported as having a population of 3,763, was demolished largely on the basis of being slums with only 61 of 1,100 dwellings being considered fit for habitation. Also demolished as slums was most of Jamaica Street at the west end of the Second New Town.
Bellevue House by Robert Adam, which became the Excise or Custom House, was built in 1775, before the New Town extended to the Bellevue area, in what is now Drummond Place Gardens. Great King Street and London Street in the Northern or Second New Town were aligned on this building but it was demolished in the 1840s due to the construction of the Scotland Street railway tunnel below.
The New Town is home to the National Gallery of Scotland and the Royal Scottish Academy Building, both designed by Playfair and located next to each other on The Mound. The Scottish National Portrait Gallery is on Queen Street. Other notable buildings include the Assembly Rooms on George Street, the Balmoral Hotel (formerly called the North British Hotel, after a railway company) with its landmark clock tower above Waverley Station, and the Scott Monument.
The Cockburn Association (Edinburgh Civic Trust) is prominent in campaigning to preserve the architectural integrity of the New Town.
The New Town contains Edinburgh's main shopping streets. Princes Street is home to many chain shops, formerly including Jenners department store, an Edinburgh institution. George Street, once the financial centre, now has numerous modern bars, many occupying former banking halls, while Multrees Walk on St. Andrew Square is home to Harvey Nichols and other designer shops. The St. James Centre, at the east end of the New Town, was an indoor mall completed in 1970. Often considered an unwelcome addition to New Town architecture, it included a large branch of John Lewis. The St. James Centre (excluding John Lewis) closed on Sunday, 16 October 2016 and has been demolished. It was redeveloped and reopened in 2021 as the St James Quarter. Also, by the Waverley Station lies Waverley Market, which contains many high street stores including: Game, Costa, McDonald's, Sainsbury's, KFC, Subway, Superdry and Greggs.
- Banknotes of Scotland (featured on design)
- History of Edinburgh
- List of Category A listed buildings in the New Town, Edinburgh
- World Heritage Sites in Scotland
Sound measurements and their common symbols:
- Sound pressure: p, SPL, LPA
- Particle velocity: v, SVL
- Sound intensity: I, SIL
- Sound power: P, SWL, LWA
- Sound energy density: w
- Sound exposure: E, SEL
The speed of sound is the distance travelled per unit of time by a sound wave as it propagates through an elastic medium. At 20 °C (68 °F), the speed of sound in air is about 343 metres per second (1,125 ft/s; 1,235 km/h; 767 mph; 667 kn), or one kilometre in 2.9 s or one mile in 4.7 s. It depends strongly on temperature as well as the medium through which a sound wave is propagating. At 0 °C (32 °F), the speed of sound in air is about 331 m/s (1,086 ft/s; 1,192 km/h; 740 mph; 643 kn).
The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.
In colloquial speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: typically, sound travels most slowly in gases, faster in liquids, and fastest in solids. For example, while sound travels at 343 m/s in air, it travels at 1,481 m/s in water (almost 4.3 times as fast) and at 5,120 m/s in iron (almost 15 times as fast). In an exceptionally stiff material such as diamond, sound travels at 12,000 metres per second (39,000 ft/s) – about 35 times its speed in air and about the fastest it can travel under normal conditions.
Sound waves in solids are composed of compression waves (just as in gases and liquids), and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves in solids usually travel at different speeds than compression waves, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus and density. The speed of shear waves is determined only by the solid material's shear modulus and density.
In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound (in the same medium) is called the object's Mach number. Objects moving at speeds greater than the speed of sound (Mach 1) are said to be traveling at supersonic speeds.
Sir Isaac Newton's 1687 Principia includes a computation of the speed of sound in air as 979 feet per second (298 m/s). This is too low by about 15%. The discrepancy is due primarily to neglecting the (then unknown) effect of rapidly-fluctuating temperature in a sound wave (in modern terms, sound wave compression and expansion of air is an adiabatic process, not an isothermal process). This error was later rectified by Laplace.
During the 17th century there were several attempts to measure the speed of sound accurately, including attempts by Marin Mersenne in 1630 (1,380 Parisian feet per second), Pierre Gassendi in 1635 (1,473 Parisian feet per second) and Robert Boyle (1,125 Parisian feet per second). In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound, at 1,072 Parisian feet per second. (The Parisian foot was 325 mm. This is longer than the standard "international foot" in common use today, which was officially defined in 1959 as 304.8 mm, making the speed of sound at 20 °C (68 °F) 1,055 Parisian feet per second).
Derham used a telescope from the tower of the church of St. Laurence, Upminster to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half-second pendulum. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. The distance was known by triangulation, and thus the speed that the sound had travelled was calculated.
The transmission of sound can be illustrated by using a model consisting of an array of spherical objects interconnected by springs.
In real material terms, the spheres represent the material's molecules and the springs represent the bonds between them. Sound passes through the system by compressing and expanding the springs, transmitting the acoustic energy to neighboring spheres. This helps transmit the energy in turn to the neighboring sphere's springs (bonds), and so on.
The speed of sound through the model depends on the stiffness/rigidity of the springs, and the mass of the spheres. As long as the spacing of the spheres remains constant, stiffer springs/bonds transmit energy quicker, while larger spheres transmit the energy slower.
In a real material, the stiffness of the springs is known as the "elastic modulus", and the mass corresponds to the material density. All other things being equal (ceteris paribus), sound travels more slowly in spongy materials and faster in stiffer ones. Effects like dispersion and reflection can also be understood using this model.
For instance, sound will travel 1.59 times faster in nickel than in bronze, due to the greater stiffness of nickel at about the same density. Similarly, sound travels about 1.41 times faster in light hydrogen (protium) gas than in heavy hydrogen (deuterium) gas, since deuterium has similar properties but twice the density. At the same time, "compression-type" sound will travel faster in solids than in liquids, and faster in liquids than in gases, because the solids are more difficult to compress than liquids, while liquids, in turn, are more difficult to compress than gases.
Some textbooks mistakenly state that the speed of sound increases with density. This notion is illustrated by presenting data for three materials, such as air, water, and steel; they each have vastly different compressibility, which more than makes up for the density differences. An illustrative example of the two effects is that sound travels only 4.3 times faster in water than air, despite enormous differences in compressibility of the two media. The reason is that the larger density of water, which works to slow sound in water relative to air, nearly makes up for the compressibility differences in the two media.
A practical example can be observed in Edinburgh when the "One o'Clock Gun" is fired at the eastern end of Edinburgh Castle. Standing at the base of the western end of the Castle Rock, the sound of the Gun can be heard through the rock, slightly before it arrives by the air route, partly delayed by the slightly longer route. It is particularly effective if a multi-gun salute such as for "The Queen's Birthday" is being fired.
In a gas or liquid, sound consists of compression waves. In solids, waves propagate as two different types. A longitudinal wave is associated with compression and decompression in the direction of travel, and is the same process in gases and liquids, with an analogous compression-type wave in solids. Only compression waves are supported in gases and liquids. An additional type of wave, the transverse wave, also called a shear wave, occurs only in solids because only solids support elastic deformations. It is due to elastic deformation of the medium perpendicular to the direction of wave travel; the direction of shear-deformation is called the "polarization" of this type of wave. In general, transverse waves occur as a pair of orthogonal polarizations.
These different waves (compression waves and the different polarizations of shear waves) may have different speeds at the same frequency. Therefore, they arrive at an observer at different times, an extreme example being an earthquake, where sharp compression waves arrive first and rocking transverse waves seconds later.
The speed of a compression wave in a fluid is determined by the medium's compressibility and density. In solids, the compression waves are analogous to those in fluids, depending on compressibility and density, but with the additional factor of shear modulus which affects compression waves due to off-axis elastic energies which are able to influence effective tension and relaxation in a compression. The speed of shear waves, which can occur only in solids, is determined simply by the solid material's shear modulus and density.
The speed of sound in mathematical notation is conventionally represented by c, from the Latin celeritas meaning "velocity".
For fluids in general, the speed of sound c is given by the Newton–Laplace equation:
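c = √(K_s/ρ),
where K_s is a coefficient of stiffness, the isentropic bulk modulus (or the modulus of bulk elasticity for gases), and ρ is the density.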
Thus, the speed of sound increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material and decreases with an increase in density. For ideal gases, the bulk modulus K is simply the gas pressure multiplied by the dimensionless adiabatic index, which is about 1.4 for air under normal conditions of pressure and temperature.
For general equations of state, if classical mechanics is used, the speed of sound c can be derived as follows:
Consider the sound wave propagating at speed v through a pipe aligned with the x axis and with a cross-sectional area of A. In time interval dt it moves length dx = v dt. In steady state, the mass flow rate ρvA must be the same at the two ends of the tube; therefore the mass flux ρv is constant and v dρ = −ρ dv. Per Newton's second law, the pressure-gradient force provides the acceleration:

dv/dt = −(1/ρ) · (dP/dx),

so that dP = −ρ v dv = v² dρ, and hence

c² = v² = dP/dρ,

with the derivative taken at constant entropy, since the compressions and expansions in a sound wave are very nearly adiabatic.
If relativistic effects are important, the speed of sound is calculated from the relativistic Euler equations.
In a non-dispersive medium, the speed of sound is independent of sound frequency, so the speeds of energy transport and sound propagation are the same for all frequencies. Air, a mixture of oxygen and nitrogen, constitutes a non-dispersive medium. However, air does contain a small amount of CO2, which is a dispersive medium and introduces dispersion into air at ultrasonic frequencies (> 28 kHz).
In a dispersive medium, the speed of sound is a function of sound frequency, through the dispersion relation. Each frequency component propagates at its own speed, called the phase velocity, while the energy of the disturbance propagates at the group velocity. The same phenomenon occurs with light waves; see optical dispersion for a description.
The speed of sound is variable and depends on the properties of the substance through which the wave is travelling. In solids, the speed of transverse (or shear) waves depends on the shear deformation under shear stress (called the shear modulus), and the density of the medium. Longitudinal (or compression) waves in solids depend on the same two factors with the addition of a dependence on compressibility.
In fluids, only the medium's compressibility and density are the important factors, since fluids do not transmit shear stresses. In heterogeneous fluids, such as a liquid filled with gas bubbles, the density of the liquid and the compressibility of the gas affect the speed of sound in an additive manner, as demonstrated in the hot chocolate effect.
In gases, adiabatic compressibility is directly related to pressure through the heat capacity ratio (adiabatic index), while pressure and density are inversely related to the temperature and molecular weight, thus making only the completely independent properties of temperature and molecular structure important (heat capacity ratio may be determined by temperature and molecular structure, but simple molecular weight is not sufficient to determine it).
Sound propagates faster in low molecular weight gases such as helium than it does in heavier gases such as xenon. For monatomic gases, the speed of sound is about 75% of the mean speed that the atoms move in that gas.
For a given ideal gas the molecular composition is fixed, and thus the speed of sound depends only on its temperature. At a constant temperature, the gas pressure has no effect on the speed of sound, since the density will increase, and since pressure and density (also proportional to pressure) have equal but opposite effects on the speed of sound, and the two contributions cancel out exactly. In a similar way, compression waves in solids depend both on compressibility and density—just as in liquids—but in gases the density contributes to the compressibility in such a way that some part of each attribute factors out, leaving only a dependence on temperature, molecular weight, and heat capacity ratio which can be independently derived from temperature and molecular composition (see derivations below). Thus, for a single given gas (assuming the molecular weight does not change) and over a small temperature range (for which the heat capacity is relatively constant), the speed of sound becomes dependent on only the temperature of the gas.
In non-ideal gas behavior regimen, for which the Van der Waals gas equation would be used, the proportionality is not exact, and there is a slight dependence of sound velocity on the gas pressure.
Humidity has a small but measurable effect on the speed of sound (causing it to increase by about 0.1%–0.6%), because oxygen and nitrogen molecules of the air are replaced by lighter molecules of water. This is a simple mixing effect.
In the Earth's atmosphere, the chief factor affecting the speed of sound is the temperature. For a given ideal gas with constant heat capacity and composition, the speed of sound is dependent solely upon temperature; see § Details below. In such an ideal case, the effects of decreased density and decreased pressure of altitude cancel each other out, save for the residual effect of temperature.
Since temperature (and thus the speed of sound) decreases with increasing altitude up to 11 km, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. The decrease of the speed of sound with height is referred to as a negative sound speed gradient.
However, there are variations in this trend above 11 km. In particular, in the stratosphere above about 20 km, the speed of sound increases with height, due to an increase in temperature from heating within the ozone layer. This produces a positive speed of sound gradient in this region. Still another region of positive gradient occurs at very high altitudes, in the aptly-named thermosphere above 90 km.
For an ideal gas, K (the bulk modulus in equations above, equivalent to C, the coefficient of stiffness in solids) is given by
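K = γ · p,
where γ is the adiabatic index and p is the pressure.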
Thus, from the Newton–Laplace equation above, the speed of sound in an ideal gas is given by
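c = √(γ · p/ρ).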
Using the ideal gas law to replace p with nRT/V, and replacing ρ with nM/V, the equation for an ideal gas becomes
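c_ideal = √(γ · R · T/M),
where R is the molar gas constant (approximately 8.314 J/(mol·K)), T is the absolute temperature, and M is the molar mass of the gas (about 0.028964 kg/mol for dry air).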
This equation applies only when the sound wave is a small perturbation on the ambient condition, and certain other conditions are fulfilled, as noted below. Calculated values for c_air have been found to vary slightly from experimentally determined values.
Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of γ but was otherwise correct.
Numerical substitution of the above values gives the ideal gas approximation of sound velocity for gases, which is accurate at relatively low gas pressures and densities (for air, this includes standard Earth sea-level conditions). Also, for diatomic gases the use of γ = 1.4000 requires that the gas exists in a temperature range high enough that rotational heat capacity is fully excited (i.e., molecular rotation is fully used as a heat energy "partition" or reservoir); but at the same time the temperature must be low enough that molecular vibrational modes contribute no heat capacity (i.e., insignificant heat goes into vibration, as all vibrational quantum modes above the minimum-energy-mode have energies that are too high to be populated by a significant number of molecules at this temperature). For air, these conditions are fulfilled at room temperature, and also temperatures considerably below room temperature (see tables below). See the section on gases in specific heat capacity for a more complete discussion of this phenomenon.
For air, we introduce the shorthand
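R* = R/M_air ≈ 287.05 J/(kg·K) for dry air.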
In addition, we switch to the Celsius temperature θ = T − 273.15 K, which is useful to calculate air speed in the region near 0 °C (273 K). Then, for dry air,
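c_air = √(γ · R* · T) = √(γ · R* · (θ + 273.15 K)).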
Substituting numerical values
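c_air ≈ 331.3 m/s × √(1 + θ/273.15 °C),
using γ = 1.4000 and R* = 287.05 J/(kg·K).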
Finally, Taylor expansion of the remaining square root in θ yields
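c_air ≈ 331.3 m/s × (1 + θ/(2 × 273.15 °C)) ≈ (331.3 + 0.606 · θ/°C) m/s.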
A graph comparing results of the two equations uses the slightly more accurate value of 331.5 m/s (1,088 ft/s) for the speed of sound at 0 °C.
The speed of sound varies with temperature. Since temperature and sound velocity normally decrease with increasing altitude, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. Wind shear of 4 m/(s · km) can produce refraction equal to a typical temperature lapse rate of 7.5 °C/km. Higher values of wind gradient will refract sound downward toward the surface in the downwind direction, eliminating the acoustic shadow on the downwind side. This will increase the audibility of sounds downwind. This downwind refraction effect occurs because there is a wind gradient; the sound is not being carried along by the wind.
For sound propagation, the exponential variation of wind speed with height can be defined as follows:
In the 1862 American Civil War Battle of Iuka, an acoustic shadow, believed to have been enhanced by a northeast wind, kept two divisions of Union soldiers out of the battle, because they could not hear the sounds of battle only 10 km (six miles) downwind.
In the standard atmosphere:
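- At 0 °C (273.15 K) the speed of sound is about 331.3 m/s (1,087 ft/s).
- At 20 °C (293.15 K) it is about 343.2 m/s (1,126 ft/s).
- At 25 °C (298.15 K) it is about 346.1 m/s (1,135 ft/s).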
In fact, assuming an ideal gas, the speed of sound c depends on temperature and composition only, not on the pressure or density (since these change in lockstep for a given temperature and cancel out). Air is almost an ideal gas. The temperature of the air varies with altitude, giving the following variations in the speed of sound using the standard atmosphere—actual conditions may vary.
Given normal atmospheric conditions, the temperature, and thus speed of sound, varies with altitude:
|Altitude|Temperature|m/s|km/h|mph|kn|
|Sea level|15 °C (59 °F)|340|1,225|761|661|
|11,000 m to 20,000 m (cruising altitude of commercial jets, and first supersonic flight)|−57 °C (−70 °F)|295|1,062|660|573|
|29,000 m (flight of X-43A)|−48 °C (−53 °F)|301|1,083|673|585|
The medium in which a sound wave is travelling does not always respond adiabatically, and as a result, the speed of sound can vary with frequency.
The limitations of the concept of speed of sound due to extreme attenuation are also of concern. The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes. The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the sound wave is considerably longer than the mean free path of molecules in a gas.
The molecular composition of the gas contributes both as the mass (M) of the molecules, and their heat capacities, and so both have an influence on speed of sound. In general, at the same molecular mass, monatomic gases have slightly higher speed of sound (over 9% higher) because they have a higher γ (5/3 = 1.66...) than diatomics do (7/5 = 1.4). Thus, at the same molecular mass, the speed of sound of a monatomic gas goes up by a factor of
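√(γ_monatomic/γ_diatomic) = √((5/3)/(7/5)) = √(25/21) ≈ 1.091.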
This gives the 9% difference, and would be a typical ratio for speeds of sound at room temperature in helium vs. deuterium, each with a molecular weight of 4. Sound travels faster in helium than deuterium because adiabatic compression heats helium more since the helium molecules can store heat energy from compression only in translation, but not rotation. Thus helium molecules (monatomic molecules) travel faster in a sound wave and transmit sound faster. (Sound travels at about 70% of the mean molecular speed in gases; the figure is 75% in monatomic gases and 68% in diatomic gases).
Note that in this example we have assumed that temperature is low enough that heat capacities are not influenced by molecular vibration (see heat capacity). However, vibrational modes simply cause gammas which decrease toward 1, since vibration modes in a polyatomic gas give the gas additional ways to store heat which do not affect temperature, and thus do not affect molecular velocity and sound velocity. Thus, the effect of higher temperatures and vibrational heat capacity acts to increase the difference between the speed of sound in monatomic vs. polyatomic molecules, with the speed remaining greater in monatomics.
By far, the most important factor influencing the speed of sound in air is temperature. The speed is proportional to the square root of the absolute temperature, giving an increase of about 0.6 m/s per degree Celsius. For this reason, the pitch of a musical wind instrument increases as its temperature increases.
The speed of sound is raised by humidity. The difference between 0% and 100% humidity is about 1.5 m/s at standard pressure and temperature, but the size of the humidity effect increases dramatically with temperature.
The dependence on frequency and pressure are normally insignificant in practical applications. In dry air, the speed of sound increases by about 0.1 m/s as the frequency rises from 10 Hz to 100 Hz. For audible frequencies above 100 Hz it is relatively constant. Standard values of the speed of sound are quoted in the limit of low frequencies, where the wavelength is large compared to the mean free path.
As shown above, the approximate value 1000/3 = 333.33... m/s is exact a little below 5 °C and is a good approximation for all "usual" outside temperatures (in temperate climates, at least), hence the usual rule of thumb to determine how far lightning has struck: count the seconds from the start of the lightning flash to the start of the corresponding roll of thunder and divide by 3: the result is the distance in kilometers to the nearest point of the lightning bolt.
Main article: Mach number
Mach number, a useful quantity in aerodynamics, is the ratio of air speed to the local speed of sound. At altitude, for reasons explained, Mach number is a function of temperature.
Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature. The assumption is that a particular pressure represents a particular altitude and, therefore, a standard temperature. Aircraft flight instruments need to operate this way because the stagnation pressure sensed by a Pitot tube is dependent on altitude as well as speed.
A range of different methods exist for the measurement of sound in air.
The earliest reasonably accurate estimate of the speed of sound in air was made by William Derham and acknowledged by Isaac Newton. Derham had a telescope at the top of the tower of the Church of St Laurence in Upminster, England. On a calm day, a synchronized pocket watch would be given to an assistant who would fire a shotgun at a pre-determined time from a conspicuous point some miles away, across the countryside. This could be confirmed by telescope. He then measured the interval between seeing gunsmoke and arrival of the sound using a half-second pendulum. The distance from where the gun was fired was found by triangulation, and simple division (distance/time) provided velocity. Lastly, by making many observations, using a range of different distances, the inaccuracy of the half-second pendulum could be averaged out, giving his final estimate of the speed of sound. Modern stopwatches enable this method to be used today over distances as short as 200–400 metres, and not needing something as loud as a shotgun.
The simplest concept is the measurement made using two microphones and a fast recording device such as a digital storage scope. This method uses the following idea.
If a sound source and two microphones are arranged in a straight line, with the sound source at one end, then the following can be measured:
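- the distance between the microphones (x), called the microphone basis;
- the time of arrival between the signals (delay) reaching the different microphones (t).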
Then v = x/t.
In these methods, the time measurement has been replaced by a measurement of the inverse of time (frequency).
Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume. It has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. This is an example of a compact experimental setup.
A tuning fork can be held near the mouth of a long pipe which is dipping into a barrel of water. In this system it is the case that the pipe can be brought to resonance if the length of the air column in the pipe is equal to (1 + 2n)λ/4 where n is an integer. As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe it is best to find two or more points of resonance and then measure half a wavelength between these.
Here it is the case that v = fλ.
The effect of impurities can be significant when making high-precision measurements. Chemical desiccants can be used to dry the air, but will, in turn, contaminate the sample. The air can be dried cryogenically, but this has the effect of removing the carbon dioxide as well; therefore many high-precision measurements are performed with air free of carbon dioxide rather than with natural air. A 2002 review found that a 1963 measurement by Smith and Harlow using a cylindrical resonator gave "the most probable value of the standard speed of sound to date." The experiment was done with air from which the carbon dioxide had been removed, but the result was then corrected for this effect so as to be applicable to real air. The experiments were done at 30 °C but corrected for temperature in order to report them at 0 °C. The result was 331.45 ± 0.01 m/s for dry air at STP, for frequencies from 93 Hz to 1,500 Hz.
In a solid, there is a non-zero stiffness both for volumetric deformations and shear deformations. Hence, it is possible to generate sound waves with different velocities dependent on the deformation mode. Sound waves generating volumetric deformations (compression) and shear deformations (shearing) are called pressure waves (longitudinal waves) and shear waves (transverse waves), respectively. In earthquakes, the corresponding seismic waves are called P-waves (primary waves) and S-waves (secondary waves), respectively. The sound velocities of these two types of waves propagating in a homogeneous 3-dimensional solid are respectively given by
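c_solid,p = √((K + (4/3)·G)/ρ) = √(E·(1 − ν)/(ρ·(1 + ν)·(1 − 2ν)))
c_solid,s = √(G/ρ)
where K is the bulk modulus, G is the shear modulus, ρ is the density, ν is Poisson's ratio and E is Young's modulus.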
The last quantity is not an independent one, as E = 3K(1 − 2ν). Note that the speed of pressure waves depends both on the pressure and shear resistance properties of the material, while the speed of shear waves depends on the shear properties only.
Typically, pressure waves travel faster in materials than do shear waves, and in earthquakes this is the reason that the onset of an earthquake is often preceded by a quick upward-downward shock, before arrival of waves that produce a side-to-side motion. For example, for a typical steel alloy, K = 170 GPa, G = 80 GPa and ρ = 7,700 kg/m3, yielding a compressional speed csolid,p of 6,000 m/s. This is in reasonable agreement with csolid,p measured experimentally at 5,930 m/s for a (possibly different) type of steel. The shear speed csolid,s is estimated at 3,200 m/s using the same numbers.
Speed of sound in semiconductor solids can be very sensitive to the amount of electronic dopant in them.
The speed of sound for pressure waves in stiff materials such as metals is sometimes given for "long rods" of the material in question, in which the speed is easier to measure. In rods whose diameter is shorter than a wavelength, the speed of pure pressure waves may be simplified and is given by:
csolid = √(E/ρ),
where E is Young's modulus. This is similar to the expression for shear waves, save that Young's modulus replaces the shear modulus. This speed of sound for pressure waves in long rods will always be slightly less than the same speed in homogeneous 3-dimensional solids, and the ratio of the speeds in the two different types of objects depends on Poisson's ratio for the material.
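To see how the long-rod speed compares with the bulk value for the same material, Young's modulus can be estimated from K and G with the standard elastic identity E = 9KG/(3K + G) (a textbook relation, not something stated in the text above). A brief sketch:

```python
from math import sqrt

K, G, rho = 170e9, 80e9, 7700.0       # same steel values as above
E = 9 * K * G / (3 * K + G)           # ~207 GPa from the standard elastic identity

c_rod = sqrt(E / rho)                 # long-rod pressure-wave speed
c_bulk = sqrt((K + 4 * G / 3) / rho)  # bulk (3-D) pressure-wave speed

print(round(c_rod))              # ~5191 m/s
print(round(c_bulk))             # ~5994 m/s
print(round(c_rod / c_bulk, 2))  # ~0.87; the ratio is set by Poisson's ratio
```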
In a fluid, the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces).
Hence the speed of sound in a fluid is given by
cfluid = √(K/ρ),
where K is the bulk modulus of the fluid.
In fresh water, sound travels at about 1481 m/s at 20 °C (see the External Links section below for online calculators). Applications of underwater sound can be found in sonar, acoustic communication and acoustical oceanography.
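Plugging rough textbook values for water into cfluid = √(K/ρ) reproduces this figure to within a fraction of a percent; the bulk modulus K ≈ 2.2 GPa and density ρ ≈ 998 kg/m3 used below are assumptions, not values given in the text.

```python
from math import sqrt

# Rough check for fresh water near 20 C, using assumed textbook values.
K_water, rho_water = 2.2e9, 998.0     # bulk modulus (Pa), density (kg/m3)
print(round(sqrt(K_water / rho_water)))  # ~1485 m/s, close to the 1481 m/s quoted above
```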
See also: Sound speed profile
In salt water that is free of air bubbles or suspended sediment, sound travels at about 1500 m/s (1500.235 m/s at 1000 kilopascals, 10 °C and 3% salinity by one method). The speed of sound in seawater depends on pressure (hence depth), temperature (a change of 1 °C ~ 4 m/s), and salinity (a change of 1‰ ~ 1 m/s), and empirical equations have been derived to accurately calculate the speed of sound from these variables. Other factors affecting the speed of sound are minor. Since in most ocean regions temperature decreases with depth, the profile of the speed of sound with depth decreases to a minimum at a depth of several hundred metres. Below the minimum, sound speed increases again, as the effect of increasing pressure overcomes the effect of decreasing temperature. For more information see Dushaw et al.
An empirical equation for the speed of sound in sea water is provided by Mackenzie:
c(T, S, z) = a1 + a2T + a3T² + a4T³ + a5(S − 35) + a6z + a7z² + a8T(S − 35) + a9Tz³,
where T is the temperature in degrees Celsius, S is the salinity in parts per thousand and z is the depth in metres.
The constants a1, a2, ..., a9 are
a1 = 1,448.96, a2 = 4.591, a3 = −5.304 × 10⁻², a4 = 2.374 × 10⁻⁴, a5 = 1.340, a6 = 1.630 × 10⁻², a7 = 1.675 × 10⁻⁷, a8 = −1.025 × 10⁻², a9 = −7.139 × 10⁻¹³,
with check value 1550.744 m/s for T = 25 °C, S = 35 parts per thousand, z = 1,000 m. This equation has a standard error of 0.070 m/s for salinity between 25 and 40 ppt. See Technical Guides – Speed of Sound in Sea-Water for an online calculator.
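The check value is easy to verify numerically. The Python sketch below implements the nine-term Mackenzie form with the coefficients listed above; the function name is ours, and if the transcription is correct it returns 1550.744 m/s for T = 25 °C, S = 35 ppt, z = 1,000 m.

```python
# Mackenzie's nine-term equation for the speed of sound in sea water.
# T: temperature in degrees Celsius, S: salinity in parts per thousand, z: depth in metres.

A = (1448.96, 4.591, -5.304e-2, 2.374e-4, 1.340,
     1.630e-2, 1.675e-7, -1.025e-2, -7.139e-13)

def mackenzie_speed(T, S, z):
    a1, a2, a3, a4, a5, a6, a7, a8, a9 = A
    return (a1 + a2 * T + a3 * T**2 + a4 * T**3 + a5 * (S - 35)
            + a6 * z + a7 * z**2 + a8 * T * (S - 35) + a9 * T * z**3)

print(round(mackenzie_speed(25.0, 35.0, 1000.0), 3))  # 1550.744, matching the check value above
```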
(Note: The Sound Speed vs. Depth graph does not correlate directly to the Mackenzie formula. This is because the temperature and salinity vary at different depths. When T and S are held constant, the formula itself is always increasing with depth.)
Other equations for the speed of sound in sea water are accurate over a wide range of conditions, but are far more complicated, e.g., that by V. A. Del Grosso and the Chen-Millero-Li Equation.
The speed of sound in a plasma for the common case that the electrons are hotter than the ions (but not too much hotter) is given by the formula
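A standard form of this result, stated here as an assumption rather than taken from the original, is the ion-acoustic sound speed, where γ is the adiabatic index, Z the ion charge state, k_B the Boltzmann constant, T_e the electron temperature and m_i the ion mass:

```latex
c_{s} = \sqrt{\frac{\gamma Z k_{B} T_{e}}{m_{i}}}
```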
In contrast to a gas, the pressure and the density are provided by separate species: the pressure by the electrons and the density by the ions. The two are coupled through a fluctuating electric field.
The speed of sound on Mars varies as a function of frequency. Higher frequencies travel faster than lower frequencies. Higher-frequency sound from lasers travels at 250 m/s (820 ft/s), while lower-frequency sound tops out at 240 m/s (790 ft/s).
Main article: Sound speed gradient
When sound spreads out evenly in all directions in three dimensions, the intensity drops in proportion to the inverse square of the distance. However, in the ocean, there is a layer called the 'deep sound channel' or SOFAR channel which can confine sound waves at a particular depth.
In the SOFAR channel, the speed of sound is lower than that in the layers above and below. Just as light waves will refract towards a region of higher refractive index, sound waves will refract towards a region where their speed is reduced. The result is that sound gets confined in the layer, much the way light can be confined to a sheet of glass or optical fiber. Thus, the sound is confined in essentially two dimensions. In two dimensions the intensity drops in proportion to only the inverse of the distance. This allows waves to travel much further before being undetectably faint.
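The difference between inverse-square (spherical) and inverse-distance (cylindrical) spreading is easiest to see in decibels. The sketch below compares pure geometric spreading losses over 1,000 km and ignores absorption; the function names and the 1 km reference distance are our own choices.

```python
from math import log10

def spherical_loss_db(r_km, r0_km=1.0):
    """Spreading loss when intensity falls as 1/r^2 (three dimensions)."""
    return 20.0 * log10(r_km / r0_km)

def cylindrical_loss_db(r_km, r0_km=1.0):
    """Spreading loss when intensity falls as 1/r (sound trapped in a layer)."""
    return 10.0 * log10(r_km / r0_km)

print(spherical_loss_db(1000.0))    # 60 dB between 1 km and 1,000 km
print(cylindrical_loss_db(1000.0))  # 30 dB over the same range
```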
A similar effect occurs in the atmosphere. Project Mogul successfully used this effect to detect a nuclear explosion at a considerable distance.
It may be seen that refraction effects occur only because there is a wind gradient; the sound is not simply being convected along by the wind.
As wind speed generally increases with altitude, wind blowing towards the listener from the source will refract sound waves downwards, resulting in increased noise levels.
University of Victoria
P.O. Box 3070 STN CSC
Victoria, BC V8W 3W1
In what sense was Shakespeare an anthropologist? Harold Bloom credits Shakespeare with having “invented” the human.(1) This may be an overstatement. Anthropologists are supposed to study humans, not invent them. Of course, when Bloom says such things he is being deliberately belligerent. He presents himself as the last romantic, the last believer in the transcendence of art. As far as Bloom is concerned, Shakespeare provided us not merely with entertainment but also with ethical models for how to live the good life—good life here meaning above all an aesthetic life. “Shakespeare,” Bloom writes, “teaches us how and what to perceive, and he also instructs us how and what to sense and then to experience as sensation.”(2) Shakespeare teaches you how to see the world aesthetically.
The flip side to Bloom’s unabashed romantic aestheticism is what Bloom calls “French Shakespeare,” or the Shakespeare of the “school of resentment.”(3) French Shakespeare is really a corollary of the romantic Shakespeare in which Bloom so fervently believes. For if Shakespeare did indeed invent the human, as Bloom claims, then presumably we can un-invent or deconstruct this invention by showing the ideological assumptions behind the idea of Shakespeare himself. This “hermeneutic of suspicion” has been the dominant mode of criticism for almost half a century. Michel Foucault argued in his 1966 Les mots et les choses that “man” is an invention of nineteenth-century anthropology.(4) The sooner we realize this, the better. It’s not clear to me exactly what we are supposed to do after we have established the fact that man is a recent invention. Bloom clearly is happy with the idea. He just disagrees about who should be credited with the invention. It is not nineteenth-century anthropology that invented man but Shakespeare. Moreover, Bloom believes that since Shakespeare’s intelligence vastly outmatches ours, we are better off accepting his version of humanity, at least for the time being. For all Bloom’s romantic bombast, there is a certain humility in his belief that Shakespeare is the definitive anthropologist. But this humility before the aesthetic master (Shakespeare) is won at the cost of anthropology itself. Bloom’s anthropological universe is a purely aesthetic one. You pay homage to the bard in the hope that some of his genius will rub off on you.
Like Bloom, René Girard believes in Shakespeare’s transcendent status among literary authors. But unlike Bloom, Girard interprets Shakespeare’s greatness in explicitly anthropological rather than purely aesthetic terms. Shakespeare is great not because he taught us how to perceive the world aesthetically, but because he discovered an otherwise nonobvious anthropological or sociological truth. If the social order is to survive, it needs to constrain the contagion of mimetic desire. So for Girard, Shakespeare is quite literally an anthropologist or sociologist. Presumably the only reason he didn’t get his PhD in anthropology or some other related theoretical discipline, such as sociology, philosophy, or critical theory, was that these fields of study didn’t exist in his day. Instead he was forced to make do with the medium he knew and loved best, which was the theatre.
The idea that Shakespeare was a keen student of human behaviour, a philosopher or anthropologist of sorts, is not new. But the more one emphasizes the idea that Shakespeare was a social theorist, the more tricky it becomes to explain the fact that he was also, quite obviously, a dramatist, an entertainer of the people. Bloom gets around this problem by making the strong romantic claim that human beings are fundamentally aesthetic creatures. Shakespeare teaches us how to perceive and feel. Hence for Bloom there is no contradiction between the two conceptions of Shakespeare. Dramatist and anthropologist are one. The two are the same because poetry defines—indeed creates—humanity. We are homo aestheticus, not homo politicus. As Bloom well knows, this stance puts him at odds with his anti-romantic contemporaries, which is precisely why Bloom’s heroes don’t go beyond the mid-twentieth-century Shakespeare critic Harold Goddard. Believers in homo aestheticus are a dying breed in the universities.
Still, at least Bloom has a tradition he can refer to, even if he is perceived as quaint and outmoded by the more advanced—postmodern—members of this tradition. In contrast, when Girard writes on Shakespeare, he appears to be writing in a vacuum. Let me quote from the introduction of his major work on Shakespeare, A Theater of Envy:
But the persuasiveness of the reply really depends upon whether you accept the premise. Is Girard as original as he claims to be? What is to distinguish Girard’s reading of Shakespeare from, for example, Francis Fergusson’s reading of the ritual origins of Greek and Shakespearean tragedy, or John Holloway’s remarks on the sacrificial origins of Shakespearean tragedy?(6) More generally, can’t we see a connection between Girard’s ideas about sacrifice and the work of James George Frazer or Émile Durkheim in the early twentieth century, both of whom were highly influential among critics of the early and mid-twentieth century? What about the ironic, late-romantic readings of Shakespeare by Wilson Knight or Harold Goddard? Finally, don’t Girard’s ideas about tragedy sound very similar to Kenneth Burke’s?
But Girard’s Theater of Envy is almost totally devoid of references to previous scholarship, and this has understandably upset Shakespeare specialists. Girard explicitly rejects the idea that he is just another “Shakespearean” humbly providing another interpretation to the ever-growing mountain of Shakespeare scholarship. “Interpretation,” Girard writes, “is not the appropriate word for what I am doing. My task is more elementary. I am reading for the first time the letter of the text that has never been read on many subjects essential to dramatic literature: desire, conflict, violence, sacrifice.”(7) Interpretation is an inadequate word for Girard because interpretation is what everybody else is doing. His task is, as he says, “more elementary.”
When Girard says his task is more elementary, one is reminded of Durkheim’s use of the word in the title of his magnum opus, The Elementary Forms of Religious Life. Girard’s other key phrase, “for the first time,” is also noteworthy. Girard is saying he is the first interpreter of Shakespeare to read him in this elementary fashion. Where others have merely interpreted Shakespeare in terms of the content of his works, Girard proposes to go beyond this content to explore the elementary anthropological conditions of the theatre itself. Girard proposes to trace literary content back to its elementary form in ritual sacrifice.
Let me briefly rehearse Girard’s argument about the elementary structure of sacrifice. Sacrifice is necessary because desire is contagious. Desire, because it is always imitated from others, tends to get out of hand. If we all imitate each other, sooner or later a crisis of “undifferentiation” occurs, when all hands reach for the same object. To constrain the contagiousness of mimetic desire, it is necessary every now and again to punish those who seem to be responsible for it. It is not necessary that these victims really are the cause of the disorder. What is absolutely necessary, however, is that they are believed to be the cause. This “mimetic” account of desire leads Girard to his famous scapegoat hypothesis of culture outlined in his 1972 book, La violence et le sacré.(8)
With this simple theory Girard explains numerous puzzling facts in Shakespeare’s plays. Consider, for example, his discussion of The Two Gentlemen of Verona. Proteus and Valentine are best friends. Proteus is in love with Julia, but he is torn between staying in Verona with Julia and following his best friend to Milan. Valentine goes to Milan and falls in love with Silvia. When Proteus decides to follow him there, he also falls in love with Silvia. Girard points out that Proteus, the more mimetic of the two friends, doesn’t really have a choice. Valentine so praises Silvia that Proteus imitates his friend’s desire and falls in love with the same woman. At the end Proteus tries to rape Silvia. She is saved only by the sudden appearance of Valentine, whose main concern seems to be that he has been betrayed by his best friend: “Oh, time most accurst, / ’Mongst all foes that a friend should be the worst!” (5.4.71–2). Proteus, embarrassed by his poor behaviour, begs forgiveness of his friend: “My shame and guilt confounds me. / Forgive me Valentine” (5.4.73–4).(9) In a gesture that upsets audiences and critics alike, Valentine responds by offering Proteus the woman he (Proteus) has just attempted to rape: “And, that my love may appear plain and free, / All that was mine in Silvia I give thee” (5.4.82–3). Girard explains this apparently despicable action as a logical consequence of mimetic desire. Valentine feels guilty for having encouraged Proteus to desire Silvia in the first place. He realizes that he is partly responsible for what his friend has done. “The only peaceful solution,” Girard says, “is to let the rival have the disputed object.”(10) Girard reads this moment as a classic mimetic double bind. To remain friends, Proteus and Valentine must give up their rivalry for the same object. Valentine learns this more quickly than Proteus, which is why he is the first to give up Silvia. The important point, Girard says, is not that Valentine abandons Silvia to a would-be rapist, but that he abandons the rivalry of mimetic desire. By giving up the object, he gives up the rivalry. Luckily this spirit of renunciation is catching. Proteus refuses to accept Silvia. Instead he returns to the girl he originally loved, Julia. The play ends happily with Valentine marrying Silvia, and Proteus marrying Julia.
Girard’s book is full of examples like this. Often the readings are quite brilliant. Highlights for me include his reading of The Winter’s Tale, especially the final act in which Girard describes Leontes as a man tempted by the sight of Florizel and Perdita holding hands just as Polixenes and Hermione had sixteen years earlier. Will Leontes be able to withstand this second test of mimetic desire? Happily, sixteen years of repentance allow him to triumph over the temptation. He agrees to be a friend to Florizel without also falling in love with Florizel’s fiancée, the beautiful Perdita, who is the mirror image of her mother, Hermione, the woman whom Leontes believes he has killed in a fit of jealous rage. As Girard says, “The entire past seems resurrected.”(11) But this time there is a difference. Leontes does not make the same mistake the second time. Instead of treating Florizel as a rival, he treats him as a friend. The key lines for Girard occur when Leontes says to Florizel, “Your honor not o’erthrown by your desires, / I am friend to them and you” (5.1.230–1). Leontes has mastered his desire, and this is why he can be a friend to Florizel. Unlike his earlier self, or the Proteus of The Two Gentlemen of Verona, Leontes has renounced the object of mimetic desire.
I could easily cite more examples of Girard’s reading of Shakespeare. But rather than simply repeat what Girard has said, I want to return to the question I began with. How does Girard justify his “mimetic” approach to Shakespeare? We have already seen that Girard claims that he is not simply offering another interpretation of Shakespeare. But if that is the case, then he can’t justify himself by citing the self-evident plausibility of his reading of Shakespeare, because that would be to concede precisely what he finds objectionable: that is, the assumption that there is no way to go beyond the aesthetic.
For many critics, of course, criticism is criticism of aesthetic texts, and that’s the end of the matter. Critics differ on how much latitude they’re willing to give to this idea of textuality. Bloom is a traditionalist because he restricts the text to Shakespeare, but many critics are willing to spread the wealth around a bit more. For this reason, I think it is wrong to read the new historicism as antithetical to aesthetic formalism. On the contrary, the new historicism is an attempt to expand the categories of aesthetic criticism beyond the canonical work to the surrounding cultural context. I think this is quite obvious, for instance, in the case of Stephen Greenblatt.(12)
Like the new historicists, Girard also claims that he is new. Implicit in this claim of newness is the sense that the aesthetic tradition has worn itself out and therefore needs renewing. Bloom’s representation of contemporary cultural criticism as an exercise in resentment may be a caricature, but it has the virtue of identifying our general disenchantment with the aesthetic. Bloom compensates for this disenchantment by raising his voice and plugging his ears. He imagines himself transcending his contemporaries to take his rightful place in a tradition of criticism that stretches from Johnson and Hazlitt to Bradley, Wilson Knight, and Harold Goddard. Girard’s claim to newness, however, is to present himself neither as the last romantic nor as a certified member of the disenchanted postmodern vanguard. Rather, his claim is that he is transcending the aesthetic tradition altogether. Shakespeare is great because he sees exactly what Girard sees: the futility of using art to conquer mimetic desire.
This conception of the aesthetic leads to a curious paradox. On the one hand, Shakespeare is a great dramatist who uncovers the mimetic structure of desire. On the other, he is a poor theorist because as a dramatist he is not at liberty to explain his theory in the straightforward logical fashion of a philosopher or anthropologist. Philosophers are not known for their capacity to earn a living by their writing alone. People are understandably unwilling to part with their hard-earned cash just to hear a philosopher lecture about the truth of his theory. Shakespeare’s solution to this dilemma, Girard says, was to be fiendishly clever. Knowing that merely stating the principles of mimetic desire in sober, logical fashion is unlikely to satisfy the crowds, who are expecting something with a bit more gore, sensation, and slapstick, Shakespeare disguised the theory by cloaking it in good old-fashioned tragedy and comedy. In other words, he wrote two plays in one. The first version of the play was for the regular audience, who were looking for pure entertainment. The second, ironic version was for the philosophers, hoping for something more profound.
In principle there is nothing wrong with this “two-audience” theory to describe Shakespeare’s method. You can strive to entertain everyone all the time, but if you wish to keep the attention of the more refined you will have to go beyond mere slapstick and gore. What is problematic in Girard’s use of the two-audience theory, however, is his apocalyptic application of it to modernity. Consider, for example, this remark from his discussion of Hamlet. After commenting that Hamlet is caught in the double bind between revenge and no revenge, Girard goes on to generalize Hamlet’s condition to all modernity:
Another way of putting this is to say that Girard subordinates his reading of literature to his reading of religion; in particular, to his reading of Christianity. The reason he can ignore the difference between classical, neoclassical, romantic, modernist, and postmodernist aesthetics is that next to Christianity, the difference between these aesthetic periods appears negligible. For Girard, the really significant difference, the one that trumps all others, is the difference between primitive religion and Judeo-Christianity. The role of literature in understanding this fundamental difference is at best ambivalent. Consider Girard’s explanation of Shakespeare’s turn to romance towards the end of the playwright’s career. These last plays, especially The Winter’s Tale and The Tempest, are (Girard says) resolutely self-undermining. The Tempest is an allegory of Shakespeare’s career, beginning with Caliban who represents the monstrosity of mimetic desire, which Shakespeare had exploited to satisfy the audience’s relentless appetite for mimetic violence. When Prospero breaks his staff and promises to leave off magic for good, this is Shakespeare’s way of saying, “Enough already!” Tired of the mimetic games of the dramatist, Shakespeare announces his retirement. Presumably Shakespeare had learned his lesson; in particular, the lesson of the Gospels, in which forgiveness and love triumph over the violence and rivalry of mimetic desire. For Shakespeare, to continue to write drama would be merely bad faith.
I said just now that Girard doesn’t really care about the difference between the various periods of literature because these seem insignificant when compared to the more fundamental anthropological problem of the origin of literature in sacrificial ritual. I think that the two-audience theory can help us unpack this problem. The theatre affords excellent opportunities for words to be supported by their actual flesh-and-blood contexts. This fact should not be underestimated. Despite what many philosophers believe, or used to believe, language is not primarily a means for communicating facts about the world. It is above all a means for producing what psychologists call “joint attention.”(14) The most elementary form of language, the ostensive, is a pointing gesture. But what is worth pointing at? Girard believes it is the scapegoat, the first cultural and historical object of joint attention. But paradoxically he also insists that this form of attention is nonsymbolic. In Things Hidden Since the Foundation of the World, Girard writes, “I think that even the most elementary form of the victimage mechanism, prior to the emergence of the sign, should be seen as an exceptionally powerful means of creating a new degree of attention, the first non-instinctual attention.”(15)
Here are the essential ingredients of sacrifice, all packed into a single primal scene. Again Girard stresses that he is looking at the most “elementary form” of culture, the very first moment of “non-instinctual attention.” But there is a problem. The scapegoaters are both conscious and unconscious of what they are doing. They are conscious in the sense that this is a new moment of attention in which instinct has been superseded by something else, by a new type of attention that is therefore by definition the very first of its kind, unique in all human history. But they are also unconscious in the sense that this new type of attention is only a very minimal form of awareness. Girard really wants to say that they are in a state of semi-consciousness, a sort of liminal state between waking and sleeping where one is not really sure what one is doing. Perhaps noticing this ambivalence, Girard’s interlocutor, Jean-Michel Oughourlian, asks a very good question: “Would this already be a sacred victim?” Girard responds:
I don’t think Girard has adequately answered Oughourlian’s question. The key point is not the amount of violence in the scene, nor the tremendous contrast between violence and peace that Girard says the scene produces. Girard assumes that the sheer violence of the mimetic crisis is sufficient to generate an experience of the sacred. By bombarding your perceptual field with enough violence, you will eventually be compelled to see the sacred. But violence in itself is nothing new. On the contrary, nature is full of it. What is key is rather the representation of the violence and, more precisely, the collective form of attention that Girard says the violence leads to. For if the victim truly is to be represented as sacred, then this is already to say that the victim is an object of a collective attention, which is irreducible to the kind of indexical associations of purely individual perceptual experience.(17) Collective attention—symbolic representation—cannot originate unconsciously. On the contrary, the function it performs is by definition a conscious one—that is, to order and constrain the chaotic and largely unconscious associations of individual sensory experience. The joint scene of attention requires the individual not merely to attend to the object qua individual, but to attend to it as part of an intersubjective, collectively shared experience. In the scene of joint attention I attend to your attention to the object. And this relationship is reciprocal. Just as I attend to your attention to the object, so you attend to my attention to the object. Our relationship to the object is an instance of shared, collective attention, and this—the origin of joint attention—is indeed quite revolutionary in the history of hominid evolution. In the oscillation between other-model and central-object the word is born. This intersubjective oscillation is also what distinguishes the act of pointing from the indexical signals of animal communication. Animal signals remain unmediated by the intersubjective, joint attentional scene.
Girard’s ambivalence towards the uniqueness of this originary event is reproduced in his ambivalence towards modernity and Shakespeare’s place in it. Girard’s paradoxical claim that the originary scene is both conscious and unconscious, both a unique event in human history and an intermediate stage in a series of endless intermediate stages, applies equally to his understanding of Shakespeare. On the one hand, Shakespeare is a vast intelligence who exposes ruthlessly and definitively the myth of romantic desire. On the other, Shakespeare is a dramatist who must hide this mimetic awareness behind the mythologizing narratives of tragic and comic form. Shakespeare has the potential to be a unique event in human history, but unfortunately the medium he selected for sharing his discovery of mimetic desire inevitably meant that his anthropological insights would be buried behind a wall of conventional theatrical pieties. If we read for the theatrical pieties, we will miss forever the mimetic intelligence. This is the fate of all Shakespeare criticism before Girard. If we read for the mimetic intelligence, we are forced to dispense with the theatre altogether, which is why Girard argues that Shakespeare’s farewell to the stage in The Tempest is so critically self-referential. It is a deconstruction of the aesthetic myth of Shakespeare by Shakespeare himself.
So what can we learn from Girard’s reading of Shakespeare? I think we can learn a great deal from Girard, but I have to add a significant caveat. Girard’s ambivalence towards Shakespeare is a direct consequence of his ambivalence towards language. This is most clear in his hypothesis of the origin of sacrifice, which he sees as the fundamental cultural institution pre-existing even language itself. By claiming that the first act of scapegoating was unconscious and unrepresentable, Girard can say that all subsequent historical evidence that seems to contradict his hypothesis is merely a misrepresentation, a ruse distracting us from the reality of scapegoating. The technique of using the unconscious as a clever ruse has been made familiar to us by Freud. Because the unconscious is by definition elusive, it is always up to the one who is uniquely qualified in sniffing it out to let you know whether or not you have correctly identified the problem. The same rule applies to Girard’s theory of the scapegoat. If you don’t see how Shakespeare’s plays demonstrate the scapegoating hypothesis, then you just have to look harder. And you do that by training yourself in the technique of Girard’s peculiar brand of mimetic anthropology.
In the end, all claims to originality are by definition problematic. If you are the first to see things this way, then by definition nobody else does. But Girard’s claim goes one step further. Not only is he the first, he is also the last. By making scapegoating unconscious, he absolves himself of the inconvenience of ever being refuted. For how can you refute something of which you are unconscious? Any refutation can be immediately dismissed as yet another confirmation of the unconscious at work. One has been hoodwinked yet again by the ruse of scapegoating.
What is the solution to this conundrum? The solution is to admit that scapegoating depends upon representation, and that representation itself cannot originate unconsciously. Once we have conceded this, it remains up to the individual investigator to decide what to include in an anthropological hypothesis of origin. The real point of formulating such a hypothesis is not to be the first or the last, the most original or the most definitive. It is to provide a minimal starting point for dialogue on our fundamental humanity. That is the simplest way to define an anthropology. I hope that Girard’s work on Shakespeare will be read in this sense: that is, as an attempt to initiate a dialogue concerning Shakespeare’s contribution to human self-understanding—in other words, as a step towards a Shakespearean anthropology.(18)
1. Harold Bloom, Shakespeare: The Invention of the Human (New York: Riverhead Books, 1998).
2. Ibid., 8.
3. Ibid., 9.
4. “Les mots et les choses” was translated into English as “the order of things.” See Michel Foucault, The Order of Things: An Archeology of the Human Sciences (New York: Vintage Books, 1994).
5. René Girard, A Theater of Envy: William Shakespeare (Oxford: Oxford University Press, 1991), 5–6.
6. Francis Fergusson, The Idea of a Theater: A Study of Ten Plays: The Art of Drama in Changing Perspective (Princeton, NJ: Princeton University Press, 1949); John Holloway, The Story of the Night: Studies in Shakespeare’s Major Tragedies (London: Routledge & Kegan Paul, 1961).
7. Girard, A Theater of Envy, 5.
8. La violence et le sacré was preceded by Girard’s 1961 study of the novel, Mensonge romantique et vérité romanesque, in which Girard first proposed his theory of mimetic desire. In La violence et le sacré Girard applied the “mimetic” model of desire he discovered in the novels of Stendhal, Flaubert, Dostoyevsky, and Proust to the general anthropological problem of human origin, proposing a global theory of human society. Mensonge was translated into English in 1965 as Deceit, Desire, and the Novel: Self and Other in Literary Structure. La violence was translated into English in 1977 as Violence and the Sacred.
9. All quotations of Shakespeare are from The Complete Works of Shakespeare, ed. David Bevington (New York: Pearson Longman, 2009).
10. Girard, A Theater of Envy, 16.
11. Ibid., 328.
12. I discuss Greenblatt’s work in “The Critic as Ethnographer,” New Literary History 35 (2004): 621–61, and “The Culture of Criticism,” Criticism 49 (2007): 459–79. Both essays are reprinted in The End of Literature: Essays in Anthropological Aesthetics (Aurora, CO: Davies Group Publishers, 2009).
13. Girard, A Theater of Envy, 284.
14. See, for example, Michael Tomasello, The Cultural Origins of Human Cognition (Cambridge, MA: Harvard University Press, 1999), or his more recent book, A Natural History of Human Thinking (Cambridge, MA: Harvard University Press, 2014).
15. René Girard, Things Hidden Since the Foundation of the World, trans. Stephen Bann and Michael Metteer (Stanford, CA: Stanford University Press, 1987), 99.
16. Ibid., 100.
17. See Terrence Deacon, The Symbolic Species: The Co-evolution of Language and the Brain (New York: W.W. Norton, 1997).
18. This essay is an excerpt from Shakespeare’s Big Men: Tragedy and the Problem of Resentment, forthcoming from the University of Toronto Press, June 2016.
Re-thinking the Economy’s ‘Fuel System’ to Avert the Collapse of Civilization
“Money makes the world go ‘round.” That is a once-common line that I haven’t heard in a long time. Money definitely makes the economy “go ‘round.”
Money is the fuel of the economy. Without it, the economy will not function. (Even in a barter economy each participant in a trade is in effect producing one’s own money, trade by trade.) So the monetary system can be thought of as an economy’s fuel system.
Today, every nation has money that is its ‘legal tender’. That means only it can be used to pay taxes and only it can be required by banks for repayment of loans. [Nations in the EU have decided to have a legal tender, the Euro, common to all member nations.]
The most important feature of the monetary system is how money originates/enters the economy. That is the functional basis of the whole economy.
In other words, the most important question concerning the economy is, where does money come from? Today, it comes from debt.
Every nation’s monetary system is debt-based. That means that every country’s economy is functionally debt-based.
Time for an upgrade?
The history of the contemporary, debt-based monetary system goes back to 1668, when Sweden created the first Modern central bank. Before the central-bank monetary system was invented coinage was a prerogative of monarchs. That system is the bulwark of the Modern economy.
The central-bank monetary system has of course evolved over its history, but the basic structure was there at the beginning. It makes the central government and the central bank the institutional pillars of the nation. The Modern nation-state rests upon them and their interrelationship.
Capitalism did not begin to develop before the central-bank monetary system was invented. It developed in nations with that system in place.
So capitalism is embedded in the Modern economy. It has no existence independent of the central-bank monetary system that is the bulwark of the Modern economy. That system is more economically important than capitalism is. Capitalism is dependent on that system and subordinate to it. The central-bank monetary system can exist — did exist — without capitalism; the reverse is not true.
For present purposes, then, capitalism, as such, is irrelevant. That is to say, the monetary system is of such overweening importance to society that its structure and functioning must be analysed and proposals for its improvement must be evaluated without regard for possible effects on capitalism, as a particular paradigm for supplying goods and services within the economy that is defined by the monetary system.
It is a fact that, after the Civil War, when capitalism triumphed in this nation, the U.S. had no central bank. Even though that institution’s functions were being carried out via other means, one can understand how capitalists and their ‘running dogs’ feel like capitalism exceeds the monetary system in economic importance in this nation.
It does not. It is those functions, above all (but not limited to) the function of supplying legal tender for the economy, on which capitalism is dependent, which make its existence possible.
In 1913, with a single Act of Congress, we established our current central bank (the third one in our nation’s history), the Federal Reserve System (the ‘Fed’), (re)consolidating those functions in one institution. Even that was done more than one — hugely eventful — century ago.
Might it be time for a monetary upgrade?
Avoiding the cataclysm to which the debt-based economy is bringing us
At the end of this essay I’ll present a way to change from the debt-based monetary system/economy to an income-based monetary system/economy. In the U.S. we could make that transition with a single Act of Congress. It would not require tearing down any of the existing institutional structure. Even the central bank would still exist.
[For the record, I do have an M.A. in economics; I have been developing this idea, working out the details and addressing the various issues such a big, new idea necessarily entails, over a considerable period of time.]
The U.S., as the world’s most important economy, should lead the way in establishing an income-based economy. If we do not, sooner or later our economy will completely collapse (if collapse hadn’t already been initiated from some other source) and take the entire global economy down with it.
This time, complete economic collapse will be the end of civilization as we know it (as the rule of law). The other time the Modern economy collapsed completely, which is called the “Great Depression,” government was available to act as a life-support system for the economy. Now government is already fully involved in supporting the economy — with the central bank fully involved in supporting government. So there is no institution available to come to the rescue.
That means the collapse of the economy will mean the collapse of the nation-state system — i.e., civilization as we know it. Most likely (if a nuclear war did not result), following a period of unprecedented suffering the world would be divided among local warlords, with almost all survivors as their slaves. The rule of law would be replaced with rule by individuals’ whims. [Silver lining (if nuclear annihilation were avoided): that would mean an abrupt end to too much human activity producing greenhouse gases, thus putting the brakes on global warming.]
Making income the functional basis of the economy would make the Modern economy stable and self-regulating. It would provide the means to eliminate unemployment (at no cost to anyone) and poverty (without having to redistribute anything) for all citizens, whatever the GDP, the total output of goods and services, might be. The process could be used to fund government (all government, forever, at the current per capita rate of total government spending), eliminating the need for any taxes/public debt at all levels of government. Sustainability would be increased (even without any other measures).
Those outcomes are not hopes, or dreams, or visions, or suppositions, or even probabilities; they are absolutely, positively guaranteed. Moreover, none of that would require anyone to change any of one’s behavior one bit. Rather, those outcomes are built into the structure of the income-based monetary system (which I happened to stumble upon wondering how a really just economy might be possible — but this is a strictly economic proposal, with no reference to justice). This monetary paradigm could be adopted by any nation or group of nations (without compromising national sovereignty).
As much as I hate it, capitalism would also still exist. This change to the monetary system would not require any changes to the current paradigm for supplying goods and services within the economy. Still, having civilization (as the rule of law) with capitalism is better than not having civilization would be.
Where money comes from in the debt-based economy
In the debt-based economy money is created by debt in two different ways. One occurs when banks lend. The other occurs when the central government borrows money.
When banks lend they do not actually hand over money. They keep their money in their vaults. In ‘lending’ banks actually ‘extend credit’. That credit is used to make a purchase. The credit is thereby transferred from the debtor to the seller. The seller can then use that credit to make purchases, and those sellers can use it to make purchases, ad infinitum.
[A long time ago lending was done with actual pieces of paper, ‘bank notes’. The borrower presented the note to the seller. The seller could take that note to the bank to get the money it promised (redeem it). Alternatively, the note could be used by the seller to make a purchase, just like actual currency, and that seller could choose to redeem it or to make a purchase with it, ad infinitum. Banks could also sell their own ‘notes’, a promissory note that the bank would give a specified amount of (usually) gold to whoever presented the note to the bank. So back then banks had — or claimed to have — (usually) gold in their vaults (as well as money). [The notes were sold at a discount to agents who would resell them for the bank; the agent could sell a note at a price that would represent a profit for both him and the person who bought it and took it to the bank for redemption.] The idea, for the banks, was for those notes to get used as currency; even honest bankers were banking on some notes never being redeemed, but circulating forever — or being permanently lost or destroyed. Notice that a piece of paper currency in the U.S. today is designated as a “Federal Reserve Note,” the Federal Reserve System’s Notes being our legal tender. There was a time when Federal Reserve currency was in the form of “Certificates” that could be exchanged at a bank for gold or silver, but those days are long gone (though I am old enough to have received and spent old paper currency with “Silver Certificate” written on it, not “Federal Reserve Note”). Now its Notes can only be exchanged at a bank for other Federal Reserve Notes.]
So, when banks extend credit they create something that can be used indefinitely for making purchases. That is one of the three standard functions of money. In the lending process that extension of credit takes on another function of money: being used as a unit of account. When that credit changes hands it gets counted as income that can be retained rather than spent, thus taking on the third function of money: it can be used as a store of value. So when banks lend they are effectively creating money, though they are not producing actual money.
Debt is also used to produce actual money. That occurs when the central government borrows money. It does that by selling promissory notes, promising to pay back the amount loaned to it plus some specified amount of interest. (Notice the slight changes from the way banks used to do their notes.) The central bank handles that for the central government. The central bank is required by law to be the ‘lender of last resort’ — to see to it that every promissory note of the central government is purchased. In the process the central bank has the authority to direct the treasury of the central government to create actual money to be handed to the central bank for it to purchase some of those promissory notes. [Assets are transferred to ‘balance the books’, but since those assets can be the government’s own promissory notes, and can at all events be used by the treasury to conduct other transactions, that transfer of assets is effectively nothing more than an accounting fig leaf.] How much, if any, actual money the central bank chooses to have created depends on many factors, but the decision is its and its alone. (Those transactions have never involved physical currency getting hauled back and forth — what used to be done ‘on paper’ is now done through digits on computers — but actual money is created nonetheless.)
With debt as the basis of the economy it was perhaps inevitable that our economy would end up where it is today. Where is that?
The mess we have gotten ourselves into, and how
As can be seen in any graph comparing debt and GDP from, say, 1929 to the present (such as this one), the U.S. underwent a dramatic change in 1980. We went from ‘tax-and-spend’ to ‘borrow-and-spend’. The institutional structure of the system did not change, but that change in flows of money that started with the central government has engulfed businesses, households, and individuals. We have now had ample time to see what that change has meant for the economy as a whole.
For one thing, we can see that debt has grown way, way faster than the GDP has. That graph shows a gap between the GDP and total debt that is growing like the distance between the ground and a fighter jet upon takeoff. Changing from tax-and-spend to borrow-and-spend was like hitting the afterburners.
We have seen through experience that debt makes the economy more precarious. Notice I did not use the word “unstable.” Instability refers to the ‘natural’ ups and downs, expansions and contractions (recessions) that have always been a part of the Modern economy. (That used to be routinely referred to as ‘the business cycle’, but for whatever reason that is a term one seldom encounters these days.)
“Stability” can be thought of as the length of expansions, the periods of time between recessions. Debt can actually make the economy more ‘stable’ by making those periods longer. We are currently experiencing the longest economic expansion in our history in the U.S.
Why would debt make periods of economic expansions longer?
We must first note that when we say “longer” we are making comparison between 1980 to the present, the era of borrow-and-spend, and the time between WWII and 1980, the era of ‘tax-and-spend’. The economy before that was a different animal — one that perished with the Great Depression. WWII ended the Depression and facilitated a transition to what has been, functionally, a different economy than the one that existed when the Depression occurred (though not structurally different: the central-bank, debt-based monetary system has been in place, recall, since we re-instituted a central bank in 1913).
Taxing and spending by the central government can be used to ‘manage’ the economy, i.e. support expansions and end — perhaps even avoid — recessions. To end/avoid a recession requires ‘stimulus’. Such “stimulus” has always meant ‘deficit spending’.
That is, the central government has borrowed the money to increase its spending. It could raise taxes to get the money to spend, but of course the idea of raising taxes will always be opposed by some ‘on principle’. Most importantly in economic terms, though, raising taxes would be anti-stimulating because it would take money from people and businesses that could have been spent on goods and services and plant and equipment — the stuff of the GDP, the very thing stimulus is supposed to be increasing.
So even in the tax-and-spend regime government used borrowing to stimulate the economy. In that mode of economic management, though, the idea is that the increase in debt is temporary and will be repaid by higher tax revenues (without higher tax rates) when the economy expands, which increases total income and therefore total taxes collected.
That did not exactly happen. The debt of the central government increased rather steadily up to 1980. (Right after WWII ended it did decrease for a few years because we were no longer funding a World War.) It went from a post-war low of $252 billion in 1948 to $908 billion in 1980 [numbers from here].
We should also note, however, that the central government’s total debt went from being 92% as large as the output of the entire economy to 32% of the GDP over that period [same source]. So the ratio of the debt of the national government to national income, which is one measure of solvency, improved dramatically under the tax-and-spend regime.
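To see why the ratio improved even though the nominal debt more than tripled, it helps to back out the implied GDP figures from the numbers just quoted. The Python sketch below does only that arithmetic; the GDP values are derived from the stated ratios, not taken from a separate source.

```python
# Back-of-the-envelope check of the tax-and-spend era figures quoted above.

debt_1948, ratio_1948 = 252e9, 0.92   # federal debt and debt/GDP, 1948
debt_1980, ratio_1980 = 908e9, 0.32   # federal debt and debt/GDP, 1980

gdp_1948 = debt_1948 / ratio_1948     # ~$0.27 trillion, implied by the stated ratio
gdp_1980 = debt_1980 / ratio_1980     # ~$2.8 trillion, implied by the stated ratio

print(f"Debt grew {debt_1980 / debt_1948:.1f}x")  # ~3.6x
print(f"GDP grew {gdp_1980 / gdp_1948:.1f}x")     # ~10.4x
```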
At any rate, once stimulus makes the economy start to expand, inflation becomes a potential problem. To keep inflation in check using the central government to manage the economy has meant raising taxes — again, something many people always oppose ‘on principle’.
By the middle of the 1970’s the idea of raising taxes for any reason had become all but politically impossible. That is when the phrase ‘tax-and-spend Democrats’ was invented. As a result, even though inflation was rapidly increasing, no meaningful action was taken to dampen it. By the end of the 1970’s it got really bad. (Major inflation was one reason the nominal ratio of debt to income got as small as it did as of 1980, but that ratio was pretty steadily decreasing over that whole period of time.)
[The Democratic Party had control of Congress from the middle of the 1950’s to 1980 — and beyond (for another 1½ decades). In the ‘70’s there were still lots of ‘conservatives’ in the Democratic Party who opposed raising taxes ‘on principle’. (Many of them also opposed the desegregation of society ‘on principle’, which is why they left the Democratic Party, the Party that stood for integration, for the Republican Party, which had invited them in — President Nixon’s (R) “southern strategy.” That is all historical fact, not opinion; it explains why ‘tax-and-spend Democrats’, who held the presidency and both houses of the Congress in the second half of the 1970’s, did not raise taxes even though inflation was getting almost completely out of hand.)]
It must be noted that unemployment was also high in the second half of the 1970's, which militated against raising taxes, but inflation was becoming an existential threat. In 1979 President Carter (D) nominated Paul Volcker to be the Chairman of the Board of Governors of the Federal Reserve System with the understanding that he would raise interest rates to stop inflation by inducing a recession. He was approved by the Senate and he did raise interest rates very sharply in 1980.
A very steep but very brief recession ensued; the Carter/Volcker plan of action succeeded — to the benefit of President Reagan (R), elected in 1980. That, however, represented the beginning of the borrow-and-spend regime.
The borrow-and-spend regime uses debt to fund increases in spending by the central government as a routine matter. It even uses new debt to pay off old debts that come due. As of 2018, the debt of the national government of this country was 105% of GDP [same source] (so 92% to 32% to 105%). (Note also that, just as unusually high inflation made the ratio look better than it really was, lower inflation means it is relatively even worse than it appears.)
In the borrow-and-spend regime it has become the responsibility of the Fed to ‘manage’ the economy. It does that primarily by adjusting interest rates. Lower interest rates stimulate the economy by encouraging borrowing for spending and investing (as well as speculating), and also mean cheaper costs of borrowing for the national government. Higher interest rates have the opposite effect.
So the Fed has sought to keep rates as low as possible, which has encouraged borrowing. With incomes for most individuals and households being stagnant since 1980, even moderate rates of inflation have meant that they have had to increase their borrowing to maintain a middle class lifestyle. Businesses, especially large ones, have taken advantage of low rates to borrow rather than spend cash on hand.
The traditional form of stimulus, purposeful deficit spending by the central government, is still sometimes used, though it now adds but a tiny fraction each time to the total debt of the central government. Tax cuts have also been used to stimulate the economy, with the resultant drop in tax revenue covered by yet more borrowing.
As we’ve seen, in the tax-and-spend regime stimulus tended to lead to inflation. Measures taken to stop inflation (when they were used) invariably contributed a recession. In the borrow-and-spend regime inflation has not been such a problem. For that reason, the Fed as manager of the economy has been able to allow expansions to continue for longer periods of time.
One reason for less inflation is that when households and individuals are repaying loans they have less income available for purchasing goods and services, so there is less demand than there would otherwise be, which dampens prices compared to what they would be. Moreover, because wages and salaries for most positions in the economy have been stagnant during the borrow-and-spend era, while individuals’ and households’ average indebtedness has hugely increased, that debt is being repaid out of effectively the same personal incomes.
For all that, major inflation is actually still with us. In the borrow-and-spend regime inflation has mostly occurred in assets — real estate, stocks, bonds — the stuff of the wealth of the ‘investor class’. Far from being a problem that should be stamped out, the Fed has seen that kind of inflation as a good thing (despite one Chairman’s famous reference to “irrational exuberance”). The Fed is, after all, the pinnacle of the private banking system, and bankers do love wealth.
To be sure, other things have been going on in the economy, but the big picture is accurately portrayed by the transition from a tax-and-spend regime to a borrow-and-spend regime. Now that regime is teetering.
The inevitable end of the path we are on
Remember “precarious?” We’ve seen that, in the nature of the borrow-and-spend regime, economic expansions have lasted longer. Yet with debt as the source of the economic fuel that allows that to happen, that does create an economy that will suffer much greater damage when the end of an expansion does come. The ‘Great Recession’ of 2008 is an example.
In economic terms debt is ‘leverage’. Physically, a lever is a means of increasing the power of a given amount of force. In the economy debt is used as leverage to increase the immediate purchasing power of a given amount of income. Whether debt is used to purchase food, clothes, cars, houses, stocks, bonds, or anything else, the principle is the same. (It is technically illegal to use debt to buy stocks or bonds, other than buying stocks ‘on margin’, but that bar is easily skirted. I’ve done it myself.)
If a person is physically using a lever to increase the power of whatever amount of force that person is able to generate, the potential for serious injury is compounded, should the lever break. The same is even more certainly true of using leverage in the economy. The more debt an economy has in it, the more damage will occur when the inevitable economic contraction occurs.
With the debt of the national government alone, much less all other debt, now greater than the size of the total income of the entire economy, it is safe to say that the economy of this country is in an extremely precarious state. Modern monetary theory (MMT) maintains that the debt of the national government is all but irrelevant, but that is only because actual money can be created ad infinitum to keep it from defaulting on its debts. Even if such a course did not represent a dangerous economic path in itself, that says nothing about all the other debt that has accumulated in the borrow-and-spend regime.
Of course, there is always ‘quantitative easing’ (QE). With it actual money can be created ad infinitum to keep banks and other ‘significant financial entities’ from going broke.
If, however, with our current monetary system, we got to the point of expressly creating actual money both to fund the central government outright and to keep the ‘significant financial entities’ financially solvent indefinitely, that would truly be the other side of the economic looking glass. Nothing would make economic sense. It would be the end for this economic system. Most likely, hyperinflation such has never been seen would eventually precede total economic disintegration.
I do not mean to suggest that collapse is (necessarily) imminent. The emergence of the use of negative interest rates to spur spending (which negative rates encourage by discouraging saving), which inevitably means more borrowing, and the unlimited capacity to create actual money probably provide the means to stave off economic collapse until some exogenous shock, probably due to war (especially cyber warfare) or global warming, occurs. When the economic collapse comes, however, whether from within or without, it will be the end for civilization as we know it.
Why not consider an alternative? Make no mistake, the alternative I am proposing is a new idea. It is similar in some ways to some ideas that have been proposed or implemented, but it is not a variation on any other theme any reader has ever encountered. Given its guaranteed outcomes, it is something we should do regardless of the ultimate fate of the debt-based economy.
In encountering this idea it is of the utmost importance to set aside all ideology. This idea was not formulated with any ideology in mind. It is not intended to please or displease people on either side of the ideological divide. If looked at through the lens of any ideology, it will be totally distorted and impossible to understand.
My idea is to make income, not debt, the basis of the economy. That would be accomplished by instituting an “allotted income.” Links for further reading will be provided for those whose curiosity exceeds their cynicism, but here I’ll give the briefest possible sketch of the basic idea.
· The amount of the allotted income would be based on the current median income — say, in the U.S., $15/hr.; $600/wk.
· The money for that income would be created as needed, so it would be available for an unlimited number of people.
· The allotted income would be paid to eligible citizens: though it would not be paid to all citizens, any citizen could become eligible for it.
· It would thus be an absolutely, positively guaranteed (potential) minimum income available for all (adult) citizens.
The total of that income would form the supply of money for the economy. To prevent inflation money would have to be returned to its point of origin, which could be either the central bank or a newly created Monetary Agency. People and businesses would, however, retain plenty of money (determined by the amount of income) and, unlike taxes, no money would be collected from any person or business before it could be used for purchases, to include investing (and even speculation).
To some readers this idea might look very similar to the situation I described previously as being “on the other side of the economic looking glass.” Here, though, the amount of money being created, though vast, has a definite limit. Also, the allotted income goes directly to individuals, not to government or financial entities. It involves no debt. Finally, there is a mechanism in place for returning money to its point of origin, which our current monetary system does not have.
Those differences make all the difference. The banking system, including the central bank, would still exist. Banks would still extend credit. Debt would not, however, be the sole, or even the major source of fuel for the economy.
If interested, a ’5 min read’ summarizing the idea is available here in Medium. It has links to more detailed explications of the paradigm.
This idea needs advocates. | <urn:uuid:a724990a-4679-4c05-b5d0-c251900f9bd2> | CC-MAIN-2022-33 | https://medium.com/discourse/re-thinking-the-economys-fuel-system-to-avert-the-collapse-of-civilization-48a3517a8dc3?source=post_internal_links---------3---------------------------- | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571692.3/warc/CC-MAIN-20220812105810-20220812135810-00097.warc.gz | en | 0.967274 | 5,722 | 2.5625 | 3 |
The U.S. Constitution grants no categorical right to counsel in civil cases. Undaunted, the legal profession's renewed effort to improve access to justice for low-income unrepresented civil litigants includes a movement to establish this right. How this right is implemented turns out to be as important as whether such a right exists. To be effective, any new right must be national in scope, adequately funded, and protected from political influence. Lawyers must be available early and often in the legal process, so that they can provide assistance for the full scope of their client's legal problem and prevent further legal troubles. A right to civil counsel should encompass proceedings where basic needs are at stake, and not be influenced by inadequately informed judgments of who is worthy of representation.
Designing a right to counsel for people with civil justice problems is no simple task. Consider the state of the constitutional right to counsel in state criminal cases, which the U.S. Supreme Court recognized in 1963 in Gideon v. Wainwright.1
The public defender system is in crisis because most state governments do not allocate enough funding to fulfill their constitutional duty. Gideon is an unfunded federal mandate. In Missouri in 2016, the governor slashed the annual public-defender budget approved by the legislature from $4.5 million to $1 million. As a result, the director of the state's public-defender system lacked funding to hire the 270 additional attorneys needed to serve the criminal caseload. Advocates decided that a drastic measure was needed to draw attention to the problem, so the director appointed the governor (a lawyer) to represent a poor criminal defendant in place of a court-appointed lawyer.2 The ploy was ultimately unsuccessful because a state court held that only the state's courts had the power to appoint a lawyer, but it generated national media attention for the budget issue.3
The U.S. Constitution grants no categorical right to counsel in civil cases. Decades of Supreme Court jurisprudence have rejected constitutional claims to this right, most recently in 2011.4 Undaunted, the legal profession's renewed effort to improve access to justice for low-income unrepresented civil litigants includes a movement to establish this right.
In recent years, there have been impressive gains toward this goal through legislation and court victories. In 2017, New York City became the first city in the United States to enact legislation providing low-income tenants facing eviction with legal representation.5 In 2016, California put into force a 2009 state law establishing publicly funded counsel for poor litigants in cases about housing, child custody, conservatorship, and guardianship.6 In 2016, the Supreme Court of New Jersey held that parents have a right to counsel in adoption cases.7
The right-to-counsel movement continues to build momentum. By 2018, eighteen right-to-counsel bills had been enacted in fourteen states, and an additional eighty-four were pending in Congress and in state legislatures.8 The laws enacted include a San Francisco ballot measure providing a publicly funded right to counsel for tenants facing eviction, a Massachusetts law requiring appointment of counsel for anyone at risk of being incarcerated for failure to pay fees or fines, and a Wisconsin law creating a pilot project to provide a right to counsel for parents in child welfare proceedings.9
Because right-to-counsel victories like these have proceeded largely on an issue-by-issue basis, they have leapfrogged an important question. What types of problems or legal proceedings should trigger the right to civil counsel? In 2006, the American Bar Association (aba) called on federal, state, and local governments to provide legal counsel to people who are poor or have low income “as a matter of right at public expense” in cases where basic human needs are at stake, such as those involving shelter, food, safety, health, or child custody.10 The aba acknowledged that its proposal was “substantially narrower” than what would be necessary to close the justice gap documented in legal-needs studies, and advocated for a “careful, incremental” approach involving the “evolution of a right to civil counsel on a state-by-state basis.”11
Recent legislative activity has not followed the aba's cautious approach. The victories, particularly laws creating a right in eviction cases, also challenge widespread political skepticism about state legislatures appropriating money to fund these new rights. Still, successes thus far are piecemeal and clustered in wealthier and Democratic-leaning states. If the right to civil counsel develops state by state, it will likely become more robust and better funded and cover a broader range of matters in blue states such as California, Massachusetts, and New York, while remaining limited and poorly funded in red states such as Oklahoma, Mississippi, and Texas.
To prevent these discrepancies, it would be best for Congress to establish a federal right to civil counsel that reached across state boundaries. To be effective, this right must be secure in the sense that it is adequately funded, resilient in the sense that it is protected from political interference, and unencumbered in the sense that it is not hobbled by limitations and restrictions. The right to counsel in criminal cases has been severely eroded in many states, nearly to the breaking point. Likewise, adjusted for inflation, federal funding for the Legal Services Corporation, which has provided funding for essential civil legal services to low-income Americans since 1974, has declined by nearly 40 percent over the last three decades.12 Restrictions dictate who can and cannot be sued by legal-aid attorneys, what procedural devices they can use, and what claims they can bring.13 Legal-aid attorneys cannot address systemic problems or leverage the strength of mass claims to challenge wrongful conduct by powerful institutions or governmental entities.
Advocates for a right to civil counsel want to reject these restrictions, empowering legal-aid lawyers to confront systemic injustices on a mass scale. A right to publicly funded lawyers for people with civil legal issues will aid those served, but is unlikely to force changes in their adversary's usual behavior or practices. Providing representation to someone facing unlawful debt collection may resolve that person's case favorably, for example, but it does not prevent the debt collector from continuing to use abusive and deceptive practices with other debtors. A right to counsel that permitted mass claims, by contrast, would allow broader structural and injunctive relief impacting large groups of similarly situated people, a much more efficient and effective way to advance civil justice.
A resilient and secure right to civil counsel would require adequate funding and protection from political interference. The aba estimates that a right to civil counsel when basic human needs are at stake would cost approximately $4.2 billion in current dollars, or about 1.5 percent of total U.S. expenditures on lawyers.14 Return-on-investment studies show that an expanded right to civil counsel can be economically feasible. One study estimated that establishing a right to civil counsel in eviction cases in New York City would save the city $320 million per year through reduced spending on homeless shelters, medical care for the homeless, and law enforcement.15
Any right to civil counsel should be protected from political interference. Funding a broad expansion of a right to civil counsel with public money would likely encounter political resistance. Even solid evidence that the costs of a right to civil counsel are manageable will not deter detractors inclined to politicize publicly funded rights. Other basic rights in our society–for example, rights to public education, medical care, and welfare benefits–have a long history of political struggle as well as public support. The same is likely to happen with a right to civil counsel.
Funding approaches must insulate civil justice budgets from the vagaries of political winds, annual appropriations battles, and opposition that seeks to weaken the right to counsel. If not, any such right will be forever vulnerable to funding rollbacks (or even elimination), regardless of its cost-effectiveness and vital role in providing essential services. As the histories of the right to counsel in criminal cases and of the Legal Services Corporation show, detractors can undermine justice by burdening the right to counsel with all kinds of restrictions.
An effective right to civil counsel must be implemented so that the lawyers provided can both address existing legal problems and prevent future issues. People should be able to access the right at key turning points, and the right should be broad enough to address their full range of legal needs. At present, when these rights exist, they are highly restricted.16 For example, in family law matters such as child welfare and child support enforcement, many states that provide access to counsel do so at the last possible moment, when the risk of serious loss is imminent, rather than from the start and throughout the case, leaving parties unrepresented at critical junctures in their case. These rights are also limited, providing counsel only for the specific issue at hand.17 In the case of child welfare proceedings, this means that, in some states, the right to civil counsel is available only to parents defending themselves in a termination-of-parental-rights proceeding.18 Similarly, states that provide counsel in child support enforcement cases do so only in situations where the defendant is facing civil incarceration for failure to pay court-ordered support.19 These are late-stage events when the unrepresented individual stands on the precipice of great loss: losing their children or their liberty. To provide counsel only at this eleventh hour is, to put it mildly, too little too late. Cases such as these can stretch back many months, even years. During the long span of time when the party is unrepresented, all kinds of critical events and decisions occur without benefit of advice or representation.
My own research examining the experiences of noncustodial parents in child support proceedings reveals that attorney representation earlier in the case and covering a broader scope of legal issues would substantially change case outcomes. The study seeks to understand how attorney representation and other more limited forms of legal assistance affect civil court proceedings for low-income litigants. Most noncustodial parents in these cases are very low-income black fathers who lack attorney representation and owe current and past-due child support, often in the thousands of dollars. The study examines how their cases are handled by the judges and government attorneys they encounter and how they navigate the civil process in proceedings in which they face a variety of increasingly punitive enforcement measures, including civil incarceration for failure to pay support.
The research reveals that a right to civil counsel would be considerably less effective if restrictions limited when in the legal process appointed counsel were available. For example, lawyers-by-right are not made available when a child support order is established. They are also not provided when a parent must file a motion to modify an existing order to reflect a significant change in circumstances, such as losing one's job and income. In both instances, the timing and the scope of representation matter, whether the attorney provides full representation or is limited to performing only specific tasks. Having access to a full-service attorney earlier would ensure that initial orders are for appropriate amounts and are modified when circumstances warrant. Without counsel at these junctures and for broader purposes, pro se defendants are likely to fall behind in their child support payments and face mounting debts that result in contempt proceedings with a risk of civil incarceration and other harsh penalties.
Dearis Calahan's case illustrates how earlier appointment of counsel can be critical.20 A fifty-three-year-old father of seven, he had three children with one woman, one child with another woman, and three children with a third woman. All of Dearis's children are now adults. When I spoke with him, he was in court because he owed past-due child support. Dearis recalled that he owed between $7,000 and $10,000 in past-due support. He was frustrated that the state would not explain how it calculated what he owed. Before his hearing, he made calls to several lawyers seeking legal help, but all wanted a retainer of at least $2,500. The state had suspended his driver's license because of the amount he owed in child support. Dearis, representing himself, argued unsuccessfully for getting his license reinstated so he could drive.
In one of his cases, Dearis was not present in court at the initial hearing when the amount of child support due was set. According to him, he did not receive notice of that hearing and, in his absence, “they kind of set it, gave me a certain number that they figured that it would be proper for me” to pay. Many child support orders are established as a default judgment when noncustodial parents do not appear in court, sometimes because they receive no notice to appear. Such orders are usually calculated based on presumed rather than actual earnings. For Dearis, his payments amounted to 20 percent of the earnings from a full-time, minimumwage job, even though his actual earnings fell far short of that amount. Unable to pay the full amount, he fell behind and quickly accumulated child support debt.
Having access to an attorney at that earlier stage in the case–when the child support order was first established–could have made a significant difference. With representation, it is unlikely that a default judgment would have been entered and, even if it had been, an attorney would have filed a motion to vacate it because Dearis did not receive notice of the hearing. An attorney would have (at a minimum) advocated that the child support order be based on Dearis's actual earnings, more realistically reflecting his ability to pay support. An attorney could also have advocated that the court apply low-income defendant guidelines when calculating support, or even for a reduction from the guidelines because Dearis was supporting several other children at the same time. Dearis lacked knowledge about these intricacies and thus could not raise them on his own behalf.
Maurice Shamble's case shows why appointed counsel's scope of representation matters. Until 2014, he had what he considered a good job, paying $26,000 a year. Under an order set at 40 percent of his net income, the state guideline level for four children, payments came straight out of his paychecks through wage garnishment. However, after he lost his job and his income, the order was not adjusted. He did not know that he had to notify the child support agency that he was no longer working. He assumed they would know because payments would no longer be coming directly out of his paycheck. He also did not know that losing his job provided grounds to reduce the award or that, to do so, he needed to file a motion to modify and appear at a court hearing. Instead, his arrears spiraled out of control. When I spoke with him, he owed past-due support of over $10,000.
The other pro se fathers in the study also lacked steady, reliable employment. Some, like Maurice, lost their jobs after a period of relative stability. Others had a reduction in earnings when employers cut back their hours. Most, however, had jobs that did not pay a living wage and, like the low-wage labor force nationally, had precarious and volatile employment. Most were underemployed and struggled to make ends meet, cobbling together temp work, seasonal jobs, part-time jobs, cash jobs in the informal economy (like yard work for neighbors), and assistance from family and friends. Though they faced frequent changes in their employment status, their child support obligations remained static and did not reflect their ability to pay.
Appointed counsel is available only in situations where the defendant is facing civil contempt for nonpayment, and can address only the contempt proceedings themselves. So an appointed attorney may not file a motion to modify the order on the client's behalf, even though an earlier failure to modify the order after a reduction in the parent's earnings contributed to the arrearage and led to the contempt action. Without such a modification, the debt will grow ever-larger and lead a court to summon the defendant again to explain why he should not be held in contempt for failure to pay support. Preventing an appointed attorney from addressing the essential underlying issue in the case makes no sense.
Navigating the modification process was no easy feat for the pro se litigants in my study, including Maurice. After he was civilly incarcerated for contempt of court because of the unpaid child support, Maurice realized that he had to understand the legal complications impacting his life. He spent many hours researching the law in the courthouse library and online. He had a binder full of handwritten notes and case printouts from his research and he shuffled through them repeatedly as he discussed his case with me. He believed he had found defenses in doctrines on jurisdiction and separation of powers, but it would be remarkable if Maurice understood all the intricacies of the legal principles he studied. Maurice reported that a judge dismissed his arguments as “Internet gibberish” and denied his motion.
The experiences of Dearis Calahan and Maurice Shamble show that how a right to civil counsel is administered is as important as whether a right exists. A right triggered only when a defendant faces a contempt action is woefully insufficient. Most of the judges and lawyers interviewed for the study believed that there was little a lawyer could do to help at that stage in the case. They argued that the matter was open and shut: there was a valid order to pay child support and the defendant had not complied; appointing a lawyer would not change the outcome. Their position is debatable, since counsel could argue that the defendant's failure to comply with the order was not willful and, thus, grounds for contempt were not established. But appointing counsel earlier could have prevented these problems entirely.
Though the right to civil counsel for child support defendants is cramped and inadequate, it provides far more than is generally available from legal aid. Funding for civil legal services for indigent Americans falls far below the demand, and providers must necessarily establish service priorities. Few legal-services offices provide representation to noncustodial parents in child support cases. Compared with custodial mothers, noncustodial fathers are not sympathetic parties. Why devote limited resources to advance their claims? Men like Dearis, with his seven children by three different women, are demonized in politics and ridiculed in popular culture. Someone like him, who has fallen behind in his payments and seeks to reduce his monthly order, is more likely to be viewed as a “deadbeat dad” who is not providing for his children than as an economically vulnerable father who cannot pay his current order, despite his best efforts in the low-skilled, low-wage labor market.
The right to counsel in criminal cases is poorly implemented, yet it embraces values worth incorporating into a right to civil counsel: it is broadly available to indigent defendants at risk of incarceration, regardless of how disliked they may be. A right to civil counsel should likewise be broadly available. In the civil system, as in the criminal, a right to counsel should not be based on social acceptance. It should be based on a fair assessment of who needs a lawyer to make their case when the help really matters.
The access-to-justice study referenced in this essay is supported by two research awards provided by the National Science Foundation (nsf) under Grant No. ses-1323064 and Grant No. ses-1421098.
Gideon v. Wainright, 372 U.S. 335 (1963).
Matt Ford, “A Governor Ordered to Serve as a Public Defender,” The Atlantic, June 4, 2016.
A Missouri circuit court reinstated the lawyer, holding that only the state's courts had the authority to appoint a lawyer. See Celeste Bott, “Court Rules Public Defender Can't Appoint Missouri Governor as a Defense Attorney,” St. Louis Post-Dispatch, August 25, 2016.
Turner v. Rogers, 564 U.S. 431 (2011); and Tonya L. Brito, David Pate Jr., Daanika Gordon, and Amanda Ward, “What We Know and Need to Know about Civil Gideon,” South Carolina Law Review 67 (2) (2016): 223–243.
Ashley Dejean, “New York Becomes First City to Guarantee Lawyers to Tenants Facing Eviction,” Mother Jones, August 11, 2017.
The law was passed in 2009 and the pilot program commenced in 2011. See Erin Gordon, “Advocates Promote a Right to Counsel in Civil Cases, Too,” ABA Journal, February 2018; and Clare Pastore, “Gideon is My Co-Pilot: The Promise of Civil Right to Counsel Pilot Programs,” University of the District of Columbia Law Review 17 (1) (2014): 75–130. Conservatorship and guardianship are similar concepts in the law. A conservator is a person who has been appointed by a court to manage the estate of someone who is legally incapable of doing so, usually due to disability, illness, or injury. A guardian is a person who has been appointed to care for another's person or estate, or both, because of their infancy, incapacity, or disability.
Adoption by J.E.V., 141 A.3d 254 (N.J. 2016).
The website for the National Coalition for a Civil Right to Counsel maintains a list of enacted and pending bills. See http://civilrighttocounsel.org/legislative_developments/2018_civil_right_to_counsel_bills#pending.
J. K. Dineen, “SF's Measure F Wins, Will Give Tax-Funded Legal Help to Tenants Facing Eviction,” San Francisco Chronicle, June 5, 2018; An Act Relative to Criminal Justice Reform, Mass. Bill S.2371, 190th General Court (Mass. 2018); and Relating to: a parent's right to counsel in a child in need of protection or services proceeding, providing an exemption from emergency rule procedures, granting rule-making authority, and making an appropriation, Wisc. Assembly Bill 784, Regular Session of 2017–2018 (Wisc. 2018).
American Bar Association Task Force on Access to Civil Justice et al., Report to the House of Delegates (Chicago: American Bar Association, 2006). Many state and local bar associations have since passed resolutions similar to the aba's. Deborah L. Rhode and Scott L. Cummings, “Access to Justice: Looking Back, Thinking Ahead,” Georgetown Journal of Legal Ethics 30 (3) (2017): 485–500. Also, in 2016, the aba issued a two-year Report on the Future of Legal Services in the United States that “urge[d] the legal profession to commit to the goal of 100 percent access to effective assistance for essential civil legal needs and [made] a number of recommendations that would go a long way towards achieving that goal, including providing civil legal aid as a matter of right when basic human needs are at stake and making wider use of innovative legal technologies”; American Bar Association Commission on the Future of Legal Services, Report on the Future of Legal Services in the United States (Chicago: American Bar Association, 2016).
American Bar Association Task Force on Access to Civil Justice et al., Report to the House of Delegates, 14, 12.
Deborah L. Rhode, “Legal Services Corporation: One of the Worst Cuts in Trump's Budget,” The Legal Aggregate, May 31, 2017. Legal Services Corporation (lsc) funding has increased over the years and the decline noted is a relative comparison. For a history of lsc funding, see Alan W. Houseman, “Civil Legal Aid in the United States: An Update for 2015, A Report for the International Legal Aid Group” (Washington, D.C.: Consortium for the National Equal Justice Library, 2015).
Catherine Albiston, Su Li, and Laura Beth Nielsen, “Public Interest Law Organizations and the Two-Tier System of Access to Justice in the United States,” Law and Social Inquiry 42 (4) (2017): 990–1022.
American Bar Association Task Force on Access to Civil Justice et al., Report to the House of Delegates.
Stout Risius Ross, “The Financial Cost and Benefits of Establishing a Right to Counsel in Eviction Proceedings under Intro 214-A” (Chicago: Stout Risius Ross, 2016).
Laura Abel and Max Rettig, “State Statutes Providing for a Right to Counsel in Civil Cases,” Clearinghouse Review Journal of Poverty Law and Policy 40 (2) (2006): 245–270.
All participant names are pseudonyms. | <urn:uuid:75d8bade-be83-4d09-909d-dfa1131d3862> | CC-MAIN-2022-33 | https://direct.mit.edu/daed/article/148/1/56/27252/The-Right-to-Civil-Counsel | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00297.warc.gz | en | 0.961565 | 5,111 | 2.875 | 3 |
Bourne Shell Scripting/Files and streams
The Unix world: one file after another[edit | edit source]
When you think of a computer and everything that goes with it, you usually come up with a mental list of all sorts of different things:
- The computer itself
- The monitor
- The keyboard
- The mouse
- Your hard drive with your files and directories on it
- The network connection leading to the Internet
- The printer
- The DVD player
- et cetera
Here's a surprise for you: Unix doesn't have any of these things. Well, almost. Unix certainly has files. Unix has endless reams of files. And since Unix has files, it also has a concept of "between files" (think of it this way: if your universe consists only of boxes, you automatically know about spaces where there are no boxes as well). But Unix knows nothing else than that. Everything in the whole (Unix) universe is a file.
Everything is a file. Even things that are really weird things to think of as files, are files. Your (data) files are files. Your directories are files. Your hard drive is a file. Your keyboard, monitor and printer are files. Yes, really: your keyboard is a read-only file of infinite size. Your monitor and printer are infinitely sized write-only files. Your network connection is a read/write file.
At this point you're probably asking: Why? Why would the designers of the Unix system have come up with this madness? Why is everything a file? The answer is: because if everything is a file, you can treat everything like a file. Or, put a different way, you can treat everything in the Unix world the same way. And, as we will see shortly, that means you can also combine virtually everything using file operations.
Before we move on, here's an extra level of weirdness for you: everything in Unix is a file. Including the processes that run programs. Effectively this means that running programs are also files. Including the interactive shell session that you've been running to practice scripting in. Yes, really, that text screen with the blinking cursor is also a file. And we can prove it too. You might recall that in the chapter on Running Commands we mentioned you can exit the shell using the Ctrl+d key combination. Because that combination produces the Unix character for... that's right, end-of-file!
Streams: what goes between files[edit | edit source]
As we mentioned in the previous section, everything in Unix is a file -- except that which sits between files. Between files Unix defines a mechanism that allows data to move, bit by bit, from one file to another: the stream. A stream is literally what it sounds like: a little river of bits pouring from one file into another. Although actually a bridge would probably have been a better name because unlike a stream (which is a constant flow of water) the flow of bits between files need not be constant, or even used at all.
The standard streams[edit | edit source]
Within the Unix world it is a general convention that each file is connected to at least three streams (that's because that turned out to be the most useful number for those files that are processes, or running programs). There can be more and in fact each file can cause itself to be connected to any number of streams (a program can print and open a network connection, for instance). But there are three basic streams available to all files, even though they may not always be useful or used. These streams are called the "standard" streams:
- Standard in (stdin)
- the standard stream for input into a file.
- Standard out (stdout)
- the standard stream for output out of a file.
- Standard error (stderr)
- the standard stream for error output from a file.
As you can probably tell, these streams are very geared towards those files that are actually processes of the system. In fact many programming languages (like C, C++, Java and Pascal) use exactly these conventions for their standard I/O operations. And since the Unix operating system family includes them in the core of the system definition, these streams are also central to the Bourne Shell.
Getting hold of the standard streams in your scripts[edit | edit source]
So now we know that there's a general mechanism for basic input and output in Unix; but how do you get hold of these streams in a script? What do you have to do to hook your script up to the standard out, or read from the standard in? Well, the happy answer is: nothing. Your scripts are automatically connected to the standard in, out and error stream of the process that is running them. When you read input, it automatically comes from the standard in. Your output goes straight to the standard out. And program errors go right to the standard error. In fact you've already used these streams: every example so far that has printed anything has done so to the standard output stream of your script.
And what about the shell in interactive mode? Does that use those standard streams as well? Yes, it does. In interactive mode, the standard in stream is connected to the keyboard file. And the standard output and standard error are connected to the monitor file.
Okay... But what good is it?[edit | edit source]
This discussion on files and streams has been very interesting so far and a nice insight into the depths of Unix. But what good does it do you to know all this? Ah, glad you asked!
The Bourne Shell has some built-in features that allow you to do neat tricks involving files and their streams. You see, files don't just have streams -- you can also cross-connect the streams of two files. At the end of the previous section we said that the standard input of the interactive session is connected to the keyboard file. In fact it is connected to the standard output stream of the keyboard file. And the standard output and error of the interactive session are connected to the standard input of the monitor file. So you can connect the streams of the interactive session to the streams of devices.
But wait. Do you remember the remark above that the point of Unix considering everything to be a file was that everything gets treated like a file? This is why that was important: you can connect a stream from any file to a stream of any other file. You can connect your interactive shell session to the printer or the network rather than to the monitor (or in addition to the monitor) using streams. You can run a program and have its output go directly to the printer by reconnecting the standard output stream of the program. You can connect the standard output stream of one program directly to the standard input stream of another program and make chains of programs. And the Bourne Shell makes it really simple to do all that.
Do you suddenly feel like you've stuck your fingers in the electrical socket? That's the feeling of the raw power of the shell flowing through your body....
Redirecting: using streams in the shell[edit | edit source]
As explained in the previous section, the shell process is connected by standard streams to (by default) the keyboard and the monitor. But very often you will want to change this linking. Connecting a file to a stream is a very common operation, so would expect it to be called something like "connecting" or "linking". But since the Bourne Shell has default connections and everything you do is always a change in the default connections, connecting a file to a (different) stream using the shell is actually called redirecting.
There are several operators built in to the Bourne Shell that relate to redirecting. The most basic and general one is the pipe operator, which we will examine in some detail further on. The others are related to redirecting to file.
Redirecting to file[edit | edit source]
As we explained (or rather: hinted at) in the previous section, one of the enormously powerful features of the Bourne Shell on top of a Unix operating system is the ability to chain programs together. Execute a program, have it produce output, then automatically send that output to another program as input. The possible combinations are endless, as is the power of what you can achieve.
One of the most common places where you might want to send a program's output is to a file in the file system. And this time by file we mean a regular, classic data file and not a Unix "everything is a file including your hardware" file. In order to achieve this you can imagine that we can use the chaining mechanism described above: let a program generate output through the standard output stream, then connect that stream (i.e. redirect the output) to the standard input stream of a program that creates a data file in the file system. And this would absolutely work. However, redirecting to a data file is such a common operation that you don't need a separate end-of-chain program for it. Redirecting to file is built straight into the Bourne Shell, through the following operators:
- process > data file
- redirect the output of process to the data file; create the file if necessary, overwrite its existing contents otherwise.
- process >> data file
- redirect the output of process to the data file; create the file if necessary, append to its existing contents otherwise.
- process < data file
- read the contents of the data file and redirect that contents to process as input.
Redirecting output[edit | edit source]
Let's take a closer look at these operators through some examples. Take the simple Bourne shell script below called 'hello.sh':
This code may be run in any of the ways described in the chapter Running Commands. When we run the script, it simply outputs the string "Hello" to the screen and then returns us to our prompt. But let's say we want to redirect the output to a file instead. We can use the redirect operators to do that easily:
This time, we don't see the string 'Hello' on the screen. Where's it gone? Well, exactly where we wanted it to: into the (new) data file called 'myfile.txt'. Let's examine this file using the 'cat' command:
Let's run the program again, this time using the '>>' operator instead, and then examine 'myfile.txt' again using the 'cat' command:
You can see that 'myfile.txt' now consists of two lines — the output has been added to the end of the file (or concatenated); this is due to the use of the '>>' operator. If we run the script again, this time with the single greater-than operator, we get:
Just one 'Hello' again, because the '>' will always overwrite the contents of an existing file if there is one.
Redirecting input[edit | edit source]
Okay, so it's clear we can redirect output to a data file. But what about reading from a data file? That's also pretty common. The Bourne Shell helps us here as well: the entire process of reading a file and pumping its data into a stream is captured by the '<' operator.
By default 'stdin' is fed from your keyboard; run the 'cat' command without any arguments and it will just sit there, waiting for you to type something:
In fact 'cat' will sit there all day until you type a 'Ctrl+D' (the 'End of File Character' or 'EOF' for short). To redirect our standard input from somewhere else use the '<' (less-than operator):
So 'cat' will now read from the text file 'myfile.txt'; the 'EOF' character is also generated at the end of file, so 'cat' will exit as before.
Note that we previously used 'cat' in this format:
Which is functionally identical to
However, these are two fundamentally different mechanisms: one uses an argument to the command, the other is more general and redirects 'stdin' – which is what we're concerned with here. It's more convenient to use 'cat' with a filename as argument, which is why the inventors of 'cat' put this in. However, not all programs and scripts are going to take arguments so this is just an easy example.
Combining file redirects[edit | edit source]
It's possible to redirect 'stdin' and 'stdout' in one line:
The command above will copy the contents of 'myfile.txt' to 'mynewfile.txt' (and will overwrite any previous contents of 'mynewfile.txt'). Once again this is just a convenient example as we normally would have achieved this effect using 'cp myfile.txt mynewfile.txt'.
Redirecting standard error (and other streams)[edit | edit source]
So far we have looked at redirecting the "normal" standard streams associated with files, i.e. the files that you use if everything goes correctly and as planned. But what about that other stream? The one meant for errors? How do we go about redirecting that? For example, if we wanted to redirect error data into a log file.
As an example, consider the ls command. If you run the command 'ls myfile.txt', it simply lists the filename 'myfile.txt' – if that file exists. If the file 'myfile.txt' does NOT exist, 'ls' will return an error to the 'stderr' stream, which by default in Bourne Shell is also connected to your monitor.
So, lets run 'ls' a couple of times, first on a file which does exist and then on one that doesn't:
And again, this time with 'stdout' redirected only:
We still see the error message; 'logfile.txt' will be created but will be empty. This is because we have now redirected the stdout stream, while the error message was written to the error stream. So how do we tell the shell that we want to redirect the error stream?
In order to understand the answer, we have to cover a little more theory about Unix files and streams. You see, deep down the reason that we can redirect stdin and stdout with simple operators is that redirecting those streams is so common that the shell lets us use a shorthand notation for those streams. But actually, to be completely correct, we should have told the shell in every case which stream we wanted to redirect. In general you see, the shell cannot know: there could be tons of streams connected to any file. And in order to distinguish one from the other each stream connected to a file has a number associated with it: by convention 0 is the standard in, 1 is the standard out, 2 is standard error and any other streams have numbers counting on from there. To redirect any particular stream you prepend the redirect operator with the stream number (called the file descriptor. So to redirect the error message in our example, we prepend the redirect operator with a 2, for the stderr stream:
No output to the screen, but if we examine 'logfile.txt':
As we mentioned before, the operator without a number is a shorthand notation. In other words, this:
is actually short for
We can also redirect both 'stdout' and 'stderr' independently like this:
'stdio.txt' will be blank , 'logfile.txt' will contain the error as before.
If we want to redirect stdout and stderr to the same file, we can use the file descriptor as well:
Here '2>&1' means something like 'redirect stderr to the same file stdout has been redirected to'. Be careful with the ordering! If you do it this way:
you will redirect stderr to the file that stdout points to, then send stdout somewhere else — and both streams will end up being redirected to different locations.
Special files[edit | edit source]
We said earlier that the redirect operators discussed so far all redirect to data files. While this is technically true, Unix magic still means that there's more to it than just that. You see, the Unix file system tends to contain a number of special files called "devices", by convention collected in the /dev directory. These device files include the files that represent your hard drive, DVD player, USB stick and so on. They also include some special files, like /dev/null (also known as the bit bucket; anything you write to this file is discarded). You can redirect to device files as well as to regular data files. Be careful here; you really don't want to redirect raw text data to the boot sector of your hard drive (and you can!). But if you know what you're doing, you can use the device files by redirecting to them (this is how DVDs are burned in Linux, for instance).
As an example of how you might actually use a device file, in the 'Solaris' flavour of Unix the loudspeaker and its microphone can be accessed by the file '/dev/audio'. So:
Will play a sound, whereas:
Will record a sound.(you will need to CTRL-C this to finish...)
This is fun:
Now wave the microphone around whilst shouting - Jimi Hendrix style feedback. Great stuff. You will probably need to be logged in as 'root' to try this by the way.
Some redirect warnings[edit | edit source]
The astute reader will have noticed one or two things in the discussion above. First of all, a file can have more than just the standard streams associated with it. Is it legal to redirect those? Is it even possible? The answer is, technically, yes. You can redirect stream 4 or 5 of a file (if they exist). Don't try it though. If there's more than a few streams in any direction, you won't know which stream you're redirecting. Plus, if a program needs more than the standard streams it's a good bet that program also needs its extra streams going to a specific location.
Second, you might have noticed that file descriptor 0 is, by convention, the standard input stream. Does that mean you can redirect a program's standard input away from the program? Could you do the following?
The answer is, yes you can. And yes, things will break if you do.
Pipes, Tees and Named Pipes[edit | edit source]
So, after all this talk about redirecting to file, we finally get to it: general redirecting by cross-connecting streams. The most general form of redirecting and the most powerful one to boot. It's called a pipe and is performed using the pipe operator '|'. Pipes allow you to join two processes together through a "pipeline", which directly connects the stdout of one file to the stdin of another.
As an example let's consider the 'grep' command which returns a matching string, given a keyword and some text to search. And let's also use the ps command, which lists running processes on the machine. If you give the command
it will generally list pagefuls of running processes on your machine, which you would have to sift through manually to find what you want. Let's say you are looking for a process which you know contains the word 'oracle'; use the output of 'ps' to pipe into grep, which will only return the matching lines:
Now you will only get back the lines you need. What happens if there's still loads of these ? No problem, pipe the output to the command 'more' (or 'pg'), which will pause your screen if it fills up:
What about if you want to kill all those processes? You need the 'kill' program, plus the process number for each process (the second column returned by the ps command). Easy:
In this command, 'ps' lists the processes and 'grep' narrows the results down to oracle. The 'awk' tool pulls out the second column of each line. And 'xargs' feeds each line, one at a time, to 'kill' as a command line argument.
Pipes can be used to link as many programs as you wish within reasonable limits (and we don't know what these limits are!)
Don't forget you can still use the redirectors in combination:
There is another useful mechanism that can be used with pipes: the 'tee'. To understand tee, imagine a pipe shaped like a 'T' - one input, two outputs:
The 'tee' will copy whatever is given to its stdin and redirect this to the argument given (a file); it will also then send a further copy to its stdout - which means you can effectively intercept the pipe, take a copy at this stage, and carry on piping up other commands; useful maybe for outputting to a logfile, and copying to the screen.
A note on piped commands: piped processes run in parallel on the Unix environment. Sometimes one process will be blocked, waiting for input from another process. But each process in a pipeline is, in principle, running simultaneously with all the others.
Named pipes[edit | edit source]
There is a variation on the in-line pipe which we have been discussing called the 'named pipe'. A named pipe is actually a file with its own 'stdin' and 'stdout' - which you attach processes to. This is useful for allowing programs to talk to each other, especially when you don't know exactly when one program will try and talk to the other (waiting for a backup to finish etc.) and when you don't want to write a complicated network-based listener or do a clumsy polling loop.
To create a 'named pipe', you use the 'mkfifo' command (fifo=first in, first out; so data is read out in the same order as it is written into).
This creates a named pipe called 'mypipe'; next we can start using it.
This test is best run with two terminals logged in:
1. From 'terminal a'
The 'cat' will sit there waiting for an input.
2. From 'terminal b'
This should finish immediately. Flick back to 'terminal a'; this will now have read from the pipe and received an 'EOF', and you will see the data on the screen; the command will have finished, and you are back at the command prompt.
Now try the other way round:
1. From terminal 'b'
This will now sit there, as there isn't another process on the other end to 'drain' the pipe - it's blocked.
2. From terminal 'a'
As before, both processes will now finish, the output showing on terminal 'a'.
Here documents[edit | edit source]
So far we have looked at redirecting from and to data files and cross-connecting data streams. All of these shell mechanisms are based on having a "physical" source for data — a process or a data file. Sometimes though, you want to feed some data into a target without having a source for it. In these cases you can use an "on the fly" document called a here document. A here document means that you open a virtual text document (in memory), type into it as usual, close it and then treat it like any normal file.
Creating a here document is done using a variation on the input redirect operator: the '<<' operator. Like the input redirect operator, the here document operator takes an argument. For the input redirect operator this operand is the name of the file to be streamed in. For the here document operator it is the string that will terminate the here document. So using the here document operator looks like this:
When using here documents in combination with variable or command substitution, it is important to realize that substitutions are carried out before the here document is passed on. So for example: | <urn:uuid:d338a446-8840-4ffe-b964-9d3e2ddfed50> | CC-MAIN-2022-33 | https://en.wikibooks.org/wiki/Bourne_Shell_Scripting/Files_and_streams | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571536.89/warc/CC-MAIN-20220811224716-20220812014716-00297.warc.gz | en | 0.931805 | 5,052 | 3.28125 | 3 |
The five lectures that constitute Reason and Existenz were delivered at the University of Groningen, Holland, in the spring of 1935. In these lectures, the author knits together the various themes that are elaborated in his many philosophical writings. Reason and Existenz is thus both a helpful summary of and an excellent introduction to the author’s philosophy.
Jaspers defines philosophy as the elucidation of Existenz (Existenzerhellung). (The term “Existenz” is retained because the English “existence” is not its equivalent.) This elucidation of Existenz needs to be sharply contrasted with any attempt at a conceptualization of Existenz through objectively valid and logically compelling categories. Jaspers denies that a unifying perspective of the content of existential reality is possible. Nevertheless, a clarification of or elucidation of Existenz as it expresses itself in concrete situations can be productively undertaken. According to Jaspers, the philosopher is the one who strives for such clarification.
Jaspers finds in the concrete philosophizing of Søren Kierkegaard and Friedrich Nietzsche a profound exemplification of the philosophical attitude. Both, in their interest to understand existential reality from within, had serious reservations about any program that intended to bring thought into a single and complete system, derived from self-evident principles. Any claim for a completed existential system affords nothing more than an instance of philosophical pretension. Existenz has no final content; it is always “on the way,” subject to the contingencies of a constant becoming. Kierkegaard and Nietzsche, in grasping this fundamental insight, uncovered the existential irrelevancy of Georg Wilhelm Friedrich Hegel’s system of logic. It was particularly Kierkegaard, in his attack on speculative thought, who brought to light the comic neglect of Existenz in the essentialism and rationalism of Hegel. Kierkegaard and Nietzsche further laid the foundations for a redefinition of philosophy as an elucidation of Existenz through their emphasis on the attitudinal, as contrasted with the doctrinal, character of philosophy. They set forth a new intellectual attitude toward life’s problems. They developed no fixed doctrines that can be abstracted from their thinking as independent and permanent formulations. They were both suspicious of scientists who sought to reduce all knowledge to simple and quantifiable data. They were passionately interested in the achievements of self-knowledge. Both taught that self-reflection is the way to truth. Reality is disclosed through a penetration to the depths of the self. Both realized the need for indirect communication and saw clearly the resultant falsifications in objectivized modes of discourse. Both were exceptions—in no sense models for followers. They defy classification under any particular type and shatter all efforts at imitation. What they did was possible only once. Thus the problem for us is to philosophize without being exceptions, but with our eyes on the exception.
At the center of Jaspers’s philosophizing is the notion of the Umgreifende. Some have translated this basic notion as the “Comprehensive”; others have found the English term, the “Encompassing,” to be a more accurate rendition of the original German. The Encompassing lies beyond all horizons of determinate being, and thus never makes its appearance as a determinable object of knowledge. Like philosopher Immanuel Kant’s noumenal realm, it remains hidden behind the phenomena. Jaspers readily agrees with Kant that the Encompassing as a designation for ultimate reality is objectively unknowable. It escapes every determinate objectivity, emerging neither as a particular object nor as the totality of objects. As such, it sets the limits to the horizon of humanity’s conceptual categories. In thought, there always arises that which passes beyond thought itself. Humanity encounters the Encompassing not within a conceptual scheme but in existential decision and philosophical faith. This Encompassing appears and disappears only in its modal differentiations. The two fundamental modes of the Encompassing are the “Encompassing as being-in-itself” and the “Encompassing as being-which-we-are.” Both of these modes have their ground and animation in Existenz.
Jaspers’s concern for a clarification of the meaning and forms of being assuredly links him with the great metaphysicians of the Western tradition, and he is ready to acknowledge his debt to Plato, Aristotle, Baruch Spinoza, Hegel, and Friedrich Wilhelm Joseph Schelling. However, he differs from the classical metaphysicians in his relocation of the starting point for philosophical inquiry. Classical metaphysics has taken as its point of departure being-in-itself, conceived either as Nature, the World, or God. Jaspers approaches his program of clarification from being-which-we-are. This approach was already opened up by the critical philosophy of Kant, which remains for Jaspers the valid starting point for philosophical elucidation.
The Encompassing as being-which-we-are passes into further internally articulated structural modes. Here empirical existence (Dasein), consciousness as such (Bewusstsein überhaupt), and spirit (Geist) make their appearance. Empirical existence designates the self as an object, by virtue of which one becomes a datum for examination by the various scientific disciplines such as biology, psychology, anthropology, and sociology. In this mode of being, one apprehends oneself simply as an object among other objects, subject to various conditioning factors. One is not yet properly known as human. One's distinctive existential freedom has not yet been disclosed. One is simply an item particularized by the biological and social sciences for empirical investigation.
The second structural mode of the being-which-we-are is consciousness as such. Consciousness has two meanings. In one of its meanings, it is still bound to empirical reality. It is a simple principle of empirical life that indicates the particularized living consciousness in its temporal process. However, we are not only particularized consciousnesses that are isolated from one another; we are in some sense similar to one another, by dint of which we are disclosed as consciousness as such. Through this movement of consciousness as such, one is able to understand oneself in terms of ideas and concepts that have universal validity. Dasein, or empirical existence, expresses a relationship of humanity to the empirical world. Consciousness as such expresses a relationship of humanity to the world of ideas. Ideas are permanent and timeless. Therefore, one can apprehend oneself in one’s timeless permanence.
The influence of the Greek philosopher Plato upon the thought of Jaspers becomes clearly evident at this point. People participate in the Encompassing through the possibility of universally valid knowledge in which there is a union with timeless essences. As simple empirical consciousness, people are split into a multiplicity of particular realities; as consciousness as such, people are liberated from their confinement in a single consciousness and participate in the universal and timeless essence of humanity.
Spirit constitutes the third modal expression of the Encompassing which-we-are. Spirit signifies the appetency toward totality, completeness, and wholeness. As such, it is oriented toward the truth of consciousness. It is attracted by the timeless and universal ideas that bring everything into clarity and connection. It seeks a unification of particular existence in such a way that every particular would be a member of a totality.
There is indeed a sense in which spirit expresses the synthesis of empirical existence and consciousness as such. However, this synthesis is never completed. It is always on the way, an incessant striving that is never finished. It is at this point that Jaspers’s understanding of spirit differs from that of Hegel. For Hegel, spirit drives beyond itself to its own completion, but not so for Jaspers. On one hand, spirit is oriented to the realm of ideas in which consciousness as such participates and is differentiated from simple empirical existence; on the other hand, spirit is contrasted with the abstraction of a timeless consciousness as such and expresses kinship with empirical existence. This kinship with empirical existence is its ineradicable temporality. It is a process of constant striving and ceaseless activity, struggling with itself, reaching ever beyond that which it is and has. Yet it differs from empirical existence in that empirical existence is unconsciously bound to its particularization in matter and life, by virtue of which it can become an object in a determinable horizon. As empirical existence, people are split off from each other and become objects of scientific investigation. Spirit overflows every objectivization and remains empirically unknowable. It is not capable of being investigated as a natural object. Although it always points to its basis in empirical existence, it also points to a power or dynamism that provides the impetus for its struggle toward meaning and totality.
It is through the Encompassing being-which-we-are that one has an approach to the Encompassing as being-in-itself. Being-in-itself never emerges independently as a substantive and knowable entity. It appears only in and through the being-which-we-are. In this appearance, it is disclosed as a limit expressing a twofold modification: the world and transcendence. The being-which-we-are has one of its limits in the experience of the world. The world in Jaspers’s philosophy signifies neither the totality of natural objects nor a spatiotemporal continuum in which these objects come to be. It signifies instead the horizon of inexhaustible appearances that present themselves to inquiry. This horizon is always receding, and it manifests itself only indirectly in the appearances of particular and empirical existence. It is never fully disclosed in any one of its perspectives and remains indeterminate for all empirical investigation. The Encompassing being-which-we-are has its other limit in transcendence. Transcendence is that mode of being-in-itself that remains hidden from all phenomenal experience. It does not even manifest itself indirectly. It extends beyond the horizons of world orientation as such. It remains the completely unknowable and indefinable, existentially posited through a philosophical faith.
Last Updated on May 5, 2015, by eNotes Editorial. Word Count: 616
All the modes of the Encompassing have their original source in Existenz. Existenz is not itself a mode, but it carries the meaning of every mode. It is the animation and the ground of all modes of the Encompassing. Thus, only in turning one’s attention to Existenz can one reach the pivotal point in Jaspers’s philosophizing. In Existenz, one reaches the abyss or the dark ground of selfhood. Existenz contains within itself an element of the irrational and thus never becomes fully transparent to consciousness as such. Consciousness is always structurally related to the universal ideas, but Existenz can never be grasped through an idea. It never becomes fully intelligible because it is the object of no science. Existenz can only be approached through concrete elucidations—hence, Jaspers’s program of Existenzerhellung. Existenz is the possibility of decision, which has its origin in time and apprehends itself only within its temporality. It escapes from every idea of consciousness as well as from the attempt of spirit to render it into an expression of a totality or a part of a whole. Existenz is the individual as historicity. It determines the individual in one’s unique past and unique future. Always moving into a future, the individual, as Existenz, is burdened with the responsibilities of decisions. This fact constitutes one’s historicity. Existenz is irreplaceable. The concrete movements within one’s historicity, which always call one to decision, disclose one in one’s unique individuality and personal idiosyncrasy. One is never a simple individual empirical existent that can be reduced to a specimen or an instance of a class; one is unique and irreplaceable. Finally, Existenz, as it knows itself before transcendence, reveals itself as freedom. Existenz is possibility, which means freedom. Humanity is that which one can become in one’s freedom.
As the modes of the Encompassing have their roots in Existenz, so they have their bond in Reason. Reason is the bond that internally unites the modes and keeps them from falling into an unrelated plurality. Thus Reason and Existenz are the great poles of being, permeating all the modes but not coming to rest in any one of them. Jaspers cautions the reader against a possible falsification of the meaning of Reason as it is used in his elucidation of Existenz. Reason is not to be construed as simple, clear, objective thinking (Verstand). Understood in this sense, Reason would be indistinguishable from consciousness as such. Reason, as the term is used by Jaspers, is closer to the Kantian meaning of Vernunft. It is the preeminence of thought that includes more than mere thinking. It not only includes a grasp of what is universally valid (ens rationis) but also touches upon and reveals the nonrational, bringing to light its existential significance. It always pushes toward unity, the universal, law, and order but at the same time remains within the possibility of Existenz. Reason and Existenz are thus inseparable. Each disappears when the other disappears. Reason without Existenz is hollow and culminates in an empty intellectualism. Existenz without Reason is blind, incessant impulse and irrational striving. Reason and Existenz are friends rather than enemies. Each is determined through the other. They mutually develop each other and through this development find both clarity and reality. In this interdependence of Reason and Existenz is an expression of the polar union of the Apollonian and the Dionysian. The Apollonian, or the structural principle, dissolves into a simple intellectual movement of consciousness, a dialectical movement of spirit, when it loses the Dionysian or dynamic principle. Conversely, the Dionysian passes over into irrational passion that burns to its own destruction when it loses its bond with the Apollonian.
Last Updated on May 5, 2015, by eNotes Editorial. Word Count: 760
The reality of communication provides another dominant thesis in the philosophy of Jaspers. Philosophical truth, which discloses Existenz as the ground of the modes and Reason as their bond, can be grasped only in historical communication. The possibility of communication follows from the ineradicable communality of humanity. No one achieves humanity in isolation. People exist only in and through others and come to an apprehension of the truth of their Existenz through interdependent and mutual communal understanding. Truth cannot be separated from communicability. However, the truth that is expressed in communication is not simple; there are as many senses of truth as there are modes of the Encompassing being-which-we-are. In the community of one’s empirical existence, it is the pragmatic conception of truth that is valid. Empirical reality knows no absolutes that have a timeless validity. Truth in this mode is relative and changing, because empirical existence itself is in a constant process of change. That which is empirically true today may be empirically wrong tomorrow because of a new situation into which one will have passed. All empirical truth is dependent upon the context of the situation and one’s own standpoint within the situation.
As the situation perpetually changes, so does truth. At every moment, the truth of one’s standpoint is in danger of being refuted by the very fact of process. The truth in the communication of consciousness as such is logical consistency and cogent evidence. By means of logical categories, one affirms and denies that which is valid for everyone. Whereas in empirical reality, truth is relative and changing because of the multiple fractures of particulars with one another in their time-bound existence, in consciousness as such there is a self-identical consciousness that provides the condition for universally valid truths. The communication of spirit demands participation in a communal substance. Spirit has meaning only in relation to the whole of which it is a part. Communication is thus the communication of a member with its organism. Although each spirit differs from every other spirit, there is a common agreement as concerns the order that comprehends them. Communication occurs only through the acknowledgment of their common commitment to this order. Truth in the community of spirit is thus total commitment or full conviction. Pragmatic meaning, logical intelligibility, and full conviction are the three senses of truth expressed in the Encompassing being-which-we-are.
However, there is also the will to communicate Reason and Existenz. The communication of Existenz never proceeds independently of the communication in the three modes of the Encompassing being-which-we-are. Existenz retains its membership in the mode of empirical existence, consciousness as such, and spirit; but it passes beyond them in a “loving struggle” (liebender Kampf) to communicate the innermost meaning of its being. The communication of Existenz is not that of relative and changing particulars, nor is it that of an identical and replaceable consciousness. Existential communication is communication between irreplaceable persons. The community of Existenz is also contrasted with the spiritual community. Spirit seeks security in a comprehensive group substance. Existenz recognizes the irremovable fracture in being, accepts the inevitability of struggle, and strives to open itself for transcendence. Only through these movements does Existenz apprehend its irreplaceable and essentially unrepeatable selfhood and bind itself to the historical community of selves who share the same irreplaceable determinants. It is in existential communication that the self first comes to a full consciousness of itself as a being qualified by historicity, uniqueness, freedom, and communality.
Reason plays a most important role in existential communication. Reason as the bond of the various modes of the Encompassing strives for a unity in communication. However, its function is primarily negative. It discloses the limits of communication in each of the modes and checks the absolutization of any particular mode as the full expression of Being. When empirical existence is absolutized, the essence of humanity is lost; one is reduced to an instance of matter and biological life, and one’s essence becomes identified with knowable regularities. One is comprehended not in one’s humanity, but in one’s simple animality. The absolutization of consciousness as such results in an empty intellectualism. One’s empirical reality is dissolved into timeless truths, and the life of the spirit remains unacknowledged. When spirit becomes a self-sufficient mode, the result is a wooden culture in which all intellection and creativity are sacrificed to a communal substance. None of the modes is sufficient by itself. Each demands the other. Reason provides the internal bond through which their mutual dependence can be harmoniously maintained.
Last Updated on May 5, 2015, by eNotes Editorial. Word Count: 175
For Jaspers, the truth of Reason is philosophical logic; the truth of Existenz is philosophical faith. Philosophical logic and philosophical faith interpenetrate, as do Reason and Existenz themselves. Logic takes its impulse from Existenz, which it seeks to clarify. Philosophical logic is limited neither to traditional formal logic nor to mere methodology; it prevents any reduction of humanity to mere empirical existence or to a universal consciousness. Philosophical logic is negative in that it provides no new contents, but it is positive in establishing the conditions for every possible content. Philosophical faith, the truth of Existenz, confronts humanity with transcendence and discloses one’s freedom. Philosophical faith is contrasted with religious faith in that it acknowledges no absolute or final revelation in time. Transcendence discloses a constant openness in which humanity apprehends itself as an “inner act,” more precisely, an act of freedom. Faith is an acknowledgment of transcendence as the source of humanity’s freedom. The highest freedom that humanity can experience is the freedom that has its condition in a source outside itself.
Last Updated on May 5, 2015, by eNotes Editorial. Word Count: 281
Ehrlich, Leonard. Karl Jaspers: Philosophy as Faith. Amherst: University of Massachusetts Press, 1975. An analysis of Karl Jaspers’s understanding of philosophical thought as the expression of faith, in the underlying unity of the subjective and the objective, examining such key themes as the role of freedom and transcendence.
Kaufmann, Walter. From Shakespeare to Existentialism. Garden City, N.Y.: Anchor, 1960. This fine review of existentialism includes a chapter focused on Jaspers. Kaufmann’s penetrating scholarship dispels some misunderstandings of Jaspers and places him in the context of the existential movement with respect to Friedrich Nietzsche in particular, while sharply criticizing Jaspers’s own understanding of Nietzsche and Sigmund Freud.
Olson, Alan M., ed. Heidegger & Jaspers. Philadelphia: Temple University Press, 1994. The work of Heidegger and Jaspers is presented and studied.
Samay, Sebastian. Reason Revisited: The Philosophy of Karl Jaspers. Notre Dame, Ind.: University of Notre Dame Press, 1971. An examination of Jaspers’s philosophy, particularly with respect to the relations of subject and object, being and reason, and transcendence. Very detailed, but somewhat dated in his conclusions about the influence of Jaspers’s thought.
Schilpp, Paul, ed. The Philosophy of Karl Jaspers. Rev. ed. Chicago: Open Court Publishing, 1981. In addition to an autobiographical summary, this book offers commentaries by twenty-four prominent scholars who critically examine many diverse aspects of Jaspers’s work, such as death, guilt, suffering, communication, history, citizenship, religion, art, and psychopathology. They address their remarks directly to Jaspers, who replies. Bibliography.
Wallraff, Charles F. Karl Jaspers: An Introduction to his Philosophy. Princeton, N.J.: Princeton University Press, 1970. A fine introductory study of Jaspers’s life and thought, including a critical analysis of his terminology and a useful bibliography. | <urn:uuid:498b5205-4503-442c-87fb-8db54d83c8ad> | CC-MAIN-2022-33 | https://www.enotes.com/topics/reason-existenz | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00497.warc.gz | en | 0.943381 | 4,693 | 3.203125 | 3 |
When you use social media, who are you communicating with? And who else is paying attention? This chapter is about producing, consuming, and controlling online content. It’s also about the data, cultural norms, and terms of service that you create, accept, and influence.
Not “the public” – They’re publics, and they’re networked
Let’s go back to that ampitheater in Chapter 2. We envisioned an athlete on the ground, spewing insults about her opponent. (Yes, there were women athletes and gladiators in Ancient Rome.) I imagine the athlete shouting, “I say before the public that my opponent has the stench of a lowlife latrine!” And we have a mass of spectators roaring in approval, disapproval, excitement, laughter.
That mass of spectators is . The definition of a public is complicated (see danah boyd, It’s Complicated pp 8-9). But for simplicity’s sake I define a public as “people paying sustained attention to the same thing at the same time.”
When the gladiator calls the mass of spectators “the public” it deepens the effect of her insult to suggest that “everyone in the world” is watching. Although it is imaginary, “” is a powerful idea or “construct” that people refer to when they want to add emphasis to the effects of one-to-many speech. But really, there is no “the public.” There is never a moment when everyone in the world is paying sustained attention to the same thing at the same time. There only are various publics, overlapping each other, with one person potentially sharing in or with many different publics.
If you use social media, you interact with many publics that are connected to one another through you and likely through many others. Publics that intersect and connect online are “” (pg. 8.) In the terminology of social network analysis, whenever an individual connects two networked publics (or any two entities, such as two other people), that connector is called a . Think about the publics you form a bridge between. How are you uniquely placed to spread information across multiple publics by forming bridges between and among them?
Bridging information between publics can be exciting, and controversial. Networked publics really work each other up, forming opinions, practices, and norms together. And they occasionally get in fights in the stands, clubbing each other with ancient Roman hot dogs and Syrian tabouleh.
Social Media Based on Culture
Everyone has different perspectives and experiences on social media. These different perspectives and experiences mainly depend on the different traits that identify you and make you who you are. Such as being raised in a different place, having different cultures, different hobbies, and who you interact with on a daily basis.
I was born in Chile and recently just moved to the United States five years ago. Being from somewhere else really expands your knowledge on everything, since you are surrounded by specific communities. In the five years that I have been here I have noticed that individuals here have a different perspective about social media and several different topics.
I have had the amazing experience of being able to understand and put myself in someone else’s shoes when it comes to social media. The fact that I am foreign gives me the opportunity to look at social media in a different way. I grew up with technology since I was little. I got my first phone around 10 and then I was introduced to different platforms. What really surprised me when I moved to the United States is that people here were more attached to their phones and their instagram pages, or snapchat or other apps.
I grew up with people posting things on their social media that were simple, you could see that the person posting that did not put much thought into it and just wanted to show what their real interests were. Now individuals spend hours and hours checking what they are going to post concerned about what others will say. This makes me think that others take what others say under consideration too much instead of just being themselves.
My social media page differs on several things from those of my friends that I made here. Since I am Hispanic, I have different interests and also use other platforms more than others. For example, people here use snapchat a lot more than instagram, however I use instagram more because I am still connected to my friends and family that live all the way across the world and they do not use snapchat anymore.
Something else that I have noticed that social media here is different than from where I am from is that people cyber bullying here is a much bigger thing that in a Hispanic country. That is the reason why people worry too much about what they are posting and what others will think of them when they see who they really are and what their real interests are. However people should not be afraid of what others say and then they would have a better experience when it comes to technology. People could see all your real talents and maybe one day be recognized for that.
I think I have showed my friends here the difference in social media and they see how people in other countries relate to social media compared to here. Even though there are some differences on how people express themselves on the different platforms, most teenagers still are way too worried on what others think about them and that is something I have brought to everyone’s attention here.
Even though people have different backgrounds and perspectives on things when it comes to life and publishing it on social media, at the end of the day we are all trying to show others who we are.
About the Author
Sofia Diaz is a first year student at the University of Arizona. She spends her time walking and napping with her beloved dog, Boss.
Privacy Norms in Online Publics
It is important to understand networked publics because they help us understand that the dichotomy of private vs public is an oversimplification of social relationships. When you post on social media, even if you post “publicly,” you probably envision certain people or publics as your audience.
Controlling the privacy of social media posts is much more complex than controlling the privacy of offline communication. On social media, as boyd notes, what you post is public by default, private by design (It’s Complicated, p. 61). Face-to-face, you can generally see who is paying attention and choose whether to speak to them, making your communications – note that is flipped from how it is on social media. While popular media claim younger generations do not care about privacy, there is a great deal of evidence that youth care a lot about privacy and are developing norms to strategically protect it.
Norms take time. There are norms that societies have developed over many centuries of face-to-face communication. These offline norms have long helped members of these societies get along with each other, and negotiate and protect their privacy. Let’s study one of these offline norms: civil inattention.
It’s time to imagine an awkward face-to-face scenario, together. You’re in an eatery, which is bustling with people. You’re engaged in a conversation with two friends – and suddenly a passing stranger stops to lean over you and tries to join in your conversation. Another person from the next table over is also blatantly staring at you and your friends talking. You weren’t even talking to these people, and now they’re in your business!
That scenario is unlikely to happen in real life, because of a social norm sociologist Erving Goffman named . In crowded spaces, civil inattention is the common understanding – by you and by others in that society – that you don’t get in other people’s business. You may acknowledge that you are sharing the space with them through small interactions, such as holding the door for the person behind you, making eye contact, and nodding or smiling. But you don’t stare, or listen in, or join in without an invitation.
So is civil inattention also an online norm? Well, that may depend on who we are and which publics we interact with online.
The online world is young, and norms in our networked publics are still being decided. Online norms are also dynamic, which means they are based on a changing set of deciders, including software developers and evolving publics of users. It could be that the most effective forms of privacy protection online will be based on social and cultural norms as we develop these.
But once we figure out what works in the online world in terms of privacy, we will have to articulate it – and then fight for it, because our data is immensely profitable for developers of the platforms we use.
Ibrahim Sadi’s Story
My knowledge and understanding of social media are much more different than a lot of people I know and my friends. Growing up as a kid I always wanted to have a social media platform, but when you get older you realize the beneficial things and negative things that could happen to yourself being on social media. For a younger kid like I was, I had Facebook at a young age, I’m sure many kids did as well. Being an Arab and coming from Jordan makes me much different than most people I know, especially on social media.
There have been times on social media, people have tried to put me down for being Arab or making disrespectful comments to me on a social media platform as well. People do this because they think they’re funny but the person being made fun of is being bullied, it’s hard sticking up for yourself when 20 other kids are laughing at you, and you’re the only person that you have. The good for me on social media was talking with my family, sharing cool memories with good friends, and getting jobs off social media as well!
What makes social media unique for me is the interests I have and bringing my family more business as well. My family has a local business and during a time like this, it’s very hard to make money as a local owner because of the business loss during COVID. Without having social media, I wouldn’t have been able to get extra customers to help support my Fathers local business, I wouldn’t have been able to get more people to apply to father business either. The interests I have for social media could be all kinds of things, like watching UFC which is my favorite hobby to do when I have nothing better to do. Learning cool recipes to cook for my family and me, watching all kinds of national sports like football, basketball, and soccer.
Another reason why using social media so unique for me is because of Job opportunities, without social media I wouldn’t have the job I have today. Being able to “share”-this is my GL it wouldn’t work by trying to make it a GL term. your interest in jobs and share your thoughts through social media to your friends and family is also why social media so unique Job opportunities are so important for our generation especially because everything nowadays is almost based on technology. For example, students right now are going through a pandemic we have never been in and we are using the app “zoom” to do basic home school.
About the Author
Ibrahim Sadi is a second year student at the University of Arizona.
When publics fixate, attack, troll, and bully
The term received a great deal of attention as the internet reached widespread adoption, and it is entangled moral panics that caused and used it. As parents and educators in the early 2000s struggled to recognize the longstanding issue of bullying in online discourse, they sometimes conflated bullying with all online interaction. Meanwhile, many of the cases the media labeled cyberbullying are not actually , which is a real phenomenon with specific criteria: aggressive behavior, imbalance of power, repeated over time. (These criteria were laid out by Swedish psychologist Dan Owleus; an excellent analysis of cyberbullying in the context of these is in boyd’s fifth chapter of It’s Complicated.)
Still, some online interactions are toxic with cruelty, whether or not we can scientifically see them as bullying. Another term in popular use to describe online attacks is trolling, perhaps derived from the frequent placement of trolls’ comments below the content, like fairytale trolls lurking below bridges.
- Individuals troll. Some seem to lash out individually from personal loneliness or trauma, as with a Twitter troll to whom celebrity Sarah Silverman recently responded with surprising compassion.
- Mobs also troll. A distinctly frightening modern scourge is when critical networked publics and trolls attack in a coordinated effort, or mob. More visible examples of online troll mobs include hateful vitriol directed at a 13-year-old musician’s Youtube explorations, at a black actress in a sequel to a white male film, and at a columnist who is proud to call herself fat – but trolls attack less visible people incessantly as well.
- Not all are affected equally by trolling. While attacks do plague some men online – and specifically men of color – online hatred is directed more often and more viciously at women. Women of color are particularly vulnerable. Many online spaces with widespread usership such as Reddit have cultures of sexism and bigotry – and while there is evidence of efforts to combat toxic online cultures, many of these sites have a long way to go.
John Suler wrote in the early days of the internet about the , exploring the psychology behind behaviors that people engage in online but not in person; he noted while some disinhibition is benign, much of it is toxic. More recent research connects online trolling to narcissism. As we perform before online publics, we enter an arena of unleashed and invisible audiences.
Why privacy is such a tangled issue online
is a notion relating to self-determination that is too complicated to be reduced to one simple idea. Privacy can be defined in many ways – and so can invasion of privacy and its potential consequences. This is one of the reasons software companies’ Terms of Service or TOS are never adequate protections for users of their services. How do we demand protection of privacy when it is so multilayered and impossible to define?
Consider these two passages by Daniel Solove in his article, “Why Privacy Matters Even if you have Nothing to Hide.”
Privacy… is too complex a concept to be reduced to a singular essence. It is a plurality of different things that do not share any one element but nevertheless bear a resemblance to one another. For example, privacy can be invaded by the disclosure of your deepest secrets. It might also be invaded if you’re watched by a peeping Tom, even if no secrets are ever revealed.
Privacy, in other words, involves so many things that it is impossible to reduce them all to one simple idea. And we need not do so.
I agree with Solove that privacy is too complicated to be reduced to one simple idea. But often we are still called on to present a simplified definition of our privacy – for example, we have to justify why it is wrong to give companies such rampant uses of our data.
Samantha Clayton’s Social Media Experience
Social media has become a very popular place that people go on for a variety of reasons. Whether it be a reason to go on for the latest gossip, daily news, to post your fresh new haircut, a good laugh, or even to get up to date on the latest trends. Although, in my opinion, I’m terrified of social media and I really don’t believe a lot of people are. When I first joined the world of social media, I was endlessly tweeting random ideas I had, silly pictures of my friends, and too many memes. Even though I wasn’t tweeting anything to personally attack or offend anyone I had learned, the more I used social media, and the older I got, it’s a huge risk to be active on a social media account nowadays. People will find absolutely anything to be offended by and you will never hear the end of it if you do offend someone in any possible way.
I like to call myself an observer. I hardly post on social media, but I actively use it. I don’t necessarily like, share someone’s post, retweet, comment, etc. on any posts on social media platform, I just sit back and watch. I feel as if it’s better that way because people are constantly looking for a fight on social media. I can’t lie, I do post on social media but it’s a rare occurrence when I do. What I do post and what I only will post is photos of me/friends/family with no caption or an emoji as a caption and use my platform to spread awareness or touch base on something serious like the Black Lives Matter Movement, justice for George Floyd, Breonna Taylor, and the other Black lives that have been taken away by the police.
My perspective on social media is literal fear. People I don’t even know are in my direct messages constantly. It ranges from people saying they know where I live (basically blackmail), old men asking me to have sex with them for money, and hackers trying to get me to share my passwords. These occurrences have also made me afraid to post on social media, but the block button has been my best friend and has solved a lot of these problems I have faced on social media. I’m not sure if any other women, or even men have experienced this issue or if the people attacking me in my direct messages are even real. I like to think it’s just a robot of some sort but I’m still reading those scary words when I go on any of my social media accounts. (It mostly happens on Instagram). Social media should never have to be a place where people are afraid to go on to speak their mind (only if it isn’t offensive or bullying), be able to share photos without having some predator after you, etc. Hence exactly why I like to call myself an observer due to the many problems I have faced just by going on different social media apps.
About the Author
Samantha Clayton is a sister, future teacher, activist, women’s rights advocate, and she love cats!
The value of human data
We are learning the hard way that we must fight for our privacy online. As an early leader in the social media platform market, Facebook set very poor standards for the protection of user privacy because access to personally identifiable user data was immensely profitable for the company. Before Facebook, it was standard for users of online sites to use avatars and craft usernames that didn’t connect to details of their offline lives.
Still, countless online sites permit or encourage users to create online identities apart from their face-to-face identities. Many of today’s younger internet users choose platforms with higher standards for privacy, limiting the publics that their posts reach and the periods of time that posts last. Youth frequently have “finsta” accounts – “fake” Instagrams that they share with nosy family and acquaintances, while only good friends and in-the-know publics have access to their “real” Instagrams. Practices like these force developers to offer users more control over user privacy and the reach of their posts, at the risk of losing users to competitors.
Users shape platforms and platforms shape user behavior. And social and cultural norms shape both user behavior and software platforms.
people paying sustained attention to the same thing at the same time
a construct; an idea of "everyone, everywhere" that people imagine, and refer to when they want to add emphasis to the effects of one-to-many speech
a term danah boyd uses in her book It's Complicated, these are sets of people paying sustained attention to the same thing at the same time that intersect and connect online
In the terminology of social network analysis, whenever an individual connects two networked publics (or any two entities, such as two other people), that connector is called a bridge.
Sociologist Erving Goffman's term for the common understanding in crowded spaces that you don’t may politely acknowledge others, but you do not get in their business
a term entangled in moral panics that caused and used it as parents and educators in the early 2000s struggled to recognize the longstanding issue of bullying in online discourse
a real phenomenon with specific criteria: aggressive behavior, imbalance of power, repeated over time. Defined by Dan Olweus.
The psychology theory finding and predicting that people behave online in ways they would not in person. For more information see Suler, J. (2004). The Online Disinhibition Effect. Cyberpsychology & behavior : the impact of the Internet, multimedia and virtual reality on behavior and society, 7 3, 321-6 .
a notion relating to self-determination that is too complicated to be reduced to one simple idea
a phrase used by danah boyd to emphasize the work required to controlling the privacy of social media posts - the opposite of face to face communication, which is private by default, public by design. (It's Complicated, p. 61.)
based on a changing set of deciders. An examples the way online norms are based on changing deciders including software developers and the evolving practices of publics of users.
a collective attack built upon the practice of using social media to call people out for perceived wrongs | <urn:uuid:5e946fb0-7ba6-404c-a75a-18c447ae809d> | CC-MAIN-2022-33 | https://opentextbooks.library.arizona.edu/hrsm/chapter/privacy-and-publics/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00296.warc.gz | en | 0.963031 | 4,493 | 2.984375 | 3 |
Tuesday, November 20, 2018
With Virginia seeking to become the third state in recent years to ratify the ERA, the discussion over passing the amendment has reignited. An excerpt from my recent book chapter places this development in legal historical context:
See Tracy Thomas & TJ Boisseau, After Suffrage Comes Equal Rights? ERA as the Next Logical Step, in 100 Years of the Nineteenth Amendment: An Appraisal of Women’s Political Activism (Holly McCammon & Lee Ann Banaszak eds.) (Oxford Univ. Press 2018) (tracing the complete legal and political history of the ERA from 1921).
The National Organization for Women (NOW), newly formed in 1966 by Betty Friedan and Pauli Murray, pressed for full enforcement of the new Title VII and for actualization of its mandate of equality in employment (Fry 1986). By 1970, federal courts, the Department of Labor, and the EEOC all interpreted Title VII as invalidating women-specific rules, including protective labor legislation, and, more importantly, as requiring extension of any protections like minimum wages to men rather than eliminating them for women (Mansbridge 1986). Union and social feminist opposition to the ERA finally began to wane, with the long-standing concern over worker protection laws now addressed (Mayeri 2004).
NOW quickly prioritized the ERA. The 1960s had seen few litigation successes with the judicial approach, and legal activists believed they needed the political leverage, if not the substantive right, of an equality amendment campaign (Mayeri 2004). NOW formally adopted the ERA at its conference in 1967. It rejected Pauli Murray's alternative proposal for a human rights amendment that would have more broadly granted a "right to equal treatment without differentiation based on sex," potentially encompassing sexual orientation and explicitly addressing private action and reproductive rights (Mayeri 2004: 787). Long-standing ERA proponents, now much older, adamantly opposed any change in the wording of the ERA that might broaden it to more radical agendas, fearing it would jeopardize existing support. This had the effect of reducing feminist demands to "their lowest common denominator" rather than pursuing a wider social justice agenda (Mayeri 2004: 785). Pursuing a constitutional amendment, however, did not mean abandoning the Fourteenth Amendment litigation. By 1970 "most legal feminists had reached a consensus that the constitutional change they sought could and should be pursued simultaneously through the dual strategy" of amendment and litigation (Mayeri 2004: 800).
In early 1970 the Pittsburgh chapter of NOW used direct action to support its demand for the ERA, disrupting a hearing of the US Senate Subcommittee on Constitutional Amendments on another proposed amendment, with protesters demanding hearings on the long-proposed ERA (Mansbridge 1986; Mathews and De Hart 1990). The Citizens' Advisory Council on the Status of Women petitioned President Richard Nixon to endorse the amendment, and for the first time the US Department of Labor supported the ERA. In May, the Senate subcommittee held hearings and referred the equality amendment positively to the Senate Judiciary Committee. There Senator Samuel Ervin Jr. (D-NC), a states' rights opponent of the civil rights laws, and later of Watergate hearings fame, "became the amendment's chief antagonist" (Mathews and De Hart 1990: 36). He opposed the ERA because of its threat to social norms, concerned about losing the traditional physiological and functional differences of gender to what he characterized as a passing fad. He attacked "militant women who back this amendment," saying "they want to take rights away from their sisters" and pass laws "to make men and women exactly alike" (Mathews and De Hart 1990: 37–39). Ervin moved the debate beyond the abstract principles of equality to concerns with specific effects of gender equality, including the draft, divorce, family, privacy, and homosexuality. Harvard Law professor Paul Freund also testified about the "parade of horribles the ERA might produce, including the legalization of same-sex marriage, the abolition of husbands' duty of familial support, unisex bathrooms, and women in military combat" (Mayeri 2004: 808). The opposition succeeded, and the bill failed in the Senate (Mansbridge 1986).
Meanwhile, the ERA passed in the House. Martha Griffiths used the rare procedural move of a discharge petition to "pry the ERA out of the House Judiciary Committee," where it had languished for years while the liberal chair, Emanuel Celler (D-NY), "kept it in his bottom drawer" because of the persistent opposition by labor (Mansbridge 1986: 13). After only an hour's debate, the House passed the ERA by a vote of 350-15 on August 10, 1970. When the Senate failed to pass the bill, it was reintroduced the next year, and the House passed the ERA for a second time on October 12, 1971, by a vote of 354-23. This time the Senate passed the ERA on March 22, 1972, by a vote of 84-8, with a seven-year timeline for the required three-fourths of the states to ratify the amendment (Mansbridge 1986). States initially rushed to ratify the ERA. Hawaii was the first state to ratify the amendment, twenty-five minutes after the Senate vote. The next day, three states ratified, and two more the following day. By early 1973, less than one year after Congress's passage, twenty-four states had ratified, most unanimously or with quick hearings and debate.
This trajectory halted in 1973 with the Supreme Court's decision in Roe v. Wade finding a woman's constitutional right to choose abortion. Roe stopped the advancing ratifications, shifted the public discourse, and reversed previous Republican support (Ziegler 2015). "The battle against the ERA was one of the first in which the New Right used 'women's issues' to forge a coalition of the traditional Radical Right," of those concerned with "national defense and the Communist menace" (Mansbridge 1986: 5), and religious evangelicals to activate a previously apolitical segment of the working and middle classes that "was deeply disturbed by cultural changes" (Mansbridge 1986: 16). Through these groups, the ERA became linked with abortion as both were sponsored by radical "women's libbers" who were a threat to traditional women and family values. The debate became framed as women versus women.
The face of women's opposition to the ERA was conservative activist Phyllis Schlafly and her STOP ERA (Stop Taking Our Privileges) organization (Berry 1988; Neuwirth 2015). Schlafly, a mother of six children, offered herself to the anti-ERA movement as a voice for stay-at-home mothers in need of special privileges and protections under the law. The irony that she, much like all the most prominent reformers historically lining up on either side of the ERA (such as Alice Paul, Florence Kelley, and Pauli Murray), held a law degree and enjoyed a flourishing decades-long career in the public eye, was utterly elided in her rhetoric. Doggedly focused on women's roles as mothers and homemakers, Schlafly trumpeted the cause of women's difference from men—championing the special rights of women as citizens who, ideally, did not work outside the home. She asserted that equality was a step back for women: "Why should we lower ourselves to 'Equal Rights' when we already have the status of 'special privilege'?" (Wohl 1974: 56). She and other ERA opponents reframed the issue as forcing women into dangerous combat, coeducational dormitories, and unisex bathrooms. Feminist advocates responded by clarifying that privacy rights protected concerns about personal living spaces in residences and bathrooms, but their counsel was unheard in the din of threat to traditional family and gender roles. Opponents equated the ERA with homosexuality and gay marriage, as the amendment's words "on account of sex" were "joined with 'sexual preference' or homosexuality to evoke loathing, fear, and anger at the grotesque perversion of masculine responsibility represented by the women's movement" (DeHart-Mathews and Mathews 1986: 49). Schlafly hurled insults at ERA supporters, urging her readers to view photographs of an ERA rally and "see for yourself the unkempt, the lesbians, the radicals, the socialists," and other activists she labeled militant, arrogant, aggressive, hysterical, and bitter (Carroll 1986: 65). When ERA supporters "gathered at the federally financed 1977 International Women's Year Conference in Houston and endorsed homosexual rights and other controversial resolutions on national television, they helped to make the case for ERA opponents" (Berry 1988: 86).
The shift in debate slowed and then stopped ratification of the ERA. In 1974, three states ratified the amendment, one state ratified in 1975 and one more in 1977, leaving the campaign stalled at thirty-five of the thirty-eight states required (Mansbridge 1986). At the same time, states began to rescind their prior ratifications, with five states voting to withdraw their approval (Neuwirth 2015). The legality of the rescissions was unclear, but these efforts had political reverberations in the unratified states (Mansbridge 1986). When the deadline arrived without the required three-fourths approval, Congress voted in 1978 to extend the ratification deadline three years to June 30, 1982. Not a single additional state voted to ratify during this extension (Berry 1988). In 1980, the same year President Jimmy Carter proposed registering women for the draft, the Republican Party dropped the ERA from its platform and newly elected President Ronald Reagan came out in opposition to the ERA. Businesses, manufacturers, and insurance companies all increasingly opposed the amendment (Burroughs 2015). ERA supporters escalated with more militant demonstrations of hunger strikes and marches. They chained themselves to the White House fence and the gates of Republican National Committee headquarters and trespassed on the White House and governors' lawns. But such protests had little effect, proving counterproductive as they alienated Republican sponsors and reinforced portrayals of the radicalness of the proposed amendment (Carroll 1986). Despite the extension, the ERA was defeated on June 30, 1982, three states short of the required super-majority of states. Congress immediately reintroduced the amendment, holding hearings in late 1983. The floor vote of 278-147 in the House came six votes short of the two-thirds needed for passage. Despite how close this generation of campaigners had come to achieving their goal, for most, the ERA was now dead (Farrell 1983; Mayeri 2009).
The broader goals of the ERA, however, were not dead or abandoned. All through the previous decade, legal feminists led by the ACLU and Ruth Bader Ginsburg had been pursuing the second front of litigation and doing so with some success. In 1971, the Supreme Court struck down a law for the first time as arbitrary sex discrimination under the Fourteenth Amendment. In Reed v. Reed (1971), the high court overturned a state law that presumptively made a father, and not a mother, the administrator for a deceased child’s estate. Two years later in Frontiero v. Richardson (1973), a plurality of the Court applied heightened scrutiny to strike down a law automatically granting military benefits to wives, but requiring military husbands to show dependency. The pros and cons of the dual constitutional strategy played out in Frontiero. The Court’s plurality endorsed strict scrutiny for sex-based classifications because of congressional passage of the ERA, thus harmonizing the two. But the concurrence held that the pendency of legislation weighed against judicial decision, and required waiting for the final outcome of the constitutional process. In 1976 a majority of the Court definitively applied equal protection to sex discrimination in Craig v. Boren (1976), adopting, however, only an intermediate judicial scrutiny, one more permissive than that for race. As Mayeri (2004: 826) notes, “This Goldilocks solution” in Craig captured the “Court’s ambivalence about both the procedural and the substantive aspects of a revolution in gender roles.” The ambivalence is apparent in that while striking down the law in Craig denying young men equal access to 3.2% beer, the Court upheld other discriminatory laws, like veterans’ preferences for men, statutory rape for minor women, and military pensions for men (Schlesinger v. Ballard 1975; Kahn v. Shevin 1974; Geduldig v. Aiello 1974). Equal protection proved an imperfect solution, and easily manipulable in the hands of the Court. For many activists, this indicated that perhaps an equal rights amendment was needed after all.
In the 1980s, at the time of the ERA's defeat, polling found that a majority of the electorate remained in support of the amendment (Businessweek 1983; Gallup Report 1981; Mansbridge 1986). According to Pleck (1986: 107–108), "In the midst of a national conservative tide, popular support for the ERA was very strong." Most national leaders, political conservatives, and "major national organizations from the American Bar Association to the Girl Scouts had gone on record in favor of it" (Pleck 1986: 108). Then why did the ERA fail? Scholars and activists have searched for possible explanations. Some suggest that a rushed political process failed to build the necessary state consensus on women's rights to match the federal consensus, along with inadequate state organizational structure to secure ratification, outdated campaign tactics and failure to use mass media, and lack of legislative prioritization (Berry 1988; Carroll 1986; Mansbridge 1986; Mayo and Frye 1986; Pleck 1986; Steinem 1984). Other scholars point to deep substantive disagreements about women in military combat and revolutionary changes in traditional motherhood, which threaten women personally as they perceive a danger to themselves and their daughters (DeHart-Mathews and Mathews 1986). Berry (1988: 85) notes that "equality may have seemed simple to proratificationists, but to others it meant sexual permissiveness, the pill, abortion, living in communes, draft dodgers, unisex men who refused to be men, and women who refused to be women. . . . And a fear that men would feel freer to abandon family responsibilities and nothing would be gained in exchange." Legal scholar Catharine MacKinnon (1987: 770) thought the ERA failed because it did not go far enough and did not more radically "mobilize women's pain and suppressed discontent" derived from systemic, social realities of male supremacy. And still others questioned the need for an equal rights amendment, given intervening Supreme Court decisions extending equal protection to women and federal legislation like Title VII and Title IX of the Education Amendments (Mansbridge 1986; Mayeri 2004).
Congress continued to reintroduce the Equal Rights Amendment every year after its defeat, but it went nowhere. Glimmers of action appeared in 2007 when a bipartisan group of lawmakers rechristened the amendment the “Women’s Equality Amendment” (Mayeri 2009: 1224) and in 2013 when Representative Carolyn Maloney (D-NY) proposed new language for an equality amendment to make the equality abstraction more concrete: “Women shall have equal rights in the United States and every place subject to its jurisdiction.” But the time and urgency for an equal rights amendment seemed to have passed. If ERA was not politically dead, it was at least comatose (MacKinnon 1987).
Conclusion: Equal Rights One Hundred Years after Suffrage
In 2014 a new ERA Coalition of major women’s rights organizations formed, fueled by a new generation of young people outraged at continuing inequality and energized to action (Neuwirth 2015). The year brought renewed grassroots interest in the ERA, sparking popular reconsideration of an equality amendment endorsed by celebrities like Meryl Streep and feminist icon Gloria Steinem (Babbington 2015). Justice Ruth Bader Ginsburg publicly called for the ERA to ensure future generations that women’s equality is “a basic principle of our society,” just as she had thirty-five years earlier (Schwab 2014). Even legal feminist scholar Catharine MacKinnon (2014: 569), previously opposed to the ERA as a weak, formalistic attempt at equality, now believed that an ERA is “urgently needed, now as much as or more than ever.” Surveys have shown over the last decade that most voters, as high as 96%, support equality for women, and 91% believe equality should be guaranteed by the Constitution (Neuwirth 2015), indicating perhaps a gendered cultural opportunity for change (McCammon et al. 2001). However, these surveys also show that 72% of people believe, incorrectly, that such rights are already included in the Constitution.
The ERA Coalition believes the time is ripe again for an equal rights amendment, given the next generation's interest and recent political activity (Neuwirth 2015). In 2014 Oregon passed a state ERA referendum with 64% of the vote. Illinois and Virginia also passed state ERA laws, two states that had not previously ratified the federal ERA. Federal ERA proponents advocate a "three-states-more" strategy, which assumes the continued validity of the prior ratifications and seeks ratification by the required three additional states. One state, Nevada, ratified the ERA in March 2017. This extended ratification strategy is supported by the delayed ratification of the Twenty-Seventh Amendment (which provides that congressional pay changes cannot take effect until after the next election), which was sent to the states for ratification in 1789 but not ratified until 1992, when the final states joined (Burroughs 2015).
A key question is whether women legally need the ERA, or whether its goals of general equality and specific rights have effectively been accomplished through other means. The virtually unanimous consensus of legal scholars is that the ERA's goals have been effectively achieved through the Supreme Court's equal protection jurisprudence (Mayeri 2009; Siegel 2006). Courts now review gendered state action under intermediate scrutiny, requiring that any laws treating women differently be justified by important governmental interests and that the laws be closely tailored to those interests (United States v. Virginia 1996; Mississippi University for Women v. Hogan 1982). Other scholars, however, have emphasized the limitations of equal protection analysis for sex equality (Brown et al. 1971; MacKinnon 2014; Mansbridge 1986). For gender discrimination cases under equal protection, the Court utilizes a lower standard of intermediate scrutiny, rather than the strict scrutiny used in race and religious discrimination cases. This lower standard tolerates many of the continuing instances of less overt sex discrimination and laws that have discriminatory effect rather than textual prohibitions on gender (Siegel 2002). The equal protection approach is also limited because it requires proof of intent—defendants thinking bad thoughts about women—which, MacKinnon (2014: 572) notes, "doesn't address how discrimination mostly operates in the real world," where "the vast majority of sex inequality is produced by structural and systemic and unconscious practices" inherited from centuries of gender hierarchy. Equal protection law's formal classification structure, she explains, which rigidly treats only exactly similar things the same, is incapable of assessing the ways in which people "can be different from one another yet still be equals, entitled to be treated equally" or where affirmative diversity is needed to treat alike those who are different (MacKinnon 2014: 571).
Some scholars ( Schwab 2014; Hoff-Wilson 1986) also conclude that equality for women has essentially been achieved for women without the ERA because the specific substantive goals of the amendment were accomplished through a variety of federal legislation on specific issues as well as the parallel state constitutional amendments. Twenty-three states adopted mini-ERAs, and such amendments have helped strengthen women’s ability to challenge discriminatory laws in those states. Courts often interpret the state ERAs to require strict scrutiny, and two states mandate an even higher absolute standard that presumes any discriminatory law to be unconstitutional (Burroughs 2015; Wharton 2005). In addition, federal legislation has mandated equal employment and education in the Equal Pay Act of 1963, Title VII of the Civil Rights Act of 1964, Title IX of the Education Amendments of 1972, the Pregnancy Discrimination Act of 1978, and the Violence Against Women Act of 1994. Such piecemeal legislation, however, is subject to the political ebb and flow and can be rolled back, as the Violence Against Women Act was when the Supreme Court held in United States v. Morrison (2000) that Congress had no power to address civil remedies for domestic violence (MacKinnon 2014).
The renewed campaign for an equal rights amendment emphasizes the continued systemic harms to women of economic inequality, violence against women, and pregnancy discrimination and the limits of existing laws to address these concerns (MacKinnon 2014; Neuwirth 2015). Proponents of an equal rights amendment emphasize the need for a permanent constitutional guarantee to control an overarching legal and social principle of women’s equality. The United States, unlike the majority of other countries, has refused to incorporate such an express guarantee in its written constitution or adopt the international women’s bill of rights by ratifying the United Nations’ treaty (MacKinnon 2014; Neuwirth 2015). The absence of an express guarantee permits traditional literalists like Justice Antonin Scalia to opine, “Certainly the Constitution does not require discrimination on the basis of sex. The only issue is whether it prohibits it. It doesn’t” (California Lawyer 2011). The ERA offers a corrective to this thinking and the equivocal state of women’s rights under the law. It offers a textual guarantee of sex equality, an inspiration for public policy, and a powerful symbolic support of women’s equality in all social and legal venues (Ginsburg 2014; MacKinnon 2014).
The equality amendment fulfills the hope first envisioned by proponents of a suffrage amendment to fully integrate women into every aspect of the citizenry with full recognition of their humanity (Siegel 2002). Now, almost one hundred years later, perhaps the time is right. Or perhaps the time is right to embrace the larger social justice legacy of the women’s equality movement and expand the amendment to all human rights to include aspects of sexual orientation discrimination and reproductive rights. These broaden the concept of sex discrimination to encompass the ways in which gender is practiced and experienced in our society. Perhaps dovetailing with recent advances and political consensus in civil rights of same-sex marriage will give women’s equality the final push it needs to be enacted.
One federal court upheld the rescissions, but expiration of the ERA ratification deadline mooted the question before the Supreme Court could review the case. Idaho v. Freeman, 529 F. Supp. 1107 (1981), stayed, Jan. 25, 1982. The evidence against the legality of rescission is that states attempting to rescind their ratification of the Fourteenth Amendment were still included as enacting states (Berry 1988).
The strict scrutiny test requires that state laws based on race be justified with compelling interests that are narrowly tailored to necessary regulation, thus invalidating most laws based on race. Loving v. Virginia, 388 U.S. 1 (1967); McLaughlin v. Florida, 379 U.S. 184 (1964).
For Ginsburg’s early pro-ERA writings, see Ruth Bader Ginsburg, “The Fear of the ERA,” Washington Post, April 8, 1975: A21; Ruth B. Ginsburg and Kathleen W. Peratis, “Equal Rights for Women,” New York Times, December 31, 1975: 21; Ruth Bader Ginsburg, “Let’s Have E.R.A. as a Signal,” ABA Journal, January 1977: 70; Ruth Bader Ginsburg, “Sexual Equality under the Fourteenth and Equal Rights Amendment,” Washington University Law Review (1979): 161-178.
The United States is one of only seven countries that has not ratified the UN Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW), including Iran, Somalia, Sudan, South Sudan, Palau, and Tonga. The treaty was signed by President Carter in 1980, but failed to get the two-thirds congressional vote necessary for ratification (Neuwirth 2015). | <urn:uuid:0e538ca5-6ef4-4f11-a4aa-4150ea11666a> | CC-MAIN-2022-33 | https://lawprofessors.typepad.com/gender_law/2018/11/the-modern-legal-history-of-the-equal-rights-amendment.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00097.warc.gz | en | 0.940812 | 5,173 | 2.796875 | 3 |
Chapter 11 The Climate Crisis
For 11,000 years, the stable Holocene era has afforded humanity the playground to thrive. Now we have created the Anthropocene which is driven by human impacts on the planet. Carney does not go into any counter-arguments to say that the impact of human activity on the planet is correlated 1% with climate change or 99%…for him the data is conclusive. In the 1850s, the Industrial Revolution drove up global temperature averages by 0.07 degrees Celsius per decade. The planet’s average temperature is up by 1 degree Celsius since the 19th century.
Other climate changes noted
- The oceans are 30% more acidic since the Industrial Revolution;
- Sea levels have risen 20 centimeters in the last 100 years;
- The 5th mass extinction has shifted to the 6th with extinctions at a rate that is a hundred times higher than an average from millions of years;
- There has been a 70% drop in mammals, fish, birds, amphibians and reptiles since 1965, assuming evolution is not at play, although how do you define evolution?;
Now, market prices of assets are being impacted. Climate change is likely creating:
- a. a feedback loop of rising sea levels,
- b. massive human migration away from rising coastal sea levels,
- c. extreme weather events that are damaging insured property,
- d. More impaired assets on the balance sheets of companies,
- e. A reduction work productivity with the lethal heatwaves,
- f. Global conflict over scares resources,
- g. collapse of coral reefs destroying the livelihood of 500 Million people and ¼ of all biodiversity,
- h. Increased regime change,
- i. Increased citizen unrest,
- j. Increased spread of disease
What is the cause?
The UN’s Intergovernmental Panel on Climate Change (IPCC) has argued that there is a 95% chance that human activity is CAUSING the global warming / climate change. The release of GhGs (Greenhouse Gases) with the most problematic being CO2 which, during rapid industrialization and growth has meant that over 250 years, humans have burned ½ trillion tons of carbon. Trends suggest another ½ trillion could be released in the next 40 years…¾ of the warming impact of emissions is CO2, with the remainder being methane, nitrous oxide and fluorinated gases. Trees cannot carbon capture to rebalance. Temperature and CO2 emission move roughly together, therefore we know what the carbon budget ie. the amount of carbon dioxide that be released into the atmosphere before temperature thresholds are surpassed.
The planet as a system would accelerate into a dangerous feedback loop if average global temperatures go past 1.5 degrees Celsius. The IPCC predicts that if temperatures reach 2 degrees Celsius above pre-industrial levels then 1) sea levels could risk 10 centimeters, 2) ¼ of all people could experience severe heat waves, 3) coral reefs will die off almost completely… 4) permafrost could further unlock CO2 and methane accelerating the trends would blow the budget wide open.
Carney advocates a stabilization of temperatures at 1.5 degrees Celsius. above pre-industrial levels. To do that:
- Emissions have to fall by a minimum of 8 percent for the next 2 decades….
- We release about 42 +/- GigaTons of CO2 per year,
- Planetary budget is 420 GigaTons of CO2 remaining before we hit 1.5 degrees Celsius and 1500 GigaTons of CO2 before we hit 2 degrees Celsius..
Children born in 2021 will have to generate ⅛ the amount of CO2 emissions compared to baby boomers into order stay on carbon budget of 420 GigaTons of CO2 remaining before we hit 2 degrees Celcisu. We need to reduce excess carbon from:
- Industrial processes are 30%
- Buildings 18%
- Cars 17%
- Energy generation 17%
- Agriculture 10%
To reduce GhG, the solutions must
- Change how we create energy (fossil fuels must shift to renewables);
- Change energy usage (decarbonizing industrial processes, increased energy efficiency for buildings);
- Increase the carbon capture, use and storage (and maybe terra forming, although Carney doesn’t mention this)….
Basically, we need to convert the creation of all industrial process to electric and then shift the source of electric from fossil fuels to renewables. The first step may appear impossible considering the amount of energy needed to manufacturer most items in our homes, however that’s what has to happen. Bill Gates details the technologies needed in his book “How to Solve the Climate Crisis.”
Geography of Emissions
most pollution is from cities. By region its:
- China 28%,
- Asia – Other 16%
- USA 15%
- EU-28 10%
- India 7%,
- Russia 5%,
- Japan 4%,
- Europe – Other 3%,
- Africa 3%, Canada 2%, Australia 1%
The Consequence of Climate Change
How much do we value the future? The estimates of the costs of climate change and value of the sustainability contain many uncertainties that enable doubters. The GDP, employment and wage impacts are one way (a 25% reduction in GDP at the tipping point of 3 degree Celsius), the net present value of all future cashflows. What we really value such as the lives of species, livelihood adaptation, birth rate drop aren’t easily monetized.
How central bankers view climate change? There are two types of risk
- physical risks
- transition risks
Risk Type 1: Physical risks =
increased rate of climate and weather related events (storms, fires, floods). The underwriting risk shows that the entire livelihoods buckle as inflation adjusted losses have increased over 8x over the last few years.
- Insurers are in the front line of climate change, beach houses aren’t getting insured at the prices they were 10 years ago.
- The insurance sector is adjusting and pricing-in some climate change using various projected models, subject to re-writes like any other model…Carney feels that coupling “sophisticated forecasting, forward-looking capital regime and business models built around short-term coverage has left insurers relatively well placed to manage physical risks” (277, Value(s)).
- Carney argues that there areas of the economy that will need a public backstop because insurance companies will not insure those areas between $250 Billion and $500 Billion on the US coastal property by 2100.
- Insurers and reinsurers are expecting trouble, Lloyd’s of London has a 20cm assumption which coupled with a hurricane would cause Manhattan damage that is 30% more severe than Hurricane Sandy.
- Coastal flooding is projected to rise by 50% by the end of this century.
- Lethal heatwaves are projected to effect 1.2 billion people annually by 2050;
- The Network for Greening the Financial System (NGFS) is an 80 central bank strong group that have created representative scenarios to show climate risks may evolve affecting the real and financial economies:
- Hothouse earth shows that at 3 degrees Celsius, sea levels rise x cm and extreme weather events result in a 25% GDP loss by the end of the century.
Risk Type 2: Transitional Risk
The second category of costs of risk. The costs and opportunities are more apparent as the crsis worsens and impositions become more overarching:
There will be stranded assets
- Tropical deforestation of palm oil, soy, cattle and timber is for commercial use 70% of the time.
- Automotive industry that will, in Carney’s mind, be disrupted by electric vehicles, driverless vehicles and car-sharing services.
- Coal producers have gone bankrupt in the US.
- Demand and Supply Shocks: demand shocks affect consumption, investment, government spending and net exports in the GDP = C + I + G +(X – M). Demand shocks are short-term usually and therefore don’t effect the productivity of the economy. Supply shocks effect growth, the growth of labour supply, physical capital, human capital and natural capital and the degree of innovation in the economy. So the impact of climate change on GDP is very tough because the sample of prior shocks also contained policy adjustments …
Calculating the Impact of Climate Change on GDP
- Feedback loops amplify quickly and suddenly (ice melting off of the antartica rapidly) and the north pole, it’s dynamic and not inherently predictable even if there is no human variable in the atmosphere itself (all chemistry, geology and hard sciences):
- The relationship between GDP and temperature is not linear;
- Do physical climate events actually have a negative effect or simply impact growth (feedback loops and bad social impacts);
- The degree of adaptation and innovation to mitigate the impact of climate change could be much more significant (i.e humans turn a disaster into a strength leading to more prosperity due to new opportunities that are created).
- Factors like the mass climate refugees which could be over 200 million people, the poorest being dislocated.
- The 6th Mass Extinction: the biodiversity that provides natural capital from the Amazon to the coral reefs will be effected.
For Carney, it is a big deal that the CEO of Shell (Sir Mark Moody-Stuart) says that the probabilities of climate change’s negative impact on humanity is 75% and acknowledges that despite that uncertainty in predicting climate change, Moody-Stuart through the course of his career made larger strategic bets with much lower probabilities…
For Carney, climate change is a ‘tragedy of the horizon.’ The worst impacts are beyond the life-span of the decision-makers of today. The horizon is beyond: the business cycle, political cycle and central bank cycles.
- The horizon of central banks is 2 to 3 years.
- The horizon of financial stability is about 10 years.
- The horizon of political decision-making is about 4 years.
The benefits of mitigating greenhouse gases which stay in the atmosphere for centuries is massive, but for the people who don’t vote today, because they don’t exist yet. “Halving emissions over 30 years is easier than halving them in a decade.” (285, Value(s)) The welfare of future generations should not be discounted as heavily as the financial calculations typically demand.
The rate of adoption of new technology has three phases; 1) research and development, 2) mass adoption, 3) maturity. The rate of S-Curve over the years has been accelerating. James Watt who invested the stem and in 1769 did not see coal over take peatmoss until 120 years later in the 1900s (technically, Watt died before see that development). So, technology to tackle climate change is emerging at quicker paces. The S-Curve needs a nudge from the market as well as the public sources of capital investment.
Tragedy of the Commons
The original example is the unregulated grazing rights on the common lands of Ireland and England in the 19th century there was a negative externality in which a decision is taken which then effects others who aren’t party or even benefit from that decision, is taken. We, the consumer, and we the producer don’t pay for the CO2 emitted to produce most goods. Other examples:
- 1) Overfishing to the point at which that stock of fish is depleted (Cod on East Coast of Canada);
- 2) Deforestation to the point where the forest is spoiled (Easter Island…);
- 3) Commons grazing to the point where the land was destroyed…
Three solutions to the Tragedy of the Commons:
- Pricing the externality: putting a price on carbon. This has only worked well in theory. There is a price of $15 per ton but you would need $50 to $100 per ton to meet the Paris Accord target…
- Privatization of the public spheres: Public grazing lands in the UK to privatization however this created a wealth transfer to those who had the right to charge a fee.
- Supply management by the community to cooperate or regulate the scarce resources there in. Popularized by Elinor Ostrom (1933 – 2012) as economic governance. Get political consensus with shared management.
Carney goes on to draw the analogy that COVID is like climate change, it is a global problem. But climate change has no boundaries at all. Now, there are echoes of Bretton Woods style nationalist self-interest, huge debts and new institutions to tackle climate change:
- 1992 – Rio Earth Summit; a good start..
- 1997 – COP (conference of the parties) 1 and 3 the Kyoto Protocol: Kyoto was flawed, didn’t have teeth, a more serious call to action;
- 2009 – COP15 the Copenhagen Accord flawed, advanced countries pledge financial flows to reduce emission in poorer countries;
- 2015 – COP21 the Paris Accord, more stakeholders, financial firms, turning the agreement into legislative objectives as the UK did (already a low emitter, but limited recycle programs and lots of trash in the streets)
Our political systems don’t overcome these items. True leaders are stewards of the system. Leadership is about being custodians.
Current Financial Sector
Financial markets aren’t really pricing in a carbon price transaction. There is a low urgency effort that will lead to hot house earth, according to Carney. Most financial energy numbers don’t use a price test for their carbon stress test of capital investment They usually use a static price. Their prices are well below the medium to get to zero. BP has $100 per ton in its internals. Only 4% of banks and insurers think these climate risks are being priced accurately. Only 16% used a dynamic price.
Transition Pathway Initiative (TPI) is a consortium of thirteen + five asset owners/managers that are trying to better understand the transition to low-carbon impacts investment strategies. They also launched the FTSE TPI (Climate Transition Index) to articulate who is on the right side of history in Carney’s mind. Investors are shifting capital away from hydrocarbon investments incrementally suggesting that they are pricing in a transition. In other words, the markets are responding to something akin to inevitability about a low-carbon economy. But these are strategic bets, it doesn’t mean they are certain. Moody’s “recently identified sixteen sectors with $3.7 trillion in debt with the greatest exposure to transition risk” (297, Value(s)).
- § For the Goldman Sachs, capital expenditure in oil and gas is being hindered by this transition of asset manager value in oil and gas. Major projects have been mitigated by 60% over the last five years, big oil is moving to big energy.
- § Portfolio managers are engaging and pulling down their oil and gas investment incrementally. Also, in part due to the collapse of prices.
- § Transition bonds. In the fullness of time, climate change will incentivize brown companies to raise capital for green innovation.
- § Carney argues we cannot diversify away from climate change.
- § For Carney he argues that we need financial markets to build a virtuous cycle, better pricing for investors and smoother transition.
- § Sustainable financial systems are being built and the next chapter discusses this in more detail.
Analysis of Part 2 Chapter 11:
- After An Inconvenient Truth (2006), most of the emphasis was on awareness coupled with government intervention. Stephane Dion in Canada led the Liberal Party to massive defeat in 2009, a campaign built on a Green Shift. Of course a myriad of variables determined that election which is why blaming the Green Shift is a political statement that Canadians “don’t really care about climate change” which obviously varies as an opinion per Canadian. Such is our complex world… However, the lesson taken there is that the idea that government and by extension the civil service and regulation are primary means of driving punitive costs to polluters has always been deemed suspect by the smart-money folks. Obama was hands off for example because of the coalition he had backing him and the legislative strategy he needed to implement. And his biography, his mother knew that jobs were more important then environmental for poor Indonesians that she worked with for years. Now, financial institutions, which enable the allocation of scarce capital in as optimal a manner as possible, are being marshalled (by the general investment customer base) to more seriously address climate change + the general investment. Carney does not put the most devastating case forward because it gets harder and harder to know how that would play out in a complex eco-system such as our planet. But basically, after the sea levels rise 10 cm….10 cm or 20cm? So what? We were talking about 20 feet in 2006. So he has to combine weather with flooding to say Hurricane Sandy would have been 30% worse than it was from an insurance perspective. I feel so bad for insurance companies having increase their rates….not!
- Just how bad is this going over 2 degrees Celsius? It’s a prediction but it seems likely that Earth could actually cook-up with this accelerated feedback loop to the point where the Antarctic ice sheet completely melts and we’d all have to learn the breast-stroke…We might even develop gills…..okay, that’s a stretched. But I don’t know if people get how serious climate change consequences are….at the point in which the earth floods over significantly, we would start seriously looking at terra forming: ie. dropping the global ocean level by moving ocean bed onto non-arable land or deploying white reflective material at the poles to bounce more light back up or sprinkle dust into the sky as Bill Gates suggest…you know, things that sound crazy right now until we’re all learning to swim breast-stroke.
- Carney seems to have removed the claim that water levels would rise 20 feet…..I’m not sure why his book doesn’t mention this often quoted measurement?
- Some of my wacky ideas:
- In India, hardworking people are paid much less then a Canadian is paid. What does Carney have to say about inequality? Or post-materialism?
- Alberta conversation, needs a better answer around some support system, although historically when a sector struggles we do not necessarily intervene, but with climate change, government is intervening to accelerate the transition.
- What if the scientists are wrong? How much punishment to be allocated to their grandchildren for the lives hindered incorrectly? The climate physics is rock solid but its an important philosophical question. What commitment can Carney truly make?
- The problem is that in the long run you’re dead. The future is discounted to zero once dead. There is no proof that any other person exists, Berkley style / simulation.
- The Yeah But…a lot of the public still can’t connect this slow moving crisis to their lives, most people see this is a transitionary problem over decades and decades, predictions have been wrong and continue to be wrong, if saving humanity was so lucrative then why haven’t we paid to terra form the planet to prevent sea level rising: there is a funding problem: no one wants to pick up the cheque, where is the global fund to pay for these changes, you are asking people to suffer for an abstraction that isn’t flawlessly defined…
- Carney fails to address the command economy advocacy imbedded in prescribing with science what every person’s carbon footprint ought to be. Here, there is no invisible hand, the government of the world is most equipped with providing each citizen with their responsibility. The counter by Carney is of course, seatbeats wouldn’t have been imposed without political and regulatory force. It is the use of that force that can spur innovation in concentrated points throughout the economy.
- Putting a price on pollution is like putting a price on negative social media comments, the state is imposing a costs for doing something that perceived as bad but has some positive value (_ and making the receiver stronger.
- A general rule in life is to identify that if someone is claiming that there is a single cause to a problem, they are trying to convince you of something, sell you something. The language problem is best exemplified with climate change discussions. Human activity has caused climate change immediately sounds misleading because the “climate is always changing” and how does one know to what degree “human activity” is contributing to climate change. The counter argument that human activity is not causing climate change conflates causing with contributing. If someone says human activity is not contributing to climate change then that’s a sniff test for an intellectual lightweight as well. To prove that humanity is contributing to changes in the environment apply the counterfactual of the no-humans version of earth. In that version of earth, on this day, you would now be outside rather than in your home or office where you are reading this article. There would be no roads, etc etc. All those human tools and technology requires heat energy to produce. That heat energy generates CO2 amongst other gases. So if any one says that humans do not contribute to climate change, they necessarily have to deny that absent humans there would be roads, houses mysteriously populating this no-humans version of earth.
- Carney neglects to acknowledge that the research cited is subject to grants. If someone is obsessed with derivatives, then they will likely think derivatives are really important. They will have biases that warp their world to the point where their own brain notices patterns elsewhere that relate back to derivatives: this is confirmation bias. To not acknowledge that any human being, regardless of credentials is subject to the same confirmation bias should they study climate change, is intellectually controlling. I suspect Carney knows there should be some doubt but he may not trusts readers with this nuance.
- Carney doesn’t have an entrepreneurial spirit really, if he did he would understand that extinction is a necessarily part of evolution. If the cause is human habitat encroachments which by definition is going to continue to happen, then we should be sympathetic. However, extinction is not by definition bad. Does anyone miss Pan-American Airlines? Does anyone miss the Dodo Bird? Of course, we all miss these things or would like to see them in the wild but the cause of their disappearance cannot be 100% human activity alone, but definition we live in a mult-variate world.
- Measuring the acidity of the oceans: a 30% increase in acidity is significant if the acidity of the acid content is 1 -> 1.3 part per 100 but not if it is 1 0> 1.3 part per 1M….
- Carney does not mention that there is an increase in human habitats such that extreme weather events like flash flooding are on the rise…in geographies where the events are newsworthy. An analogy might be that coverage of gun violence is only newsworthy when a random citizen is the victim rather than a gang related victim.
- Carney does not address terra forming solutions accept for the socially accepted one: carbon capture which involves sucking carbon out of the air or releasing CO2 into the ground.
Citations Worth Noting for Part 2: Chapter 11:
- ‘What is Ocean Acidification’, PMEL Carbon Program.
- IPCC, Special Report: Global Warming of 1.5 degree Celsius (2018).
- Saul Griffith, Rewiring America, e-book (2020).
- Stockholm Environment Institute, ‘Framing stranded asset risks in an age of disruption’ (March 2018).
- Norman Myers, ‘Environmental Refugees: An Emergent Security Issue’, Oxford University (May 2005).
- Sandra Batten, ‘Climate Change and the Macro-Economy – A Critical Review’, Bank of England Staff Working Paper No. 706 (January 2018).
- IMF, ‘The Economics of Climate’ (December 2019).
- Ryan Avent, ‘Greed is good isn’t it?’, American Spirit, 18 April 2020. | <urn:uuid:1e968f2b-7480-45bc-8d03-b8c480a543b5> | CC-MAIN-2022-33 | https://professornerdster.com/tag/sir-mark-moody-stuart/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00097.warc.gz | en | 0.939786 | 5,038 | 3.359375 | 3 |
Artificial Intelligence (AI) is the new buzzword, and we are constantly hearing or reading about Artificial Intelligence in the news, like the development of self-driving cars or driverless cars. Anyone interacting with a chatbot on any website is an AI tool. But did you ever wonder how exactly artificial intelligence in talent acquisition is used now-a-days?
What is Talent Acquisition?
Before deep-diving into how AI plays a major role in the recruitment industry, let’s learn about talent acquisition.
Gartner defines Talent Acquisition is the process of identifying organizational staffing needs, recruiting qualified candidates, and selecting the candidates best suited for the available positions.
The stakeholders include recruiters, HR managers, hiring managers, and top-level executives. The team’s goal is to identify, acquire, assess, and hire candidates to fill open positions within the organization. For the majority of organizations, the talent acquisition team will be part of the HR team. In a few larger organizations, talent acquisition is a different team that collaborates with the HR team.
Artificial Intelligence in Talent Acquisition:
How will AI assist the Talent acquisition to find highly skilled candidates?
In today’s competitive world, it is challenging task to hire right talent in a shorter time. In the future, finding suitable candidates will depend on how the recruiters and talent acquisition teams will automate their workflow.
According to a study, 96% of recruiters view that AI can enhance talent acquisition and retention. However, when it comes to implementing AI in recruitment, as of 2018, merely 13% of the HR team had implemented it. Still, AI in recruitment has tremendous potential as more than 55% of recruiters plan to implement it by 2023.
In this article, we will learn how AI in talent acquisition will benefit the firms in the longer run and how it will increase the recruiters and talent acquisition teams’ productivity.
Below we have listed some of the major features of AI in Talent Acquisition
1. Candidate Screening:
Candidate screening is one of the challenging tasks for recruiters. For a job opening, recruiters receive more than 250 resumes, and out of them 75% are unqualified. A recruiter spends 23 hours screening resumes for one open position; more than 60% of recruiter’s time is spent checking for the correct resume.
According to a survey, 52% of talent acquisition leaders mentioned that the most challenging part of recruitment is screening candidates from a large pool of candidates.
AI-powered tools will be a lifesaver for the recruiters. AI tools can find the right candidate either in the job boards or in the in-house database. Based on the keywords, AI tools will search in the database and list the qualified candidates. AI tools can be programmed to be legally compliant and avoid bias while screening the candidates based on their demographics (age, ethnicity, physical challenges, language, etc.).
One successful example of AI candidate screening is when IBM used AI screening of candidates back in 2013. According to IBM, it was able to relocate 80% of its staff from a closed business unit to other divisions.
2. Candidate Sourcing:
Sourcing Candidates from different job boards and social media is a time-consuming process, and recruiters need to put a lot of effort into finding the right channel to find the right candidate for an open position. In today’s competitive world, organizations do need to find and acquire the candidate in the shortest amount of time, and AI is the perfect tool.
Using AI, recruiters will find candidates from millions of resumes on different job boards like Dice.com, Monster, social media websites such as LinkedIn and GitHub. Furthermore, an AI tool lists the top candidates based on the recruiter’s search criteria and ranks them in order, which increases the recruiter’s efficiency in sourcing candidates.
Another significant advantage of the AI tool is that it can parse job requirements from the job description and include them while searching the database. AI considers various factors such as candidate education, experience, projects, and skillset matching the job requirements to find the right candidates.
3. Finding Candidates from Niche Websites:
Recruiters are aware that some of the positions related to data science and machine learning are hard to fill. Candidates for such roles are recruited through reference or internal hiring. But passive candidates are active in some niche websites such as GitHub, but it is next to impossible to hire from such websites.
GitHub is one of the most prominent places in the IT industry where candidates with different backgrounds join the platform for learning, sharing their views, and developing software.
However, by implementing the AI tool, such candidates can be sourced without using any Boolean tools. The AI tools will deep-dive into such websites and use different parameters to find candidates based on their skill set and languages they are proficient.
4. Job Posting and Targeting:
In the past, recruiters need to log in to the dashboard and add the jobs manually, which takes quite a bit of time. But Artificial Intelligence is revolutionizing job posting by running targeted ads. AI programming is done so that job is posted and targeted to the specific candidates who fit that role.
Targeted ads are possible based on the potential candidate skills (social media websites like LinkedIn), search history (cookies), and demographic profile. The candidate’s search history will reveal the job the candidate is interested in and might accept in the future.
Role of Machine Learning and Artificial Intelligence in Talent Acquisition:
Before deep diving into the topic, we need to understand what Machine Learning is and what roles it plays along with AI in the recruitment industry.
John McCarthy, widely acclaimed as one of the godfathers of Artificial Intelligence (AI), defined AI as the science of creating intelligent machines that can perform ‘human-like’ cognitive functions.
Machine learning is a subset of AI, and it offers smart devices that can learn from a particular environment. Machine learning, along with AI, redefines how organizations work, and recruitment is one such domain. AI removes bias and increases the productivity of the organization. In recent years, major firms such as Hilton, Humana, AT&T, Procter & Gamble, and CapitalOne adopted AI and ML to source candidates, schedule interviews, perform an initial screening of candidates, and other processes related to recruitment.
Here we will be discussing, what effect AI and Machine Learning will have on talent acquisition.
1. Candidate Experience
When recruitment firms compete against each other for similar roles, candidate experience is the difference between them. AI with Machine Learning will improve the candidate experience and add value. Using the AI, the interaction will be smooth, and all the queries raised by the candidate will be resolved.
For example, recruiters will not be available 24*7. Using a chatbot or live chat will provide essential information to the candidate’s role, and recruiters might take off from there and add more value to the conversation with the candidate.
AI and Machine Learning will attract and engage candidates during the early stages of the interview process and increase the overall candidate experience. AI and Machine Learning will boost the candidate’s confidence to stick with the recruiters, and it can also provide new roles that were previously not considered by the candidate.
Global cosmetic firm L’Oréal adopted AI in its recruitment process to improve its candidate experience. The AI and ML removed non-value-added tasks and relevant assessments given to the job seekers.
2. Predictive Hiring
According to the 2017 Glassdoor Report, by 2020, one in three millennials plan to quit their job. Of course, with the current COVID-19 situation, the number of millennials leaving the job will fall less than stated in the report. But when a similar situation arises when people are planning to leave the firm, how does the organization get to know and prepare for the future.
AI with Machine learning can improve the candidate and employee experience by using predictive analytics. AI platforms can predict whether a new candidate can fit into the company’s culture or not. Better prediction leads to more chances of the candidate working in the same organization for a more extended period.
3. Speed up Candidate Sourcing
As mentioned earlier in this article, candidate sourcing is one of the most formidable recruiters’ tasks. Organizations are looking for ways to speed up the sourcing of candidates, and implementing AI and ML can be one of the fastest way around. Recruiters can move away from tedious tasks such as candidate sourcing by adopting to chatbots.
Chatbots such as Helena perform various recruitment tasks such as screening candidates, collecting required information from the candidates, scheduling interviews for shortlisted candidates and even conducting an initial round of interviews before the organization’s actual interview process is started.
Artificial Intelligence Tools for Talent Acquisition Process:
To assist the Talent Acquisition teams, many start-ups and established firms have developed AI tools. The Talent Acquisition teams will benefit from using these tools to decrease the cost and time to hire the candidate and increase their productivity.
In this article, we have listed essential AI tools that can be used by the Talent Acquisition teams and stakeholders in the recruitment process.
1. AI Tools for Candidate Sourcing
Sourcing of right candidate from the right channel is one of the most crucial steps in recruitment sourcing. The talent acquisition teams have different channels to source candidates, but using AI tools will quickly help get their desired results.
Entelo is an AI tool developed to source candidates. The search engine developed by Entelo allows recruiters to find prospective candidates with requires skills. Entelo gathers information on the candidate from various social media websites. Furthermore, recruiters can identify candidates based on different criteria, such as race, gender, and veteran status.
The tools use predictive analytics and Natural Language Processing, and it is useful to recruit passive candidates. Some additional features include job posting, allowing candidates in the software or company’s website. Major brands such as Lyft, PayPal, and Target uses Entelo to hire highly skilled candidates.
Hiretual is an AI sourcing tool to find the best available talent in the market. The talent acquisition team can source candidates from more than 30 different platforms. Recruiters can find any candidate’s email id and phone number using the Hiretual tool. One of the tool’s advanced feature is converting any job title or job description into a smart Boolean string. The AI powered tool is used by well know companies such as IBM and Intel.
Arya is a recruitment automation platform designed to empower recruiters with AI. The Arya tool uses AI and behavioural pattern recognition to analyze 130 million+ social profiles to provide the right candidates and predict move probability. Arya is currently used by firms such as Kimco Services, Headway workforce solutions, Personify, and others.
2. AI Tools for Candidate Screening
Once the recruiters have identified potential candidates from a large pool of databases, the next step is to screen candidates and shortlist them for the first round of interview.
As mentioned earlier, screening candidates is a tedious task; the talent acquisition team can use AI tools for resume parsing. Some of the AI tools that can be used for candidate screening are listed below.
Pomato is an AI tool used for IT and technical recruiting. Pomato’s Resume-Analyzer and Job-Matching engine can deep dive and find the right candidates in a shorter time. According to the company, it can perform over 200,000 computations and provides a visualization of the candidate to determine, do they fit in that particular role or not. From there, talent acquisition can develop custom interview questions based on the job description.
CVViZ is an Artificial Intelligence powered, cloud-based online recruitment software solution for talent acquisition teams. Machine learning and NLP based algorithm help recruiters to find the most suitable candidates in the quickest time. Machine learning goes beyond the simple keyword-based search as it screens resumes contextually, learning from the recruiter’s hiring process to identify the best candidates. The AI-driven tools also knows the kind of candidates an organization engages in its interview process. It understands the type of candidates an organization hires. Based on such and multiple other parameters, it matches the right candidates with the right opportunities. CVVIZ’s significant clients include Alstom, Societe Generale, Headsnminds, and Iglobus.
3. AI Tools for Candidate Assessment
The recruitment teams need to assess whether the candidate applied for a particular role is suitable or not. Fortunately, the talent acquisition team need not spend much time on the candidate profile to decide he/she is a good fit or not.
Instead, they can opt for AI tools that will provide instant results and save time for recruiters, we have analyzed AI-driven candidate assessment tools and listed them below.
Mya is a chatbot built using Natural Processing and Machine Learning. Mya offers a conversational AI platform tool that enables computers to simulate real human-like conversations. The engagement with candidate increases as Mya can create emotions and trust among the candidates. While engaging with the candidate, Mya can also create candidate screenings and help the recruiters understand whether they are fit for the position.
Mya offers its services to hundreds of top enterprises and agencies, including 52 of the Fortune 500 and six of the eight largest global staffing firms.
Harver is an AI driven pre-employment assessment and predictive hiring platform. Harver’s proprietary AI algorithm uses IO Psychology and Data Science to predict a candidate’s chance of success in the organization. By measuring the candidates’ aptitude, culture fit, soft skills, and more, the talent acquisition teams will have the necessary data to make hiring decisions.
Harver offers customized assessment modules on candidate cognitive aptitude, culture fit, personality, multitasking capabilities. The candidate is asked to take a situational judgment test with potential workplace scenarios, where the candidate needs to choose a response or rank them accordingly.
Talview is an AI-powered candidate assessment platform to remotely screen, interview, and test top talent. Talview offers video interviewing, remote proctoring, and advanced assessment solutions, leveraging NLP, Machine Learning, Computer Vision, and Video Analytics. Talview tool enables anytime and anywhere interviewing using its video interviewing platform.
The tool also integrates the Applicant Tracking System (ATS) and Learning Management System to help the recruiters and talent acquisition teams to automate their routine tasks and find the right candidates for their organization.
Talview AI-powered tools had assisted top organizations such as Amazon, Deloitte, Swiss Re, Cognizant, and Sephora in finding top-notch talent.
Artificial Intelligence Reshaping Talent Acquisition Process:
Artificial Intelligence is bringing radical changes in the talent acquisition process. The AI program collects and analyzes a large set of data (candidate’s information) and lets us know what should be done with that data (suggest the talent acquisition team do they need to hire the candidate or not).
In the Talent acquisition space, AI is already in use as an assessment tool, but that’s not all; there are many other areas where AI can do. Below we have listed some of the major areas where AI can improve in the Talent Acquisition process.
1. Chatbots – Answer for Basic Queries
Chatbots are already used for recruitment purposes by large organizations such as Sutherland using Bot called Tasha, which will answer candidates’ queries. But in the future, more organizations will start using chatbots to resolve the candidate’s questions.
Furthermore, chatbots can be used as a tool to guide the candidate in the recruitment funnel. When the candidate arrives on the organization’s career webpage, bots will have a conversation with the candidate and drive them to the funnel. Bots can spot the candidate’s skills in their resume and give a clear picture to the recruiter on which role the candidate fits.
2. Streamline Recruitment – Finding Highly Qualified Talent
Let us consider that an organization hires 200 employees per year, and on average, 100 applicants apply for each of these positions. The talent acquisition team needs to go through 20,000 resumes per year, which is a daunting task. The talent acquisition team must process these applications, review the candidates, shortlist the best candidate, and schedule the interview which is time-consuming and expensive.
One of the most severe difficulties for the organization is that out of the 100 applicants, the top 10% will be available in the market for less than two weeks. Using the manual process will make it difficult for the recruiters to identify the top talent and hire them. One way to overcome this challenge is to implement AI that will cut down the manual tasks performed by recruiters.
AI tools can find candidates, analyze the resume, and gather information on the candidates from multiple social media websites. AI tools can mine candidates based on skills, years of experience, previous job titles, and based on the job description; the AI tool will decide if the candidate right fits for the role or not?
The talent acquisition team can do what they do best, engage with the best-fit candidates, and hire for the open-position and build a talent database.
3. Remove Bias in Hiring – Selecting Candidates based on their skillset
Many candidates would have come across unconscious bias while they come across an interview. Unconscious bias happens when a hiring manager or recruiters form an opinion about candidates based solely on first impressions. The unconscious bias does happen ever before the face-to-face interview.
Recruiters can solely judge the candidate’s resume picture, name, age, gender, or even based on their hometown could influence their opinion. So how does an organization remove the bias while hiring a candidate? Well, it can be done by implementing an AI tool.
AI tool is a bias-free platform and doesn’t choose the candidate based on their demographics, style of the resume, or reject the resume if it founds any typo errors. Rather the AI tool will judge candidates based on their skills, previous experience, salary, and other relevant parameters.
4. Facial and Speech Recognition Software
In a typical face to face interview, the interviewer can’t judge the candidate facial expression (the candidate is positive or nervous), the correct choice of words, and tone. However, when it comes to AI, the Natural Language Processing and Facial Recognition software can judge facial expression, choice of words, gestures, tone, and capture the interview transcript for further evaluation.
Apart from these, AI can analyze massive chunks of data from different video interviews and find patterns that otherwise were not visible by the recruiters or hiring managers. Companies such as HireVue offer video interview software that judge the candidate’s body language, tone of voice, stress level, and more.
5. Identity Best Fit Candidate – Hire only Top Talent
AI tools can identify the best fit candidate for the organization with better accuracy. The AI tools are programmed to tweak the job descriptions and add new keywords to search the profiles, resulting in additional profiles not founded by the recruiters.
For instance, if a recruiter is searching for a java developer, they will use keywords like java, java developer, etc. However, some software engineers or software developers might also have experience in java, which the recruiters omitted. In such cases, AI will list profiles where the candidate had java experience.
AI can study the shortlisted candidates and find what variables do they have. For example, Java developer worked in startups; java developer worked in any FAAMG companies (Facebook, Amazon, Apple, Microsoft, and Google).
6. To Find Passive Candidates – Searching Candidates for Tomorrow
Recruiters can find the candidates for the open position on various job boards or social media websites. On the other hand, to make sure you build a talent database for the future and have the best talent for the future, AI can perform such tasks.
AI in the future can help organizations to find the best-fit candidates whom they can approach when there is an open position. The AI can source candidates from websites such as LinkedIn and Xing and analyze different pointers such as how to job the candidate worked in a position when he/she got promoted and can also gather additional information on how the company is performing and what is the current turnover rate. Based on such parameters, an organization can list such candidates who might join in the future when a suitable position is open for them.
Will Article Intelligence be the Future of Recruitment?
The current COVID-19 pandemic had forced us to adopt new routines such as working from home for months or even forever. The COVID-19 pandemic had forced organizations to adopt AI faster than ever. According to a McKinsey survey in June 2020, the pandemic has accelerated digital technologies’ adoption for several years across different industries.
In the post-pandemic world, once the economy picks up and the organization starts various pools of candidates in large number, screening, and sourcing candidates in the old-fashioned method by the recruiters will no longer work. AI will be adopted on a large scale by all organizations of different sizes to overcome these challenges and make the recruiters and talent acquisition team concentrate on their core job.
The million-dollar question across the industry is, does AI will be the future of the recruitment? With the increase in AI adoption during the pandemic, it is clear that AI will stay in the recruitment industry and will play a significant role in hiring the candidates.
When we step into the future, it is more likely that only one AI tool will perform all the tasks such as screening candidates of resumes without any bias, sourcing candidates, have a conversation with the candidates when they enter the website, identify a job for the prospective candidates, and rank the best-fit candidates. This might be the future that many recruiters and HR managers imagined, making their lives easier and simple.
But somewhere down the line, the AI tool is not the virtual assistant for the recruiters, and it even might replace them. We are already reading about how AI might end up taking our jobs, and the possibility is real, be it the driverless cars, driverless trains, or robot miners. In the recruitment industry, AI can learn to perform various recruiters’ tasks in a faster way without any errors.
If AI runs the recruitment process in the future, how does the organization gets benefited? According to Chase Wilson, Vice-president of Product Innovation for Monster, Organizations get benefited by developing their brands and the recruitment process’s speed. If all the recruitment firms are using the same set of AI tools, then the firms’ challenges will be different from the crowd.
In a nutshell, AI will be part of the recruitment companies in the future, and more organizations will understand the benefits of AI and embrace AI. Future without AI in recruitment is unimaginable.
Artificial Intelligence is playing a vital role in the recruitment industry and redesigning candidate hiring. Recruiters are no longer required to spend hours together to check all the resumes in their inbox and find the right candidate. Recruiters need not search for matching resume in the job boards using the Boolean strings, or the organizations must hire more recruiters to make sure that they answer all the queries raised by the candidate.
AI tools can perform tasks such as sourcing, screening, chatting, and even shortlist best-fit candidates. In other words, all the manual tasks performed by recruiters can be given to AI, and recruiters can do what they are best at “engage with candidates and hire them.”
Many AI tools are available in the market that performs some tasks that the recruiter handles and implanting such tools will increase the recruiters’ productivity. On the other hand, organizations will be benefited as these AI tools will decrease the cost and time to hire the new candidate and build a database of passive candidates for future hiring.
Before the COVID-19 pandemic hit us, no one predicted when the AI tools would be implemented. Pandemic has accelerated AI implementation by several niches, and the recruitment industry is one of the early adopters. Interviews are no longer conducted face to face, and some of the organizations are using face and speech recognization software for the interviews.
So, in the future, we might see a single AI tool perform all the recruitment tasks and even conduct the first round of interviews. The job of recruiters will become more comfortable, and all they need to have is conversations with the candidates and hire highly skilled candidates. | <urn:uuid:39605b29-5ffb-4055-b7ab-ce5773cd7cd2> | CC-MAIN-2022-33 | https://content.wisestep.com/artificial-intelligence-talent-acquisition/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573163.7/warc/CC-MAIN-20220818033705-20220818063705-00097.warc.gz | en | 0.934239 | 5,123 | 2.5625 | 3 |
A population is not just a collection of separate, unrelated individuals. It is a society made up of many interlinked communities that are sustained and regulated by social relationships and social institutions. Three institutions are central to Indian society: caste, tribe and family.
Caste and the Caste System
Caste is an ancient social institution that has been part of Indian history and culture for thousands of years, and it remains a central part of Indian society. However, the forms of the caste system have changed: the caste system of the past was very different from the one prevalent today.
Caste in the Past
Caste as an institution is uniquely associated with the Indian sub-continent. Although it is a central aspect of Hindu society, it has spread to the major non-Hindu communities as well, especially Muslims, Christians and Sikhs.
The term caste is derived from the Portuguese word casta, meaning pure breed. It refers to a broad institutional arrangement that is referred to in Indian languages by two distinct terms, varna and jati.
Varna: The word varna literally means colour, but it refers to the fourfold division of society into brahmana, kshatriya, vaishya and shudra. This term, however, excludes the panchamas, or fifth category, which comprises outcastes, foreigners, slaves, conquered people and others.
Jati: The word jati generally refers to the species or kinds of all things. In Indian languages, it is the term that refers to the institution of caste.
The relationship between the terms varna and jati has been a subject of much discussion among scholars. For many, varna is a broad, all-India aggregative classification, while jati is a regional or local sub-classification involving a complex system of castes and sub-castes that varies from region to region.
Features of Caste
The four-varna classification is said to be roughly three thousand years old, though opinions on this differ. The caste system was characterised by different features in different periods. For example, the caste system of the late Vedic period (900-500 BC) was a varna system consisting of only four major divisions.
These divisions were not very elaborate or rigid and they were not dependent on birth. Thus, movement across varna was common. It is only after the Vedic period that the rigidity within the system became prevalent.
Keeping the above statement in mind the most commonly cited features of caste system are as follows:
(i) Caste is determined by birth : a child is born into the caste of its parents, it is not a matter of choice. Thus, one can never leave, change or choose to join it. A person can, however, be expelled from it.
(ii) Membership in a caste involves strict marriage rules : Caste groups are endogamous i.e. marriage is restricted to the members of the group.
(iii) Caste membership involves rules about food and food sharing : The kinds of food that one can eat and the people with whom food can be shared is prescribed.
(iv) Caste involves a system consisting of many castes arranged in a hierarchy of rank and status : A person always has a caste and a caste also has a place in the hierarchy. The place of a caste in the hierarchy can differ from region to region.
(v) There is a segmental organisation in caste system : Caste involves sub-divisions within themselves as castes have sub-caste and sometimes sub-castes may also have sub-sub-castes.
(vi) Caste were traditionally linked to occupation : A person born into a caste would have to practice prescribed occupation. Thus, occupation became hereditary as a result of which occupation could only be pursued by one caste. Members of other caste could not enter the occupation.
The features given above are prescribed in the ancient scriptural texts and were not always practiced. Thus, one cannot understand the extent to which they tell about the practical aspect of caste. However, one thing is very evident that they are all restrictions and prohibitions. This fact states that caste was an unequal institution wherein one caste was greatly benefitted while another suffered without any hope of change in circumstances.
Principles of Caste System
The caste system can be understood as the combination of two sets of principles. These two are:
(i) Based on Difference and Separation : Each case is different and is strictly separated from every other caste. Many scriptural rules prevent the mixing of castes. These rules include marriage, food sharing social interaction, occupation etc.
(ii) Based on Wholism and Hierarchy : The different and separated castes do not have an individual existence. In other words, they do not exist in isolation but only exist in the relation to the whole society that is comprised of all other castes. Further, the caste-based society is not based on equality. It is essentially hierarchical wherein each individual caste occupies a distinct place in the ordered rank.
Hierarchy of Castes
The hierarchical order of caste is based on the distinction between purity and pollution. The word ‘purity’ connotes division between something believed to be closer to the sacred and the word ‘pollution’ represents something which is distant from or opposed to the sacred. Castes that are considered to be ritually pure have high status, while those considered less pure have low status.
Apart form purity, material power, economic power or military power is also associated with social status. Therefore, those in power have higher status and those defeated have lower status.
Castes in the past were not only unequal to reach other in ritual terms, but also complementary and non-competing. Thus, each caste has his own place in the system which cannot be taken by any other castes. Further, as castes are associated with occupation. The caste system often functions as the social division of labour wherein there is no movement or mobility.
Colonialism and Caste
Colonial period or the period before Indian independence strongly shaped the future of caste system and the formation of caste as a social institution. According to many scholars what we know today as caste is merely a product of colonialism than of the ancient Indian tradition.
Not all the changes that occurred in the caste system within the colonial period were deliberate or intended. The British administration initially began to understand the complexities of caste system in an effort to learn a way to efficiently govern the country. This learning included methodical and intensive surveys as well as reports on the customs and manner of the tribes and castes of the country.
The most important of these efforts to collect information on caste was through census which began in 1860s to become a regular ten-yearly exercise by 1881. The 1901 Census under Herbert Risley is central as it sought the data on social hierarchy prevalent in regions. This effort had a huge impact on social perception of casts as many castes claimed higher position in the social scale while offering historical and scriptural evidence.
Scholars feel that his kind of direct attempt to count caste and to officially record caste status changed the institution. As caste began to be counted and recorded, the system became more rigid and less fluid.
Intervention by the colonial states had a huge impact on the institution of caste by the following means:
(a) The land revenue settlements and related arrangements as well as laws gave legal recognition to the customary caste based rights of the upper castes making them land owners in a modern sense.
(b) Large scale irrigation schemes like that of Punjab as an effort to settle populations there, also had caste dimensions.
(c) The administration interest in the welfare of downtrodden class, also known as depressed class, led to the Government of India Act of 1935. This act gave legal recognition to the lists of schedules of castes and tribes (Scheduled Castes and Scheduled Tribes) and made them legible for special treatment. The most discriminated ‘untouchables’ were included in Scheduled Castes.
Thus, colonialism brought about major changes in the institution of caste.
Caste in the Present
Indian independence in 1947 bought about a partial break in the institution of caste system prevalent the colonial past. Caste consideration had inevitably played a role in the mass mobilisation of the national movements. The efforts to organise the depressed class, specially the untouchables began before the nationalist movement in the late 19th century.
Initiatives were taken by the upper caste progressive reformers and by the member of lower castes such as Mahatma Jyotiba Phula, Baba Saheb Amdebkar in the Western India. Ayyankali, Sri Narayana Guru, Iyotheedass and EV Ramaswamy Naickar in the South.
Both Mahatma Gandhi and BR Ambedkar began organising protests against untouchability from the 1920s. Infact, untouchability became a central agenda of Congress. By the time of the Indian independence, there was an agreement to abolish caste distinctions. The dominant view of the nationalist movements was that caste was a social evil devised to divide India.
The nationalist leaders, especially Mahatma Gandhi worked hard for the upliftment of the lower castes. He advocated the abolition of untouchability and other caste restrictions and at the same time, reassured the upper castes that their interests would be looked after.
Problems of Caste in Post Independent India
The post-Independence Indian state reflected these contradictions about caste system and upper caste’s interests. On the one hand, the state was committed to the abolition of caste and mentioned it into the Constitution. On the other, it was unable as well as unwilling to bring fundamental reforms which would remove case inequality.
The state assumed that completely ignoring the caste would undermine the caste based priviledges, automatically abolishing it.
For example, in the case of government job all individual compete on ‘equal’ terms irrespective of their castes. The only exception to this was in the from of reservations for the Scheduled Caste and Scheduled Tribes. Thus, state did not put much efforts towards caste inequality prevalent in the society.
The development activity of the state and the growth of private industry also affected the caste hierarchy through the speedy and intense economic changes. Modern industry created all kinds of new jobs without any caste supremacy. Urbanisation and conditions of collective living in the cities made difficult for caste system to survive.
At a different level, the liberal ideas of individualism and meritocracy (merit based credit) attracted modern educated Indian, because of which they began abandoning extreme caste practices. But on the other hand, the recruitment in industries, whether in the textile mills or elsewhere, continued to be organised along the lines of caste and kinship.
The middle-men to recruited labour from their own caste and region. As a result, even industries were often dominated by specific castes. Therefore, prejudice against the untouchables remained quite strong and was not absent from the city as well.
The resilience of caste proved most strong in cultural and domestic front. This is clearly evident in marriages and politics.
Endogamy : or the practice of marrying within the caste, remained largely unaffected with modernisation and change. Even today, most marriages are within caste boundaries. While some flexibility is allowed, the border of castes of similar socio-economic status are still very rigid. For example, inter-caste marriages between upper caste are still prevalent but that between upper castes and SCs/STs are rare.
Politics : Democratic politics in the independent India is still deeply conditioned by caste. Since the 1980s caste based political parties have emerged and caste became decisive in winning elections.
Many sociologists have coined new concepts to understand such changes. The most common amongst them were given by MN Srinivas. They are as follows:
It refers to a process whereby members of a (usually middle or lower) caste attempts to improve their own social status by adopting the ritual, domestic and social practices of higher status. In simple words, it is copying of the model of upper caste by lower or middle caste.
Sanskritisation practices included adopting vegetarianism, wearing of sacred thread, performances of specific prayers and religious ceremonies etc.
Sanskritisation usually accompanies the rise in the economic status of the caste attempting it, though it may also occur independently. Many suggestions and modification believed that such claim is defiant rather than just a mere imitation.
The term ‘dominant caste’ is used for those cases which had a huge population and were granted landrights by the partial land reforms effected after the independence. The land reforms took away the claiming rights of the upper castes or the absentee landlords who lived mostly in towns and cities, and had no role to play in the agricultural economy other than taking rent.
With the reforms , the lands were claimed by the next layer of caste, who were involved in the management of the land. These people depended on the labour of lower castes especially untouchable for tilling and tending the land. With land, these people gained economic as well as political power thus becoming the dominant caste in the countryside. Examples of such dominant castes are:
- Yadavs of Bihar and Uttar Pradesh
- Vokkaligas of Karnataka
- Reddys and Khammas of Andhra Pradesh
- Marathas of Maharashtra
- Jats of Punjab, Haryana and Western Uttar Pradesh
- Patidars of Gujarat
One of the most significant changes in the caste system is that it was becoming invisible for the upper caste, upper middle and the upper class.
For them, the concept of castes seems to decline. Paradoxically, their caste status ensures that these groups have the economic and educational resources to take advantage of the opportunities offered by rapid development. They were able to take advantage of the following:
The subsided education especially professional education in science, technology, medicine and management.
Expansion of state sector jobs in early decades for independence.
Their superiority ensured that they did not face any serious competition. As this privilege was passed to their future generations, they came to believe that their advancement was not related to caste. The matter is further complicated by the fact that such a privilege was not enjoyed by every upper caste person.
On the other hand, for the SCs and STs, caste has been more visible eclipsing other dimensions of their identity. Because of their lack of education, social capital as well as the fact they they must face competition, they cannot lose their caste identity which is the only thing that the world recognises.
The policies of reservation and other forms of protective discrimination instituted by the state in response to political pressure serve as their lifelines. Such a contradiction is central to the institution of caste prevalent in the present India.
Tribe is a modern term used for communities that are very old, whose people are among the oldest inhabitants of the sub-continent. These are the communities that did not practice a religion with a written text, that did not have a state or political form of normal kind; did not have sharp class or caste divisions. The term ‘tribe’ administrative convenience.
Classification of Tribal Societies
Tribes have been classified according to their ‘permanent’ and ‘acquired’ traits. Permanent traits include region, language, physical characteristics and ecological habitat.
The tribal population of India is widely spread with concentration being visible in certain regions. About 85% of the tribal population lives in ‘middle’ India, stretching from Gujarat and Rajasthan in the West to West Bengal and Odisha in the East, Madhya Pradesh, Jharkhand, Chhattisgarh and some part of Maharashtra and Andhra Pradesh. Of the remaining 15% over 11% is in the North-Eastern states and 3% in the rest of India.
The North-Eastern states have the highest concentration of tribal’s ranging more than 60% going up to 95% in states like Arunachal Pradesh, Meghalaya, Mizoram and country the tribal population is less than 12% except Odisha and Madhya Pradesh.
Tribal categorisation takes place into various divisions:
Language : On the basis of language, tribes are categorised into four categories. Indo-Aryan, Dravidian, Austric and Tibeto-Burman. The Indo-Aryan accounts for 1% of the population and the Dravidian accounts for 3%. The other two languages are primary spoken by tribals having 80% of the ceoncentration.
Physical Racial : Concerning physical racial terms, tribes are classified under the Negrito, Australoid, Mongoloid, Dravidian and Aryan categories. The last two are shared by the majority of the Indian population.
On Size : Tribes sizes vary in great number with some having 7 million people to some Andamanese islanders with only 100 people. The biggest tribes one the Gonds, Bhils, Santhals, Oraons, Minas, Bodos and Mundas.
The total population of tribes amounts to about 8.2% of the Indian population or 84 million people according to 2001 Census which has grown to 8.6% of 104 million tribal population according to 2011 Census Report.
Acquired traits are based on two criteria i.e. mode of livelihood and extent of incorporation into Hindu society or a combination of the two.
One the Basis of Livelihood : On the mode of livelihood, tribes can be categorised into fishermen, food gatherers and hunters, shifting cultivators, peasants and plantation and industrial workers.
Extent of Incorporation into the Hindu Society : The dominant classification of tribes as used in academic sociology as well as public and political affair is the extent of assimilation in Hindu mainstream. This assimilation can further been seen from the point of view of tribes and from the Hindu mainstream.
- From the tribes’ point of view, the attitude of the people towards the Hindu mainstream is important with the differentiation between tribes that are positively inclined towards Hinduism and those who oppose it.
- From the mainstream point of view, tribes may be viewed according to the status in the Hindu society, wherein high status is given to some, and low status accorded to most.
The argument for a tribe-caste distinction was founded on an assumed cultural difference between Hindu castes, with their beliefs in purity and pollution and hierarchical integration and the tribals with their equal and kinship based modes of organisation. The debate posed whether tribal was one end of the caste based society or a different kind of community.
Some of the scholars view who were the part of this debate mentioned that:
(i) Tribes should be seen as one of the whole society with caste-based (Hindu) peasant society which is just less stratified and more community based. However, some opponents argued that tribes were wholly different from caste because they had not notion of purity and pollution which is central to the caste system.
(ii) Some argued that the tribe-peasantry distinction did not hold in terms of any of the commonly advanced criteria: size, isolation, religion, and means of livelihood. Some tribes such as Santhal, Gonds and Bhils are very large with extensive territory. Some other tribes such as Munda, Hos and other are pursuing settled agriculture while hunting gathering tribes like Birhors of Bihar employ special households to make basket, etc.
(iii) Caste-tribe differences was accomplished by large body of literature through tribes were absorbed into Hindu society with Sanskritisation, acceptance into Shudra fold following conquest by caste Hindus, through acculturation, etc. The Hindu society history is often seen as an absorption of different tribal groups into Hindu society at varying level of hierarchy as their land was colonised and forests cut down. Such processes are either seen as natural or exploitative.
(iv) Most common arguments of scholars are that there is no coherent basis for treating tribes as pristine (pure or original) or societies uncontaminated by civilisation. Rather, tribes should be seen as secondary phenomena arising out of exploitative and colonialist contact between pre-existing states and non state groups. This contact creates the ideology of tribalism wherein tribals defined themselves as tribal to distinguish themselves from others.
(v) The belief that tribes are like stone age hunting and gathering societies have remained untouched is still common, even though it is not true. Adivasis were initially not oppressed. There were several Gond kingdoms in Central India such as Garha Mandla or Chanda. In addition, many Rajput kingdoms of Central and Western India emerged through a process of stratification. Adivasis exercised dominance over plains through their capacity to raid and through their services as local militias. They also occupied special trade niche, trading forest produce, salt and elephants.
The capitalist economy’s drive to exploit forest resources and minerals as well as to recruit cheap labour has brought tribal societies in contact with mainstream society.
Mainstream Attitudes Towards Tribes
Colonialism had bought about irreversible changes in the world including the tribal communities.
- On the political and economic front, tribal societies faced the incursion of money lenders.
- Tribal societies were losing their land their access to forests to the non-tribal immigrant settlers because of the government policies and mining operations. The forest land resources of the tribal’s became the main source of income for the colonial government.
The various rebellions in tribal areas in the 18th and 19th centuries, forced the colonial government to set up ‘excluded’ and ‘partially excluded’ areas, where the non-tribals were prohibited or regulated. In these areas, the British favoured indirect rule through local kings or headmen.
If we consider the isolation side of the tribal (i.e. if we believe the tribe society as a separate society) of 1940s we find that they needed protection from traders, moneylenders and Hindu and Christian missionaries, who intended to reduce tribals into detribalised the act of causing tribal people to abandon their customs and adopt urban ways of living landless labour.
The integrationist i.e. the scholars who believed that tribes are just a category of Hindus. On the other hand, argued that tribals were merely backward Hindus, and their problems had to be addressed within the same framework as other backward classes. In these areas, the colonial government exercised indirect rule.
The opposition in these two views had led the Constituent Assembly which as settled along the lines of a compromise advocated welfare schemes that would enable controlled integration. The subsequent scheme such as , Five Year Plans, tribal sub-plans, tribal welfare blocks etc. work for the same.
The basic issue concerning the tribes is that in the process of integration, tribes had neglected their own needs and desires. Integration till now has been done according to the mainstream society for its benefit. In the name of development, their resources are taken away and their communities are shattered.
National Development Versus Tribal Development
The imperatives of ‘development’ has not only governed the attitudes towards tribes but also shaped state policies. The National development taken under the leadership of Nehru focused on the construction of dams, factories and mines. As tribal areas were located in mineral rich forested areas, they were largely affected.
(a) The benefits of development that took place were at the price of the tribal communities who were displaced from their land for the exploitation of minerals and utilisation of land sites for setting up hydroelectric power plants.
(b) The forest land taken away from the tribals were systematically exploited during the British rule and still continue to be exploited.
(c) The coming of private property in land has also adversely affected tribals, whose community-based forms of collective ownership were placed at disadvantage in the new system.
(d) Another problem that development had bought for the tribes is the heavy in-migration of non-tribals. This not only disrupts and overwhelms the tribal communities and culture but also increases their exploitation.
One can find many examples of such disadvantages faced by development.
(i) Most of the costs and benefits flowing from the series of dams being built on the Narmada, disproportionately to different communities and regions.
(ii) The industrial areas of Jharkhand have suffered a dilution of the tribal share of population.
(iii) The North-Eastern states like Tripura had the tribal share of its population halved within a single decade, reducing them to a minority. Similar pressure is being felt by Arunachal Pradesh.
Tribal Identity Today
The forced incorporation of tribes into the mainstream society had impacted the tribal culture, society and economy significantly. Tribal identities are formed by the process of interaction rather than any primordial (original, ancient) characteristics peculiar to tribes. As interaction with the mainstream turned unfavourable to tribal communities, many tribal identities are now based on the ideas of resistance and opposition to the force of non-tribal world. The positive impact of the resistance and opposition were:
(a) Achievement of Statehood for Jharkhand and Chhattisgarh : However, this is not free from problems. These states are still to make complete use of its statehood and the system still leaves the tribal communities powerless.
A similar problem occurs in the North Indian states wherein individuals do not enjoy the civil liberties enjoyed by other citizens of the country. State repression in cases of rebellion paves the way for further rebellions which heavily impacts the economy, culture and society of the North-Eastern states.
(b) Emergence of Educated Middle Class communities among tribal communities with the policies of reservation. The resultant of such an emergence is the creation of an urbanised professional class.
(c) Emergence of Identity Assertions with tribal societies becoming more differentiated, different bases for the assertion of tribal identity are also emerging.
(c) There are two sets of issues that gave rise to tribal rebellion or movements. They are :
- Issues relating to control over economic resources.
- Issues relating to ethnic-cultural identity.
Generally, both these issues go hand in hand but with differentiation of the tribal society they may also diverge. Thus, the reason why middle class tribal people and poor tribal people join tribal movement may differ.
Family and Kinship
Family is where we begin our lives. It is a space of great warmth and care as well as a site of bitter conflicts, injustice and violence. Stories of disputes in family and kinship are as much a part as are stories of compassion, sacrifice and care. The structure of the family can be studied both as a social institution in itself and also in its relationship to other social institutions of a society.
A family in itself can be defined as nuclear or extended. It can be male-headed or female headed and the line of descent can be matrilineal patrilineal. This internal structures of the family represents the other structures of the society, namely political, economic, cultural etc. This implies that any change in the composition and structure of a family are linked to the other spheres of society.
For example, the migration of men from the Himalayan villages leads to the women-headed families or work schedules of young parents in software industry leads to the increasing number of grandparents moving in as care-givers to young grandchildren.
Therefore, the private family is linked to the public spheres such as economic, political, cultural and educational.
Each family, it can be said, has a different structure which undergoes change. Sometimes these changes occur accidently, such as in cases of wars or migration. Sometimes, they are deliberate as can be seen in cases where young people choose their own partners.
It is evident that the kind of changes that take place in the society not only changed the family structure but also the cultural ideas, norms and values.
Nuclear and Extended Family
The term ‘nuclear family’ refers to the family that consists of only one set of parents and their children. On the other hand, an extended family also known as ‘join family’ can take different forms, but typically has more than one family couple with two generations, living together.
The term extended family is often considered to be symptomatic indicative of India. This is not true as extended family is confined to certain sections and regions of community. In fact, the term ‘joint family’ according to IP Desai is not native. The words used for joint family in most Indian languages are just equivalent translations of the English word.
The Diverse Form of the Family
Different societies have diverse family forms. We can understand such societies with regards to different rule.
On the basis of residence:
(i) Matrilocal : In such a society, a newly married couple stays with the women’s parents.
(ii) Patrilocal : In this society, the couple lives with the man’s parents.
On the basis of inheritance:
(i) Matrilineal : This society passes on property from mother to daughter.
(ii) Patrilineal : In this society, there is a property shift from father to son.
A patriarchal family structure exists where the men exercise authority and dominance, matriarchy where the women plays a similar dominant role. It is to be noted here that matriarchy is more theoretical as there is no evidence of such a society. Matrilineal societies do exist where women inherits property but do not control it. For example, the khasi and Jaintia tribes of Meghalaya. | <urn:uuid:93a9733f-f985-4021-b536-4fef00118bbf> | CC-MAIN-2022-33 | https://qforquestions.in/social-institution-continuity-and-change-notes-class-12-cbse/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00497.warc.gz | en | 0.963975 | 6,192 | 3.578125 | 4 |
Do you want to learn how much is 31.37 kg equal to lbs and how to convert 31.37 kg to lbs? You couldn’t have chosen better. You will find in this article everything you need to make kilogram to pound conversion - theoretical and practical too. It is also needed/We also want to emphasize that all this article is dedicated to one number of kilograms - exactly one kilogram. So if you need to learn more about 31.37 kg to pound conversion - read on.
Before we move on to the more practical part - that is 31.37 kg how much lbs calculation - we want to tell you a little bit of theoretical information about these two units - kilograms and pounds. So let’s move on.
We will start with the kilogram. The kilogram is a unit of mass. It is a basic unit in a metric system, that is International System of Units (in short form SI).
Sometimes the kilogram can be written as kilogramme. The symbol of this unit is kg.
Firstly, the definition of a kilogram was formulated in 1795. The kilogram was defined as the mass of one liter of water. This definition was not complicated but hard to use.
Later, in 1889 the kilogram was described using the International Prototype of the Kilogram (in short form IPK). The IPK was made of 90% platinum and 10 % iridium. The IPK was used until 2019, when it was substituted by another definition.
The new definition of the kilogram is based on physical constants, especially Planck constant. Here is the official definition: “The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.62607015×10−34 when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of c and ΔνCs.”
One kilogram is 0.001 tonne. It could be also divided to 100 decagrams and 1000 grams.
You know some facts about kilogram, so now we can move on to the pound. The pound is also a unit of mass. We want to highlight that there are not only one kind of pound. What are we talking about? For example, there are also pound-force. In this article we want to concentrate only on pound-mass.
The pound is in use in the Imperial and United States customary systems of measurements. Naturally, this unit is used also in another systems. The symbol of the pound is lb or “.
The international avoirdupois pound has no descriptive definition. It is defined as 0.45359237 kilograms. One avoirdupois pound is divided into 16 avoirdupois ounces and 7000 grains.
The avoirdupois pound was enforced in the Weights and Measures Act 1963. The definition of this unit was placed in first section of this act: “The yard or the metre shall be the unit of measurement of length and the pound or the kilogram shall be the unit of measurement of mass by reference to which any measurement involving a measurement of length or mass shall be made in the United Kingdom; and- (a) the yard shall be 0.9144 metre exactly; (b) the pound shall be 0.45359237 kilogram exactly.”
Theoretical part is already behind us. In next section we want to tell you how much is 31.37 kg to lbs. Now you know that 31.37 kg = x lbs. So it is time to know the answer. Let’s see:
31.37 kilogram = 69.1590115894 pounds.
This is a correct outcome of how much 31.37 kg to pound. It is possible to also round it off. After rounding off your result is exactly: 31.37 kg = 69.014 lbs.
You learned 31.37 kg is how many lbs, so have a look how many kg 31.37 lbs: 31.37 pound = 0.45359237 kilograms.
Naturally, in this case you can also round off the result. After rounding off your result will be as following: 31.37 lb = 0.45 kgs.
We also want to show you 31.37 kg to how many pounds and 31.37 pound how many kg outcomes in charts. See:
We are going to begin with a chart for how much is 31.37 kg equal to pound.
|Kilograms (kg)||Pounds (lb)||Pounds (lbs) (rounded off to two decimal places)|
|Pounds||Kilograms||Kilograms (rounded off to two decimal places|
Now you learned how many 31.37 kg to lbs and how many kilograms 31.37 pound, so it is time to go to the 31.37 kg to lbs formula.
To convert 31.37 kg to us lbs you need a formula. We will show you two versions of a formula. Let’s begin with the first one:
Number of kilograms * 2.20462262 = the 69.1590115894 outcome in pounds
The first version of a formula give you the most correct result. In some cases even the smallest difference could be considerable. So if you want to get an exact result - this formula will be the best for you/option to convert how many pounds are equivalent to 31.37 kilogram.
So go to the second version of a formula, which also enables calculations to learn how much 31.37 kilogram in pounds.
The another version of a formula is as following, look:
Amount of kilograms * 2.2 = the outcome in pounds
As you see, the second formula is simpler. It can be the best choice if you want to make a conversion of 31.37 kilogram to pounds in easy way, for example, during shopping. You only need to remember that your outcome will be not so exact.
Now we want to learn you how to use these two formulas in practice. But before we will make a conversion of 31.37 kg to lbs we are going to show you easier way to know 31.37 kg to how many lbs totally effortless.
Another way to know what is 31.37 kilogram equal to in pounds is to use 31.37 kg lbs calculator. What is a kg to lb converter?
Converter is an application. It is based on first version of a formula which we showed you above. Thanks to 31.37 kg pound calculator you can easily convert 31.37 kg to lbs. Just enter number of kilograms which you need to convert and click ‘calculate’ button. You will get the result in a second.
So let’s try to convert 31.37 kg into lbs using 31.37 kg vs pound calculator. We entered 31.37 as an amount of kilograms. It is the result: 31.37 kilogram = 69.1590115894 pounds.
As you can see, this 31.37 kg vs lbs calculator is easy to use.
Now we are going to our chief topic - how to convert 31.37 kilograms to pounds on your own.
We will begin 31.37 kilogram equals to how many pounds calculation with the first version of a formula to get the most exact result. A quick reminder of a formula:
Amount of kilograms * 2.20462262 = 69.1590115894 the outcome in pounds
So what need you do to check how many pounds equal to 31.37 kilogram? Just multiply amount of kilograms, this time 31.37, by 2.20462262. It is equal 69.1590115894. So 31.37 kilogram is 69.1590115894.
It is also possible to round it off, for example, to two decimal places. It is equal 2.20. So 31.37 kilogram = 69.0140 pounds.
It is high time for an example from everyday life. Let’s convert 31.37 kg gold in pounds. So 31.37 kg equal to how many lbs? And again - multiply 31.37 by 2.20462262. It gives 69.1590115894. So equivalent of 31.37 kilograms to pounds, if it comes to gold, is exactly 69.1590115894.
In this example you can also round off the result. It is the outcome after rounding off, this time to one decimal place - 31.37 kilogram 69.014 pounds.
Now we are going to examples converted using short formula.
Before we show you an example - a quick reminder of shorter formula:
Number of kilograms * 2.2 = 69.014 the outcome in pounds
So 31.37 kg equal to how much lbs? As in the previous example you need to multiply amount of kilogram, in this case 31.37, by 2.2. Let’s see: 31.37 * 2.2 = 69.014. So 31.37 kilogram is exactly 2.2 pounds.
Let’s make another conversion with use of this formula. Now convert something from everyday life, for instance, 31.37 kg to lbs weight of strawberries.
So calculate - 31.37 kilogram of strawberries * 2.2 = 69.014 pounds of strawberries. So 31.37 kg to pound mass is 69.014.
If you know how much is 31.37 kilogram weight in pounds and are able to calculate it with use of two different versions of a formula, let’s move on. Now we want to show you all outcomes in tables.
We know that outcomes shown in charts are so much clearer for most of you. We understand it, so we gathered all these outcomes in tables for your convenience. Thanks to this you can easily make a comparison 31.37 kg equivalent to lbs results.
Let’s begin with a 31.37 kg equals lbs table for the first formula:
|Kilograms||Pounds||Pounds (after rounding off to two decimal places)|
And now see 31.37 kg equal pound chart for the second version of a formula:
As you can see, after rounding off, when it comes to how much 31.37 kilogram equals pounds, the results are the same. The bigger amount the more significant difference. Keep it in mind when you need to do bigger amount than 31.37 kilograms pounds conversion.
Now you know how to calculate 31.37 kilograms how much pounds but we will show you something more. Are you interested what it is? What do you say about 31.37 kilogram to pounds and ounces calculation?
We will show you how you can calculate it step by step. Begin. How much is 31.37 kg in lbs and oz?
First thing you need to do is multiply number of kilograms, this time 31.37, by 2.20462262. So 31.37 * 2.20462262 = 69.1590115894. One kilogram is equal 2.20462262 pounds.
The integer part is number of pounds. So in this case there are 2 pounds.
To check how much 31.37 kilogram is equal to pounds and ounces you need to multiply fraction part by 16. So multiply 20462262 by 16. It is exactly 327396192 ounces.
So final outcome is exactly 2 pounds and 327396192 ounces. You can also round off ounces, for instance, to two places. Then your result will be equal 2 pounds and 33 ounces.
As you see, conversion 31.37 kilogram in pounds and ounces quite simply.
The last conversion which we will show you is conversion of 31.37 foot pounds to kilograms meters. Both of them are units of work.
To convert it you need another formula. Before we show you this formula, have a look:
Now see a formula:
Number.RandomElement()) of foot pounds * 0.13825495 = the outcome in kilograms meters
So to calculate 31.37 foot pounds to kilograms meters you need to multiply 31.37 by 0.13825495. It is exactly 0.13825495. So 31.37 foot pounds is exactly 0.13825495 kilogram meters.
It is also possible to round off this result, for instance, to two decimal places. Then 31.37 foot pounds will be exactly 0.14 kilogram meters.
We hope that this conversion was as easy as 31.37 kilogram into pounds calculations.
This article was a huge compendium about kilogram, pound and 31.37 kg to lbs in calculation. Due to this calculation you know 31.37 kilogram is equivalent to how many pounds.
We showed you not only how to make a calculation 31.37 kilogram to metric pounds but also two another calculations - to check how many 31.37 kg in pounds and ounces and how many 31.37 foot pounds to kilograms meters.
We showed you also another solution to make 31.37 kilogram how many pounds calculations, that is using 31.37 kg en pound calculator. This will be the best solution for those of you who do not like calculating on your own at all or need to make @baseAmountStr kg how lbs conversions in quicker way.
We hope that now all of you are able to make 31.37 kilogram equal to how many pounds calculation - on your own or with use of our 31.37 kgs to pounds calculator.
So what are you waiting for? Let’s calculate 31.37 kilogram mass to pounds in the way you like.
Do you need to make other than 31.37 kilogram as pounds conversion? For example, for 10 kilograms? Check our other articles! We guarantee that conversions for other numbers of kilograms are so simply as for 31.37 kilogram equal many pounds.
We want to sum up this topic, that is how much is 31.37 kg in pounds , we prepared one more section. Here you can find the most important information about how much is 31.37 kg equal to lbs and how to convert 31.37 kg to lbs . It is down below.
What is the kilogram to pound conversion? To make the kg to lb conversion it is needed to multiply 2 numbers. How does 31.37 kg to pound conversion formula look? . Have a look:
The number of kilograms * 2.20462262 = the result in pounds
So what is the result of the conversion of 31.37 kilogram to pounds? The accurate result is 69.1590115894 lbs.
There is also another way to calculate how much 31.37 kilogram is equal to pounds with second, easier version of the equation. Check it down below.
The number of kilograms * 2.2 = the result in pounds
So now, 31.37 kg equal to how much lbs ? The answer is 69.1590115894 lbs.
How to convert 31.37 kg to lbs in a few seconds? You can also use the 31.37 kg to lbs converter , which will make whole mathematical operation for you and give you an accurate answer .
|31.01 kg to lbs||=||68.36535|
|31.02 kg to lbs||=||68.38739|
|31.03 kg to lbs||=||68.40944|
|31.04 kg to lbs||=||68.43149|
|31.05 kg to lbs||=||68.45353|
|31.06 kg to lbs||=||68.47558|
|31.07 kg to lbs||=||68.49762|
|31.08 kg to lbs||=||68.51967|
|31.09 kg to lbs||=||68.54172|
|31.1 kg to lbs||=||68.56376|
|31.11 kg to lbs||=||68.58581|
|31.12 kg to lbs||=||68.60786|
|31.13 kg to lbs||=||68.62990|
|31.14 kg to lbs||=||68.65195|
|31.15 kg to lbs||=||68.67399|
|31.16 kg to lbs||=||68.69604|
|31.17 kg to lbs||=||68.71809|
|31.18 kg to lbs||=||68.74013|
|31.19 kg to lbs||=||68.76218|
|31.2 kg to lbs||=||68.78423|
|31.21 kg to lbs||=||68.80627|
|31.22 kg to lbs||=||68.82832|
|31.23 kg to lbs||=||68.85036|
|31.24 kg to lbs||=||68.87241|
|31.25 kg to lbs||=||68.89446|
|31.26 kg to lbs||=||68.91650|
|31.27 kg to lbs||=||68.93855|
|31.28 kg to lbs||=||68.96060|
|31.29 kg to lbs||=||68.98264|
|31.3 kg to lbs||=||69.00469|
|31.31 kg to lbs||=||69.02673|
|31.32 kg to lbs||=||69.04878|
|31.33 kg to lbs||=||69.07083|
|31.34 kg to lbs||=||69.09287|
|31.35 kg to lbs||=||69.11492|
|31.36 kg to lbs||=||69.13697|
|31.37 kg to lbs||=||69.15901|
|31.38 kg to lbs||=||69.18106|
|31.39 kg to lbs||=||69.20310|
|31.4 kg to lbs||=||69.22515|
|31.41 kg to lbs||=||69.24720|
|31.42 kg to lbs||=||69.26924|
|31.43 kg to lbs||=||69.29129|
|31.44 kg to lbs||=||69.31334|
|31.45 kg to lbs||=||69.33538|
|31.46 kg to lbs||=||69.35743|
|31.47 kg to lbs||=||69.37947|
|31.48 kg to lbs||=||69.40152|
|31.49 kg to lbs||=||69.42357|
|31.5 kg to lbs||=||69.44561|
|31.51 kg to lbs||=||69.46766|
|31.52 kg to lbs||=||69.48970|
|31.53 kg to lbs||=||69.51175|
|31.54 kg to lbs||=||69.53380|
|31.55 kg to lbs||=||69.55584|
|31.56 kg to lbs||=||69.57789|
|31.57 kg to lbs||=||69.59994|
|31.58 kg to lbs||=||69.62198|
|31.59 kg to lbs||=||69.64403|
|31.6 kg to lbs||=||69.66607|
|31.61 kg to lbs||=||69.68812|
|31.62 kg to lbs||=||69.71017|
|31.63 kg to lbs||=||69.73221|
|31.64 kg to lbs||=||69.75426|
|31.65 kg to lbs||=||69.77631|
|31.66 kg to lbs||=||69.79835|
|31.67 kg to lbs||=||69.82040|
|31.68 kg to lbs||=||69.84244|
|31.69 kg to lbs||=||69.86449|
|31.7 kg to lbs||=||69.88654|
|31.71 kg to lbs||=||69.90858|
|31.72 kg to lbs||=||69.93063|
|31.73 kg to lbs||=||69.95268|
|31.74 kg to lbs||=||69.97472|
|31.75 kg to lbs||=||69.99677|
|31.76 kg to lbs||=||70.01881|
|31.77 kg to lbs||=||70.04086|
|31.78 kg to lbs||=||70.06291|
|31.79 kg to lbs||=||70.08495|
|31.8 kg to lbs||=||70.10700|
|31.81 kg to lbs||=||70.12905|
|31.82 kg to lbs||=||70.15109|
|31.83 kg to lbs||=||70.17314|
|31.84 kg to lbs||=||70.19518|
|31.85 kg to lbs||=||70.21723|
|31.86 kg to lbs||=||70.23928|
|31.87 kg to lbs||=||70.26132|
|31.88 kg to lbs||=||70.28337|
|31.89 kg to lbs||=||70.30542|
|31.9 kg to lbs||=||70.32746|
|31.91 kg to lbs||=||70.34951|
|31.92 kg to lbs||=||70.37155|
|31.93 kg to lbs||=||70.39360|
|31.94 kg to lbs||=||70.41565|
|31.95 kg to lbs||=||70.43769|
|31.96 kg to lbs||=||70.45974|
|31.97 kg to lbs||=||70.48179|
|31.98 kg to lbs||=||70.50383|
|31.99 kg to lbs||=||70.52588|
|32 kg to lbs||=||70.54792| | <urn:uuid:3863e203-3323-4e7f-bfe9-4785c3a48ed4> | CC-MAIN-2022-33 | https://howkgtolbs.com/convert/31.37-kg-to-lbs | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00497.warc.gz | en | 0.884429 | 4,860 | 3.3125 | 3 |
History Of Patentability
The concept of patents is considered to have originated from the Venetian Statute of 1474, issued by the Republic of Venice. Its main aim was to recognize the efforts of inventors who communicated innovative devices to the Republic, and who therefore deserved legal protection against potential infringers. This practice encouraged merchants in Venice to bring out products and processes not known at that time.
In England, the first patent was registered in the name of John of Utynam by King Henry VI in 1449, for a process of making stained glass. (Macqueen 2008)
European Patent Convention
The European Patent Convention (EPC) is a multilateral agreement which provides an autonomous legal system for the grant of patents in all contracting states, governed by the European Patent Organisation. After long negotiations among the contracting states of the European Patent Organisation in 2000, reforms were introduced into the provisions of the original convention of 1973, which was considered antiquated owing to the rapid growth of international law and the procedural flaws of the EPO. Widespread changes were made to the European patent system with the emergence of the European Patent Convention 2000, which came into force on 13 December 2007. (Pierre-André Dubois and Shannon Yavorsky, 2008)
Article 52(1) of EPC 2000 provides that, “European patents shall be granted for any invention, in all fields of technology, provided that they are new, involve an inventive step and are susceptible of industrial application”. (Macqueen 2008)
Exclusions To Patentability – Article 52(2) Of EPC 2000
“The following in particular shall not be regarded as inventions within the meaning of paragraph 1:
(a) discoveries, scientific theories and mathematical methods;
(b) aesthetic creations;
(c) schemes, rules and methods for performing mental acts, playing games or doing business, and programs for computers;
(d) presentations of information.
Paragraph 2 shall exclude the patentability of the subject-matter or activities referred to therein only to the extent to which a European patent application or European patent relates to such subject matter or activities as such”. (Macqueen 2008)
Substitute Protection Of Copyright
Copyright protects the expression of ideas or functions, whereas functionality itself is protected by patents. The provisions of Article 52(2) raise some doubts as to their boundaries. The exclusion in subsection (b) of aesthetic creations, which can be protected by copyright, is justified since it places less burden on the patent system. Applying the same principle to computer programs, however, would be unreasonable for two reasons, namely:
- unlike aesthetic creations, computer programs are functional in nature.
- the nature of the program determines its worth beyond its expressive properties, and only protection of that nature prevents infringers from benefiting from it.
Patent protection is therefore more favourable for a computer program or software than the copyright option. (Attridge, 2001)
The above analysis shows that some inventions falling under the categories of excluded matter can be protected by copyright. For example, inventions related to aesthetic creations deserve copyright protection rather than patents; patenting such inventions would place an unreasonable burden on the patent system. Unlike aesthetic creations, however, computer programs are better protected by patents than by copyright.
Interpretation Of Proviso ‘As Such’ In Paragraph 2 Of Article 52(2) Of European Patent Convention 2000
Many patent applicants, attorneys, examiners and judges face difficulty in interpreting the words "as such" in paragraph 2 of Article 52(2) of EPC 2000. The European Patent Convention excludes 'software as such', a phrase that still seems vague even after numerous discussions by the European Patent Office (EPO) Boards of Appeal over the last two decades. (Bakels, 2009) "The as such qualification is extremely important. It means that only claims which fall squarely within a category of excluded matter will be struck down. Inventions which straddle patentable and non-patentable matter can survive if the technical features meet the other criteria for patentability." (Macqueen 2008). The limitation created by these words restricts patentability for claims falling within the categories of excluded matter enumerated in Article 52(2) of EPC 2000, while directing consideration towards inventions that give evidence of a technical contribution in a technical field.
Let us now examine the phrase "technical contribution in a technical field".
The notion of 'technical contribution in a technical field' is understood as a transformation in the state, operation or function of something tangible, induced by a technical effect. The framers of EPC 2000 failed to settle the patentability of computer-related inventions on the basis of an invention making a technical contribution in its field, and left the provision unchanged, expressing the view that the matter should receive attention at a future Conference. The decision of the Board of Appeal in the Vicom case offers a clear justification for granting patents to computer program inventions on the evidence of a technical contribution in a technical field, rather than on more fundamental issues of patentability. Vicom's claim in its application was for "a method of digitally filtering a data array", but the application was rejected by the Examining Division on the basis of its findings that the claim was, first, a mathematical method and, secondly, a computer program. On Vicom's appeal, the Board of Appeal set aside the decision of the Examining Division and amended Claim 1 to read "A method of digitally processing images in the form of two-dimensional data array". The Board of Appeal construed the revised claim as capable of producing a technical effect, since a digital image is a physical entity, and held that a patent should therefore be granted. An eye-opener delivered by the Board's decision in the Vicom case is the acceptance that "even if the idea underlying an invention can be considered to reside in a mathematical method, a claim directed to a technical process in which such method is used does not seek protection for the mathematical method as such". Basically, it is clear that, even if an invention falls within an excluded category, a claim will be patentable on the ground of making a 'technical contribution' to the state of the art. (Davies, 1998)
The meaning of the "as such" restriction is that any invention which falls within a category of excluded matter will not be considered unless it makes a technical contribution to the art. The judgment in the Vicom case clarified the patenting of computer programs to a great extent on the basis of the technical contribution made by the invention. This decision compensated for the omission of the framers of EPC 2000. It opened the way to patentability for excluded subject matter, subject to the requirement of technical features embedded in the invention.
The Outlook Of The EPO On Examination Of Computer-Related Inventions
The term 'invention' is not clearly defined in the guidelines of the EPC, which relies on the different national patent systems of its Member States. However, the aspect of 'technical character' is apparent in most of the former national patent systems of the present EPC Member States, as follows:
- an invention solves a technical problem;
- the solution of a problem produces a technical effect;
- the characteristics of the invention are technical in nature, thus solving a problem.
The two important aspects to be considered in the examination of computer-related inventions are the "computer program as such" exclusion and "technical character". A claim for a computer program falling within the restricted area of 'as such' will not be considered patentable, irrespective of its content. If the claimed subject-matter possesses a technical character which helps in solving a technical problem, or its features make a technical contribution to the IT field, then it makes its way to patentability. The modus operandi adopted by the EPO in examining the technical contribution made to the art was vague, failing to define what is meant by "technical". It is not convincing to treat the expertise of inventors in the IT field as non-technical. The approach also fails to differentiate between the serious issues of "novelty" and "inventive step", a distinction which could have simplified the procedure for granting patents to computer programs or software in Europe. There are two widely cited cases of the Board of Appeal of the EPO in which the technical character of a software-related invention was approved: "T06/83, IBM Data processor network" and "T1002/92 PETTERSSO/ Queuing system". Finally, in the light of three cases, "T158/88 SIEMENS/Character shape", "T38/86 IBM/Text Processing" and "T204/93 ATT/Generation of Computer Components", the claims were considered to be of a non-technical nature and, therefore, not patentable. After these precedents, negotiations are still ongoing to find the best way of protecting computer programs or software. The fact remains that there are still two barriers: the "as such" proviso excluding computer programs from patentability, and the requirement of a "technical character or effect". (Liesegang, 1999)
This article sets out the criteria to be considered while examining a computer-related invention. It emphasizes two requirements, namely the "as such" qualification and "technical character". It criticizes the EPO's approach to the examination of patents for computer programs on the ground that, in relying on the technical character aspect, the approach fails to define the term "technical" and does not differentiate between "novelty" and "inventive step", a distinction which could clarify some doubts about granting patents. The author of the article finds it difficult to accept that the claims of expert inventors in the IT field lack technical character and thus do not deserve patent protection. In spite of many precedents, debates are still in progress to find the best possible way to protect computer programs or software. Indeed, the "as such" restriction and the "technical character" requirement still block the way to patentability of computer-related inventions.
The Criticism Of The European Economic And Social Committee (ESC) Towards The EPO's Interpretation Of Article 52(2)
In 2002, the European Economic and Social Committee (ESC) criticized the European Patent Office's analysis of Article 52(2) of the European Patent Convention as 'the product of legal casuistry'. The ESC described the definition of the term 'invention' in Article 52(2) as negative, because it excludes several categories of subject-matter from patentability. Clarification is provided in Article 52(3) by the insertion of the proviso 'as such'. The EPO's approach to defining the term 'invention' can be perceived from the Board's decisions in cases of the 1980s such as 'Christian Franceries/Traffic Regulations', 'Stockburger/Coded Distinctive Mark', 'VICOM/Computer-Related Invention', 'Kock & Sterzel/X-ray Apparatus', 'Sternheimer/Harmonic Vibrations' and 'IBM/Computer-Related Invention'. These judgments justified the nature of Article 52(2) by revealing that the existence of 'technical character' would resolve the problem of patenting computer-related inventions, and thus planted the roots for subsequent inventions, leading to healthy growth in this field. In the late 1980s, there were issues between individual Boards and parties, with parties demanding that the Board elucidate the importance of the 'technical character' requirement of Article 52(2) with regard to computer-related inventions. In this phase, two theories were recognised: the 'whole contents approach', in which the subject matter was to be considered as a whole, using technical means to solve a technical problem or produce a technical effect, and the 'contribution approach', in which the subject matter was to be assessed by the result it produced in a field covered by Article 52(2), thus supporting the denial of patents for computer systems. After 1999, the approach of the Board was much clearer and more straightforward than during the previous decade. This can be traced to the Board's adoption of the 'whole contents theory' of technical character to interpret the principles of the EPC, especially Article 52(3), considering the legal aspects along with the political factors associated with it. Pressure was placed on the European states by Article 27(1) of TRIPS with regard to the patentability of inventions in all fields of technology and the liberalization of patent-granting practices in line with other jurisdictions such as the United States of America and Japan, in order to harmonize patent law at the international level. The ESC went on to criticize the EPO's 'whole contents theory' approach, which it rejected for lack of support and reasoning. (Pila, 2005)
Here, the author Justine Pila has described the intense criticism of European Economic and Social Committee (ESC) towards the interpretation of article 52(2) of EPC by the European Patent Office. Certain board’s judgements in 1980’s cases has clarified the position of granting patents to computer programs on the basis of technical character, thus opened the way for forthcoming inventions leading to strong competition in the field. From 1999, the board embraced a new concept of dealing with the “technical character” aspect in the light of legal aspects and political factors connected with it. The TRIPS provision has compelled the Euorpean states to liberalize the patent system in order to match with other jurisdictions, leading to harminization of international patent law.
Uk’s Approach Towards Granting Patents For Computer-Related Inventions
On the verge of 20th century, the EPO’s approach was different from three cases starting with ‘Pension Benefits’ case which was earlier consistent to ‘VICOM’ and it was termed as ‘any hardware’ approach in ‘Macrossan/Aerotel’ case. Those three cases are ‘Pension Benefits’, ‘Hitachi/Auction Method’, ‘Microsoft/Data Transfer with expanded clipboard formats’. The Court of Appeal in UK was stubborn enough to reject the approach of EPO cases and High Court decisions. It felt obliged by following its previous decision in Merrill Lynch and followed the four-part test set out in Macrossan/Aerotel case. The criticism of UK Court of Appeal regarding the European approach can be found in the statement, “European patent judges ought, so far as they can, try to be consistent with one another, particularly in relation to the interpretation of national laws implementing provisions of the EPC”. The UK Court of Appeal believes that the correct approach is that of ‘VICOM’ in EPO and ‘Merrill Lynch’ in the United Kingdom, using the four-part test. ‘Merrill Lynch’ case has created a expectation that the EPO Enlarged Board of Appeal will simplify its approach. There are constant efforts made in harmonizing other patent areas across, whereby patentability of computer program is ignored, thus pushing ahead for future negotiations. (Lees, 2007)
Two years back, there was a benchmark case of ‘AEROTEL’ in the patent industry decided by the UK Court of Appeal. After considering the UK and EPO approach towards the scope of exclusions to patentability in article 52 of EPC, Jacob L.J devised the following four-step test.
- properly construe the claim;
- identify the actual contribution;
- ask whether it falls solely within the excluded subject matter;
- check whether the actual or alleged contribution is actually technical in nature.
This test was not consented by the Board of Appeal of EPO, hereby criticising it as ‘not consistent with a good-faith interpretation of EPC’. (Sharp, 2009)
In December 2006, the Gowers Review of Intellectual Property proposed that the UK should review its guidelines of not granting patents beyond its current limits, inspite of prevalent patenting practise of computer-related inventions all over Europe. (Macqueen 2008)
There was conflict of interests between the Intellectual Patent Office (IPO) and the patent applicants in succeding cases with regards to the application of the third and fourth stage of test recognized in the Aerotel case. The IPO elucidated that if any claim falls into the third stage of excluded matter, than the question of “technicality” does not arise. In Symbian case, the patent application was rejected by IPO on the ground that it was a computer program, which fall under the excluded matter of the act. When this issue reached the doors of High Court, the learned judge took different approach than IPO. It interpreted the reading of Aerotel approach that it does not involve completely seperate stages of analysis. It was more keen to discover whether an invention falling in excluded matter category makes a relevant technical contribution to the art. The High Court found that the claimed invention solves a technical problem within the computer, as a result it improves the realibility of the computer. Even the Court of Appeal consented to the decision of High Court, which simplified the UK law on the issue of patenting software. The concept of technicality is still not clear and specific, thus forms uncertainty and subject of criticism. (Mauny, 2009)
Cross Border Patent Systems
Patenting computer-related inventions is the issue making headlines and topic of discussion in many countries. The pressure has been mounting in order to furnish enough protection to the fruitful efforts and monetary investments of inventors in software market at both national and international level, because of the speedy growth in the field of Information Technology and increasing popularity of computer software. The inefficiency of World Intellectual Property Organization and Universal Copyright Convention to harmonize the Intellectual Property Law in global market can be marked by the conflict of approach by many nations towards the patentability of software at international level. The present condition of protecting software, through either copyright or patent protection laws, is in the state of disorderedness throughout the world. Many developed countries in the field of computer software like United States, West Germany, France, Japan and Canada are experiencing difficulties to unify the patent-granting laws for computer software.
United States :
The act governing the granting of patents is the Patent Act 1952, Title 35 USC. Section 101 of the above act states:
“Whoever intents or discovers any new and useful process, machine manufacture or composition of matter, or any new or useful improvement thereof, may obtain a patent therefore, subject to conditions and requirements to this title”.
Under the US Patent Act, computer program or software was not specifically precluded. The US Judiciary has the freedom to interpret the above section pertaining to granting patents to new technologies, bearing in mind the established principles. In Gottschalk v Benson, the US Supreme Court declared that it does not intend to exclude computer program from the patent system. For the first time in 1980s, the Supreme Court regarded a computer process as a statutory subject matter under Section 101 of 35 USC in a widely cited case of Diamond v Diehr. To ease the patent system in US, the Supreme Court precisely illustrated the exceptions to patentable inventions that are divided into three basic groups of subject matter. They are laws of nature, natural phenomena and abstract ideas. Unlike EPO approach, the US Supreme Court has stressed upon the significance of considering the invention as a whole in determining the eligibility of the claimed invention patent protection under Section 101 35 USC. The Federal Circuit in AT&T Corp. v Excel Communications Inc, extended the interpretation of Section 101 that it should be ascertained for all types of claims in the same way regardless of that claim being a machine or a process. The main criteria to be considered in a patentable subject matter are ‘useful, concrete and tangible result’ aspect. The initial case law of 1990 demonstrated the patentability test for software inventions in US courts which considers the computer made for special use, capable of performing particular functions or manipulate data to achieve a practical application. At the end of this decade, this theory was modified post State Street and AT&T case stating, patents shall be granted to software-related inventions if it produces useful, concrete and tangible result. (Park, 2005)
The definition of patentable invention under the Japanese Patent Law is as follows;
“a highly advanced creation of technical ideas utilising natural laws”.
Therefore, the exceptions to patentability of computer-related inventions can be construed from the definition itself as ideas involving absolute use of laws other than natural ones, like economic principles, arbitrary arrangements, mathematical methods or human mental activities. As compared to EPO and US approach in the field of software patents, the Japanese Patent system has prescribed examination guidelines instead of case laws to determine eligibility of software inventions to patentability. The 1993 Guidelines regarded a computer-related invention to be patentable only if its program performs certain functions with the help of hardware resources within or outside the computer. Like US, JPO follows the same principle of examination by considering the invention as a whole. In the year 2000, the guidelines were revised, which awards computer-related invention the position of ‘statutory invention’ and managed as ‘invention of a product’.
“In the Examination Guidelines of JPO dealing with patentability of computer-related inventions, there are some examples which exemplify the claims that utilise hardware resources. The following functions enable a claim to be patentable:
- a control function for apparatuses such as rice cookers, washing machines, hard disk drives or engines;
- information processing based on the physical or technical properties of an object such as the number of revolutions of an engine or rolling temperature”.
After contrasting major three patent offices, it is induced that every office has a different view-point to ascertain the scope of the requirement ‘technical nature’ in the matter of extending patent rights for computer programs or software. The USPTO avoids technical aspect of the invention and mainly concentrates on the tangible result, adopting broad modus operandi for a software patent. In US, the only requirement for promoting software inventions to patentability is ‘useful, concrete and tangible result’, whereas in Europe, the software invention is required to draw out the technical features for successful grant of patent under the provisions of EPC. Regardless of amendments and the precedents, the EPO’s approach is still narrow with respect to software patents. It is obvious that the chances of European Patent passing the US examination procedure, is more feasible than vice versa. The JPO appears to have adopted a middle path. The Japanese point of view is similar to that of EPO, but not so intense. (Park, 2005)
The Need Of Harmonization
There is an urgent need of certainty in the approach of different patent systems granting patents to computer programs for the benefit of both, applicants and third parties. At present, there is different approach adopted by different jurisdictions, thus increasing uncertainty in the field and affecting the economy. If this situation continues, then the inventors would not be motivated to come up with new inventions or ideas which will in turn degrade the technology and growth of IT field. Due to negotiations with EPO, the UK Intellectual Office has started following the EPO’s approach in relation to patentability of computer programs or software, in order to harmonize the law. Following the recent trends, other European states have tried to change their patent laws in order to merge with the Global Patent system. (Wallis, 2009)
Patentability of computer programs or software is one of the hot topic in the IPR field around the world. Every nation attempts to simplify the patent laws towards computer-related inventions based on two factors, “as such” provision and majorly “technical contribution to the art”. The conflict of interests between many jurisdictions still exists with regard to patenting computer programs. It has increased the rate of uncertainty in the IT field which is booming gradually. After analysing all the above factors, I conclude that global patent system for computer programs or software should be harmonized by all jurisdictions in the best interests of inventors, third parties, Patent Offices and majorly for the World economy. This is the best way to fight the problems for granting patents in modern world.
Attridge, D. J. (2001). Challenging claims! Patenting computer programs in Europe and the USA. Intellectual Property Quarterly .
Bakels, R. B. (2009). Software patentability: what are the right questions? European Intellectual Property Review , 1.
Davies, S. (1998). Computer program claims: the final frontier for software inventions. European Intellectual Property Review , 2-3.
Macqueen, H (et-al) (2008). Contemporary Intellectual Property . New York: Oxford University Press Inc.. 360, 408, 410, 421, .
Lees, W. C. (2007). Test clarified for UK software and business method patents: but what about the EPO. European Intellectual Property Review , 2-5.
Liesegang, E. (1999). Software patents in Europe. Computer and Telecommunications Law Review , 2-5.
Mauny, C. D. (2009). Court of Appeal clarifies patenting of computer programs. European Intellectual Property Review , 2-6.
Park, J. (2005). Has patentable subject matter been expanded?- A comparative study on software patent practices in the European Patent Office, the United States Patent and Trademark Office and the Japanese Patent Office. International Journal of Law & Information Technology , 8-20.
Pierre-André Dubois and Shannon Yavorsky, K. &. (2008). IP in the life sciences industries 2008. The European Patent Convention and the London Agreement , p. 27.
Pila, J. (2005). Dispute over the meaning of “invention” in Art.52(2) EPC – the patentability of computer-implemented inventions in Europe. International Review of Intellectual Property and Competition Law , 1-6.
Sharp, D. W. (2009). Patents: patentability of computer programs. European Intellectual Property Review , 2-3.
Wallis, H. (2009). Patentability of computer-implemented inventions: the changing landscape in 2008. Communications Law , 5. | <urn:uuid:77f341aa-998e-4e16-afc8-99df716658dd> | CC-MAIN-2022-33 | http://paper-market.com/free-essays/patentability-of-computer-programs-or-software/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00495.warc.gz | en | 0.928294 | 5,564 | 3.359375 | 3 |
A scandifuturist art creation myth
Studying Nordic folklore, one gets the sense that the performing arts were communicated and taught by dark, subterranean powers. The recent ancestors of contemporary Scandinavians lived in a world where the devil was a fiddler and the malicious water spirit known as nøkken, or the nix in English, could be heard playing sweet and seductive jigs from waterfalls and streams. It was said that he offered apprenticeships to those who dared bring him a sacrificial meal. But these entities do not represent creative independence and freedom without compromise: The devil is unable to perform his devilish deeds single-handedly – he is powerless without the initial consent of either god or man – and the nix possesses, like most goblins, wights and trolls, a murderously ill disposition towards mankind. Trolls and their ilk are not known for their innovation, and are in fact utterly passive creatures that must be coaxed or driven to action. Then, one might ask, what do wights and devils have to offer us? The answer is nature. They illustrate that man in one way or another must approach and confront nature if he is to realize culture. And since nature is rather suspicious, poisonous, capricious, etc., it is represented by such clandestine, anti-cultural agents.
Since trolls first and foremost are beings of nature, they are not motivated by cultural concerns. But though they are anti-cultural, they don’t thereby exist in a culture-less vacuum. Nature and culture reside in a mutually destructive relationship to each other, and one could say that the culture of the troll, as it were, is a reactionary necessity. They coil around one another. Norse poetic theory reveals that the shape-shifting nix was originally perceived as a mutant: a creature that was half one thing, half something else, as addressed in Kunstforum 1/2017. The medieval Icelandic poet and chronicler Snorri Sturlusson thus referred to the aesthetic ideal of pagan poetry as nýkrat: “nixy”, because its metaphors were constructed out of opposed elements – poetic, anti-naturalist mutants.
The perception of art in Nordic folk tradition up until the industrial revolution – the era Norwegians refer to as Det store hamskiftet (literally “The Great Shape-shift”) – may be considered an off-shoot of one we see even in Old Norse and Viking Age sources, and can still be traced in language today. This might seem like a bold statement. But languages reveal metaphors and deep psychological concepts and ideas that are often difficult to identify directly, but can be unveiled in etymology and euphemisms.
We usually apply negative connotations to the word “darkness”. To most of us, these lean towards uncomfortable, more or less anxiety-provoking subjects. Many of us are afraid of the dark, but darkness is also associated with seductive moods, instincts and subconscious pulls. The Norse realm of the dead, Hel, has the same etymological root as huldra, a seductive and dangerous subterranean spirit in Nordic folklore. Both words mean “the hidden”. It is precisely to the blackest underworld that gods and men alike must journey to retrieve knowledge and inspiration in Norse mythology.
“That trolls dwell in men is a fact known by all who have an eye for such matters,” wrote Jonas Lie in the introduction to his anthology of supernatural stories, Trold (“Trolls”) in 1891. To whichever end we may ascribe human personality traits to wights and trolls, it will more often than not appeal to our worst natures. The things we would rather hide. Greed, laziness, envy, exploitation or seduction. Any behavior Christianity considers sinful, comes naturally to the troll. Pursuing these metaphors, we may begin to discuss subterranean characteristics. The subterranean is where the trollish has its roots. The trollish doesn’t necessarily reside in the underworld itself, but relates to it much like the Sicilian mafia does to America. And nature is trollish in itself. Thus we may consider trollish personality traits, deeds, impulses, and patterns of thought. And, not least, we may consider trollish aesthetics. A trollish paradigm, not only for understanding art, but also mankind’s masochist struggle between order and chaos, nature and un-nature.
But before we can approach such a “trollish theory of art”, we must return to the beginning. As this is a theory that is best described in a metaphorical light, it is necessary that we spend part of the article within the confines of a mythological Scandifuturist reality, where laughable, eye-rolling terms such as troll will be used with total sincerity.
The Pre-Cosmic Room
In the beginning was Ginnungagap. The room without doors, windows, floor or ceiling. There was no art on the walls. In fact there were no walls either, but the room had two corners: one was hot and dry, the other cold and humid. In the middle of the room a poison fumed, dripping from the cold humidity. Out of this poison sprang a cow, and then a creature with arms and legs: Ymir, “He-Who-Hums”. Without kin, and of uncertain sex. The tooting primeval giant was not long alone, for new beings oozed from his sweating pores, and they looked a lot like himself. They were much smaller than Ymir, though small is probably a misnomer in this scale. His limbs and members fucked each other, and became with child. Children sprung forth in increasing numbers. Some had seven arms, others three legs, nine hundred heads, cauliflower ears, pig-eyes, and concave, inverted faces.
One day, a dewy rock came hovering. A salt block, cold and humid, levitated through the room without walls, through a window that did not yet exist. The cow began to lick away at the salty boulder, and I’ll be a monkey’s uncle if a new creature didn’t pop out of that too. A practically perfect and symmetrical being – a god. Búri, “He-Who-Shapes”. He settled down among Ymir’s kin, called the jötnar (singular jötunn), or giants. He got laid too, and from this line a new people sprung called the æsir, the gods. Odin, which somewhat simplified means “mind” or “ecstasy”, lived there with his brothers Vili and Vé in the æsir diaspora, who, despite some genetic relation, had little in common with their trollish neighbors.
Now it was getting hard to breathe in there. It was getting crowded. There were wights of all kinds. Ogres, trolls great and small, and buckets of jötnar. Grotesque critters who cried and chuckled, scratched, yanked, and bit each others’ tails. They shrieked and farted with rivalry in an eternal cacophony without beginning nor end. Yet, for the most part they were drowsy and did nothing at all. They merely sat there and trembled, so hard that they began cracking at the touch, while Ymir did their eardrums in with his incessant tooting. His atonal shriek was incomprehensible. It was so cramped and loud that one could hardly even hear oneself think. All was white noise and entwined, asymmetrical anatomies in this massive room of no ambition.
Then one day the gods decided they’d had it with this damn mess. They’d had it with time and space without direction, and so they killed Ymir and ripped him apart, piece by piece, limb by limb, so blood and guts spattered in all directions. That was the day that culture came to Ginnungagap. The jötnar clung for their lives to each hair of the primeval giant’s convulsing hulk of a body, members and genitals contorted in postmortem spasm. Severed feet and elbows stampeded across Ymir’s own carcass, without even a floor to fall on. Ginnungagap was filled with billions of ogre cries, both ear-deafening and faint, as their ancestral father trampled them all into oblivion. Their lifeless bodies flushed into the toilet of eternity by the blood that spilled out onto the anti-floor, flooding the pre-cosmic room. But the gods kept their cool. They squeezed the juice out, weighed the body down, tugged and broke it apart until a landscape revealed itself. The flesh was torn from the bones, standing tall as mountains. They fashioned the sky from the skull of the ancient giant and exhibited it in the heavens. Thus the world was created; an assemblage of bones and innards.
New life, new floras emerged. First natural, then artificial life. The few wights, ogres and trolls that survived the bloody deluge sought shelter in glens, caves, under bluffs, or in the deepest recesses of the earth. The gods engineered creatures of their own to toil in the various cosmological tasks. Once they had separated night and day, land and sea, they dug ditches, timbered houses, tilled the soil, ate food, drank drink, played games, and recited verse. But this was not enough, for the gods had much to do, and were too busy to populate the earth – the very frontline of the eternal war of gods and giants. They created man and woman, dull and impotent beta-copies of themselves. The humans impersonated the gods, but were ultimately fragile and unimpressive. The fact remained that their dependency, as well as their tendency to die like flies, was seen as a benefit by the divines, who desired no competition.
Mankind was separated from wild animals by virtue of their intelligence, consciousness, and self-destructive neurosis. We stood lodged in the middle between over-, middle-, and underworldly powers. While we descended from gods, we lived among trolls. It was inevitable that troll’s blood, too, came to pump through our veins.
We live in a world of time and change, where nothing stands still. The polar opposite of the pre-cosmic space. Quite unlike the world of ideal forms postulated by Plato, there were no archetypes in Ginnungagap that defined the ideal shape of a fish, or differentiated it from a chair. It demands that the pre-cosmic state is described through allegory and negation. Nonetheless, order would never occur if chaos did not allow it. We live in a world pretending to cater to humanity, though it exhibits cold animosity against us at every turn. That is why we saw the need to move indoors, assemble in cities and villages, make fences, till the soil and lay tarmac.
To the gods, art was the ultimate goal – to fashion a completely artificial world. But we don’t live in such a world. Art, in one sense, is the opposite of nature. The troll, whether it lives in so-called nature or within ourselves, forces itself to the surface as often as it can. There was a time when they lay with broken backs, in the mythological golden age, but that was long ago. Man was there when the troll rose again.
They want back, not to a world of yesterday, but the day before yesterday—to the primordial, boundless gap. They have not forgotten Ymir. Potent and vitalistic culture is microbial compared to the natural universe. In the Norse sagas, chaos and disorder are symbolically depicted as natural landscapes, uncultivated land and forests. People cannot live there. They must chop it down and make order, or die. The founding of any society relies on access to resources. We cannot get by without raw materials.
Apocalypse sooner or later
If we take the mythical timeline seriously, then the Eddas seem to imply that nothing is sustainable. The gods realize that it’s not a question of whether or not the world will end, but for how long they can stall the inevitable. A wholly different world had to be destroyed for our world to be created.
Ragnarok, the end of the world, is the conclusion of a spiral of violence, an endemic conflict that has raged since the world began. It is revealing that the first creative act came in the form of a murder. In Viking society, the extended family suffered collective blame if one member was found guilty of murder. You could scold your relative and damn them to hell, but you couldn’t flee your blood ties. And here you might have thought that your family was a hassle to deal with. Still, your hands are also tied to culture. We are liable for it, and it provides us with a safety-net which guarantees a certain level of physiological, mental, and social health, which is the opposite of nature – and I mean the raw, non-recreational nature – and her promise of permanent, inhumane stress.
All culture is an enemy of nature by implication. The need to create art is a need to defeat nature, and to say that cultivation is necessary. That nature is not enough. Art insists on comment. Trolls don’t. But the tension between the artificial and the organic is crucial, not only for art to exist, but for the world to exist.
The end is foreshadowed in the mythology as well as the third law of thermodynamics. We expect that entropy will make the universe uninhabitable in a few trillion years. Not that we need to worry about that – we are already well on our way to turn Earth into Swiss cheese ourselves. A tug of war also rages within nature itself, between competing forces with no relation to human activities. All is poisoned by energy, competition and the transfer of power. Opposing pairs rub against one another, nothing stands still.
The gods must frequently travel beneath the earth and visit the jötnar (who for simplicity’s sake are often called giants, though this is inaccurate). They never create anything on their own, but passively accumulate artifacts and stray seeds of culture. The interest trolls and jötnar have in these items, is measured only in their value as weapons that could be used against us, to settle the score and make primeval forces great again. The jötunn – raw, unfettered, anti-cultural nature – hate our guts. They corrode culture. But in actuality, nature’s hunger retaliates against the pain it suffers at the hands of culture. They eat each other.
Language reveals which team humanity plays for. Jötunn comes from the Proto-Indo-European root *h₁ed-, which means to eat. Nature is ferocious and insatiable, seen through the eyes of culture. Right back at ya, says nature. When cities are evacuated, they get overgrown. Nature bites and gnaws itself in. But it can be seen from the other side, too: culture chomps away at nature. Developed land, whether we are thinking about cities or farmland, is a kind of gentrification of the landscape. An agricultural field is wilderness subdued, a slave, of sorts, to culture. Crops and livestock live in a symbiotic relationship with people within the confines of culture. People tend to them and protect them, while taking nourishment back from them. When humans are gone, gardens collapse and crops are overrun by weeds. The opposite is true in the case of certain GMOs, which overrun naturally occurring plants unless kept in check, and are as difficult to control as any weed.
The Eddic poem Völuspá describes a golden age where the corn sowed itself, which is what wild plants do to survive. The ideal is an artifice that imitates nature, but is segregated and self-maintaining. It would seem then, that culture and nature have things in common, like the æsir and the jötnar do. Their mutual relationship is reactionary in the sense that the giants are forced to reinvent their own tactics when faced with the gods. It becomes a sort of dance where culture assumes the same organic methodology as the nature from whence it sprung, though culture is a rebellion against nature.
Into the Tame
Some have this idea that, in the past, people lived close to nature. They imagine that pagan Scandinavians practiced a so-called nature religion. I don’t think the pagans would recognize such a description of themselves.
The city is an expression of the same desire that the gods expressed when they kicked Ymir straight in the eye, and hung the veil between order and chaos. Though the experience can be very different, there is no essential distinction between building a farmyard and building a city, or between living in a turf hut or living in a skyscraper. Mega-cities are extreme expressions of the same impulse that drives mankind to clear forests. The weed of culture takes root where it can. If the gods could, they would free themselves entirely from nature. Mankind would live in sterile, white cube gallery-like dwellings, drink and play day and night, and get all their nourishment from obedient GMOs. There would certainly be no need for the gods to travel far and wide in order to beat the crap out of trolls. The transdivine city hovers above, without ever touching the ground.
We find the same contempt for nature in transhumanism, which under no circumstance accepts its place among trolls and gods. Humans are like the gods by simply being themselves, while transhumanists strive to make the gods obsolete, to be transdivine. They wish to complete what the gods started. Is there a trollish alternative? Antinatalism, perhaps, promoted by such philosophers as the Norwegian Peter Wessel Zapffe and the American Thomas Ligotti. Maybe there is a voice from within the woods, or within ourselves. A kind of troll voice, an ogre mafia that wants us to fail.
Cities are in themselves artificial, and this particularly goes for the big modern ones. By the steps of a subway station in the middle of Manhattan, there is a wall covered in a mosaic imitating the face of an overgrown cliff. The frieze is superimposed by a quote, and though the author is not named, it’s Carl Gustav Jung: “Nature must not win the game.” It is subverted by a green, tiled section of simulated overgrowth, but continues across the gate, where the stairs lead to one of the platforms: “But she cannot lose.” The frieze, called Under Bryant Park, was created by the artist Samm Kunce in 2002. It quickly occurred to me that this complicated relationship between art and nature is a recurring theme in much of the newer public art to be found in Manhattan. This is because the island’s last naturally occurring green spots have long since been paved over, no doubt. A person could probably spend their entire life in New York without ever touching a tree that wasn’t put there by human hands.
The landscape inspired art of the heavily gentrified Meatpacking District pays witness to similar ideas, where industrial chic sculptures in halfway smooth and processed, halfway roughly quarried natural stone are everywhere to be found. You’ve probably seen it before. This part of the city, by the way, houses one of New York’s most carefully meditated parks, a platform called The High Line, an old railway track converted into a recreational area for the city’s many nature deprived hip and wealthy. Among other things, they’ve planted a forest in the middle of the tracks, which also serves as a sculpture park. These days, The High Line is the site of an art exhibit called Mutations, which, to nobody’s surprise, explores points of intersection between industry and ecology.
To plant a suspended forest in the middle of the asphalt jungle initially seemed a bit too urban an idea for a Norwegian such as myself. Then again, Norwegians will pave roads through the woods and call it outdoorsmanship. It’s not too different. The expression reflects a wish to display a neutered version of nature, a safe and good nature that we may control. One that does not unsettle us. We believe that trolls can be contained, but trolls always find an expression.
The Thing Among Us, The Thing Within Us
In fairy tales, trolls tend to be procrastinating and ineffective types, but if awakened they are quite unmanageable. The Norwegian word vette (wight) is a euphemism for different trollish entities – beliefs of old dictated that if something was called by its true name, it might just pop you a visit. The word has its root in Proto-Germanic *wihtiz, mening “thing” or “being, essence”. The troll is one of the base ingredients of humanity. We don’t need to “become as the gods” – we already are: There is no part of us that is not the fault of the gods, and for most of us, not being a troll is struggle enough. But the trolls live inside and around us, everywhere. If we presume the existence of trollish traits and characteristics, then there must also be people who fulfill more or less trollish functions, or simply are trolls in terms of behavior. In day-to-day speech, trolls are perverted, devilish individuals who turn to the internet to satisfy their schadenfreude.
Let us assume that this is an impulse that dwells in all human beings, which could rise to the surface with certain techniques, or a peculiar sensitivity, or dispositions that are particularly strong in certain individuals. That emerge as self-dehumanizing patterns of behavior, and in worst case scenarios result in the total breakdown of social sensitivity, and a general disregard for the rules of interpersonal relations. That people, either through traumatic events or conscious choice, may renounce their humanity, surrender to their inner troll, and become a cultural antithesis – a process that will almost certainly result in bad hygiene, faltering physical and mental health, neurosis, self-loathing, hubris, distrust, superstition, or any other inability to nurture relationships with other people, not to say behavior that evokes all forms of both sympathy and disgust. It’s no wonder that the troll is often used is a metaphor to describe unreliable and frankly terrible people in Norse literature. At the same time, there is something alluring about the idea of the troll. Just as often, the troll will possess hidden knowledge, secret tricks, and transcendent ploys. Trolls are, for better or worse, with all their twisted cynicism, creatures who see the world with X-ray glasses, and reality for what it is. Though blind with rage as they can be, trolls have few illusions.
In the rustic vernacular, Scandinavians can sometimes be heard talking about the property of having a so-called troll splinter in the eye. The troll splinter is a trait that causes people to see the world in ways that other people do not. Above all, it leads to antisocial tendencies in those of us who have it, as we tend to think exceedingly bad thoughts about our fellow humans. It also causes a novel distortion in the perception of reality, and I believe many artists and authors suffer from this affliction. At the very least it’s clear that many artists have personalities and behavioral traits with trollish qualities. Theodor Kittelsen understood trolls, but there is no need to resort to those artists who use trolls as a motif to illustrate the point. “Degenerate art” was trollish, Dada was trollish. Edvard Munch, Egon Schiele, and Vincent van Gogh painted using a troll brush. M. C. Escher’s illusory motifs take a æsian form, but are philosophically trollish. Marcel Duchamp’s Fountain is a petrified troll. Norwegian author Olav H. Hauge was haunted by trolls, and the contemporary Norwegian poet Erlend Nødtvedt succeeds in making trolls his subject while also showing an understanding of their inner life. The characters in the novels of Thure Erik Lund are fully absorbed by their inner troll.
Trollish, Titanic, Thursian
If you’re vaguely familiar with Nietzsche, you might recall the Apollonian and the Dionysian, and how each of these terms make for aesthetic extremities of their own. Yes, one might as well say that each of them refer to two different – partially opposite, partially complementary – ways of perceiving and being in the world. The stoic, measured and lofty belongs to Apollo, who is counterpart to the ecstatic, spontaneous and promiscuous – traits we associate with Dionysus.
The problem is that these two categories of being share the same Olympian origins, a heavenly root. They lack the total subterranean dimension of being. Therefore I argue, subjectively and with trollish bias, that Greco-Romanesque interpretative frameworks may be too singular in how they place mankind up against the landscape in which it exists. The titan, the pure raw material, is totally lacking in Apollonian-Dionysian aesthetic theory.
In the Nordic worldview, the Apollonian-Dionysian dichotomy is not represented by two essentially different gods. They are dynamic traits found in the one and the same being. Odin, for example, is both the wise and measured chieftain of the gods – yet simultaneously he is also the transgressive bum. The king and the fool, the sage and the madman, are not necessarily mutually divided, atomized cells, though this is what the Greco-Roman paradigm is trying to sell us. Likewise, man carries properties inherent in both culture and nature, both æsir and jötunn, with the noteworthy extreme that nature within Nordic thought was a perilous and hellish place. In spite of all our dependence on it, nature was to be watched with suspicion and at a distance within conventional, polite culture. Nonetheless, culture requires that some of us see beyond this and, like the Ash Lad of Nordic folklore, makes friends with ogres and trolls. A dose of trolldom complements and informs artifice. A trollish gaze, or at the very least recognition of the trollish phenomenon, is a prerequisite of much great art.
The most deprived and malicious aspect of trolldom is, however, the thursian, the ogrish, which is antithetical to human artifice in every form. The gaze that perceives the world with a troll splinter in its eye is critical and serpentine, but not necessarily pathological and destructive. The thursian, on the other hand, is always pathological, and in any case indecent. It pisses all over Rousseau’s social contract. Victor, the wild boy of Aveyron, who tumbled out of the French woods around the year 1800, is a fabulous example of what might happen when a human being grows up in a thursian environment. He would often walk on all fours, refuse to wear clothes, defecate wherever and whenever, bite people, and masturbate whenever he pleased. He never learned to speak, but it goes without saying, his actions speak louder than any of the 300,000 or so words contained in the Norwegian language.
As opposed to what we conventionally consider well-adjusted people, Victor survived just fine in nature, on his own – though he could hardly even open a door. The rest of us are no more at home in the woods than a piece of Styrofoam or a shopping bag. The golden calf we call outdoorsmanship is related to expeditions to the North Pole, ascending the K2, or launching human beings into space. They are variant expressions of the narrative of the cosmic journey, a divine ordeal made possible in part through the confrontation of demonic beings. Such expeditions are impossible to survive without synthetic aid— in nature-phobic bubbles that defend us from hypothermia, asphyxiation, or insanity. To live at peace with nature, one must fully and wholly be a troll. Or you may die, and nature would be just as pleased; the roots of the forest lie in the earth beneath.
The underworld is always closer to nature than the over-worldly. When Norse mythology places the origins of art in the nether regions of the cosmic z-axis, it tells a story of latent potential. It is the fate of art that it must rise up and spread out. It associates with the frequently used metaphor of the cultural underground, thought to contain undiscovered talents – often of a particularly transgressive, unbound kind.
Eirik Storesund, Among machine elves and spaghetti beasts: Cognitive aspects in pre-Christian art
The Eddas are the main sources of Norse mythology.
A philosophical stance asserting that human reproduction is immoral.
In H. C. Andersen’s The Snow Queen (1844) the devil, a celestial troll, fashions a mirror that distorts anything reflected in it. While attempting to bring it to the realm of the gods in order to ridicule them, the mirrors shakes so hard with laughter that it is dropped down to earth, shattering into countless, tiny pieces.
In Norwegian folk tales, Askeladden is the small man who succeeds where all others fail . | <urn:uuid:3e6a8e8e-c55d-4779-be1b-83773db5470c> | CC-MAIN-2022-33 | https://kunstforum.as/2017/12/the-trollish-theory-of-art/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571502.25/warc/CC-MAIN-20220811194507-20220811224507-00096.warc.gz | en | 0.960805 | 6,219 | 2.640625 | 3 |
Early Inhabitants/Explorers - By Ronald E. Johnson
Many of today's routes were originally just animal tracks worn down by buffalo, deer, and bear taking the path of least resistance. These rough trails were then used by the Cherokee for centuries as trading and war paths. The earliest white men to use the trail were soldiers, explorers, hunters, and trappers moving into dangerous territory in the 1700s. Many of the old Cherokee trails actually had natural markers, trees that were bent over by the Indians to point the way. These were called "bent", "yoke", or "marker" trees. On some of the major trails the Cherokee reportedly covered as much as 100 miles a day.
Chesquah, a Cherokee Indian born circa 1773, recalled seeing large herds of buffalo grazing in what is now Robbinsville circa 1789. He lived at the junction of East and West Buffalo Creeks and the Cheoah River, today on Lake Santeetlah. Chesquah, who died circa 1880 and was buried at Ground Squirrel Gap, is said to have followed the last herd of buffalo heading west across Hooper Bald. A photo of Chesquah is very similar to one said to be of Nathan Kirkland. It was not unusual for Indians to also take a white man's name.
In later years settlers used a steer or "cow-brute" to find the easiest passage on lands where animal trails had not yet been established. They would head the steer in the general direction they wished to travel, follow it, and place stakes along the future path.
A number of historical figures explored close to the Appalachians, if not actually crossing into present day Tennessee. The earliest was Hernando DeSoto (c1500-1542), who came searching for gold in 1540 and, according to one expert, crossed the mountains before heading westward to the Mississippi River. Spanish explorer Juan Pardo came 27 years later in 1567, also looking for gold. He established a fort near present day Morganton, North Carolina before heading west, stopping near Asheville before making his way through the Great Smoky Mountains of today.
By the late 1500s England, seeing the French claims in the north and Spanish claims in the south, decided to resurrect its interest in colonizing the Virginia/North Carolina coast. It was not until the 1640s that Governor William Berkeley heard rumors of "a huge mountaine within five days journey, and at the foot thereof great Rivers that run into a Great Seas, to which people come in ships". It was more than twenty years before Berkeley took action in searching for this supposed passage to India. He sent German physician John Lederer westward to the slopes of the Blue Ridge. He did not cross the mountains when told that bearded white men were ahead, whom he took to be enemy Spaniards. Lederer returned to Virginia.
The next to confront the rugged Appalachian Mountains were Gabriel Arthur and James Needham (?-1673), a young English physician who crossed at one of three locations to meet with the Cherokee at Chota on the Little Tennessee River to establish trade. The routes were from Boone to the Watauga River; from Canton crossing at Swannanoa Gap; or following the French Broad and Pigeon Rivers to the Over the Hill Towns. This more southerly route could have led to crossing into Tennessee somewhere near present day Deals Gap or possibly the Unicoi Gap farther to the south.
Arthur remained at Chota while Needham returned to Virginia. En route Needham and his guide "Indian John" had an argument that resulted in the death of Needham. Arthur was accepted by the Cherokee and in Indian dress accompanied them on a raiding party on Spanish settlements in Florida and Shawnee towns on the Ohio. He was captured by the Shawnee who discovered he was a white man and returned him to his Cherokee wife back in Chota. The Cherokee chief later escorted Arthur back to Virginia.
At first relations with the Cherokee were good as both parties benefited from the trading that was taking place. But the many forces taking place at the time led to distrust and warfare. The Spanish, English and French sought comrades in the Indian Nations. The Indians themselves had conflicts not only with other tribes, but with members of their own who acted against the chief's wishes. It was destined to become a bloody time in American history.
An English fort was constructed circa 1756 near present day Vonore, Tennessee. Fort Loudon, located six miles from the Cherokee town of Chota, was rhombus shaped. Access to the site across the mountains proved treacherous with parties often managing only six miles in a day. The heavy cannons had to be lashed crossways on the backs of pack horses. Occasionally the protruding end of the cannon would catch on a tree along the narrow trails causing the animal to fall and break its neck. The exact path taken by these soldiers is unknown, but was most likely across the Unicoi Gap some 20 miles south of the Dragon. This Unicoi Trail was improved and opened to west bound settlers circa 1813. By 1820 property owners had begun collecting tolls from those using the now wagon passable roadway.
The mountain forests in the 1600s and 1700s were nearly devoid of underbrush. The Indians had a practice of burning the forest floor to enhance hunting of wild game, which included buffalo, elk, deer, wolves and even moose. By the early 1900s the forest thickets were returning with major growths of rhododendron, flame azaleas, and other shrubs.
A number of bloody incidents between the English and the Cherokee occurred in 1759-60. These escalated into all-out warfare. Many of the forts, including Loudon, were attacked. Fort Loudon was abandoned in 1760. As some 200 soldiers marched in retreat, the Cherokee attacked, killing twenty-nine and taking the rest captive. The attack occurred about 10 miles south of the fort on the Tellico River. This route leads one to believe the access route across the mountains was at the Unicoi Gap southwest of Deals Gap.
A massive force was assembled to exact revenge on the Cherokee. Some three thousand soldiers marched into the immediate area and burned all Cherokee towns and crops along their route. This force very likely took the Deals Gap route on the way back into North Carolina, burning towns along the banks of the Tuckaseegee River. A peace treaty was finally adopted ending the Cherokee War in 1761.
New battles erupted in June 1776 when the Cherokee declared war against the white man. Bands of Indians attacked helpless settlers across western North Carolina and northern Georgia. Retaliation was swift and merciless. Several armies were amassed and laid waste to most of the Cherokee towns, including many west of the mountains. One army attacked Indian villages moving from Waynesville to Franklin near Wayah Bald Gap. Another army destroyed the Cherokee town of Stekoa (present day Stecoah) and crossed the Tuckaseegee and Oconaluftee Rivers, destroying more. Most of the villages were deserted when the soldiers arrived, so there was little bloodshed.
Roads in the late 1700s were little more than trails, especially in the western parts of North Carolina. The state more or less left it up to the counties to maintain roadways. In the Blue Ridge, men walked or rode horses over trails that had been used by trappers and hunters.
The botanist William Bartram explored western North Carolina in 1775, making it as far west as present day Robbinsville. The French botanist André Michaux made several exploratory trips in the years 1785-88. In 1793 he made it as far as present day Nashville. Most Europeans of this era deemed the mountains of western North Carolina "impassable."
One of the first white men to cross the Blue Ridge using present day Deals Gap may have been John Sevier in March 1781. Sevier with a raiding party of 150 men on horseback "started to cross the Great Smoky Mountains over trails never before attempted by white men, and so rough in places that it was hardly possible to lead horses." History, Myths, and Sacred Formulas of the Cherokees, James Mooney, pg 59.
In the early 1800s one of the paths became a crude roadway used to access the extremely remote settlement of Cades Cove via Happy Valley and Rabbit Creek. In 1830 Joshua Parson, who lived near the Little Tennessee and Abrams Creek, is credited with improving the Parsons Turnpike Toll Gate Road, which more or less follows present day US 129 along the Little Tennessee.
Another route was improved in 1838 with Russell Gregory heading the project that led from Forge Creek near Cades Cove following Parson Branch to the turnpike (US 129). Today this 8-mile gravel road still exhibits some of the excitement of the early days with 19 water fords. But note that the Parsons Branch Road is “one-way” out of Cades Cove to the Dragon. This route suffered severe damage in the floods of 2002 and was closed for several years as it underwent extensive repairs. It has been closed several times due to flooding. The latest closure in 2015 was due to the many dead hemlock trees that could potentially fall onto passersby. The dying hemlocks of Joyce Kilmer Memorial Forest were dynamited recently for public safety.
In the 1840s Dr. Isaac Anderson was assigned the task of building a toll road between Knoxville and the Little Tennessee River valley in North Carolina. He enlisted the help of Cherokee Indians in laying out the route and clearing it. Each Indian was paid with one yard of calico for each day worked. This was in the area of Bote Mountain east of Cades Cove. This road was only completed on the Tennessee side and soon fell into disrepair when North Carolina failed to connect to it. Cades Cove: The Life and Death of a Southern Appalachian Community, Durwood Dunn.
The Swiss-American geographer Arnold Henry Guyot explored the mountains of western North Carolina in the 1850s. He wrote that the western slopes were used by "Tennesseans for grazing cattle. Numerous paths, therefore ran up the western slopes, and along the dividing ridge. But the eastern slope is still a wilderness, little frequented. Here the Little Tennessee cuts that high chain by a deep widening chasm, in which no room is left for a road on its immediate banks the mountains near by rising to 3,000 ft. above it, and upwards; the point where it leaves the mountains being scarcely 900 ft. above the level of the sea."
There were a number of more treacherous paths connecting Cades Cove to North Carolina. One family migrated south along Forge Creek crossing the mountains at Ekaneetlee Gap (3,800 feet elevation) and descending into North Carolina settling in Possum Hollow on Hazel Creek. Another crude path followed Forge Creek to the south passing through Rich Gap (4,600 feet elevation) just to the east of Gregory Bald and descending along Twentymile Creek.
There were many conflicts between the Cherokee and white settlers who infringed upon their world in these early years. Both sides took lives in needless disputes and quarrels. This undeclared warfare resulted in one of the saddest events in our early American history – the Trail of Tears relocation of the Cherokee to Oklahoma. Many Native Americans refused to assemble and leave the only land they had ever known. The Dragon was one of the remote paths they used to evade the Army patrols sent to capture them.
The Civil War brought more bloodshed to the Dragon and surrounding areas. There is a gravesite located near mile marker 6.5 giving testimony to the times. It is where Union soldier Bas Shaw, age 50, was buried after being killed in 1864 on the Old Tennessee River Turnpike.
During the war the few who resided in the Appalachian Mountains were called "outliers". Most of the mountain people wanted no part of either side. Raiding parties from North Carolina raided Tennessee while the Tennesseans stole from the North Carolinians. The mountain crossings themselves were primarily guarded by Confederate troops. When they were unable to capture passing bands, they resorted to shooting them.
Shaw, a relative by marriage of John Jackson "Bushwack" Kirkland, was taken prisoner on the Little Tennessee River by Confederate soldiers. While en route to Asheville, Bas Shaw was shot and killed December 8, 1864. It is unknown if he had tried to escape or was simply murdered. He was buried on a ridgeline just uphill from US 129 (the Dragon). Many believe that his uncle John Kirkland pulled the trigger. Kirkland had killed two of Shaw's sons, who were Union soldiers, the year before.
Another incident in the area during the winter of 1864 involved Confederates pursuing several escapees along the Little Tennessee. Jeff Deavers and brothers George and Bartley Williams had stolen horses from the barn of William Coleman. The three were located near A. B. Welch's house on the Little Tennessee some 15 miles from Coleman's. All three were killed. They were buried along the Little Tennessee Turnpike where they had been shot. The bodies were later moved to the National Military Cemetery in Knoxville.
Another story cited in many histories describes a family that was robbed and taken into the woods next to the Turnpike by the Kirklands, who were waiting to hold up a Union soldier carrying pay to nearby units. The family's young baby began to cry, threatening to give away the gang's location. When the parents were unable to quiet the infant, the gang killed the baby and stuffed it into a hollow tree trunk.
John Kirkland was so feared that lawmen refused to chase him into the remote forests. There were two known hideouts for the gang: one near present day Joyce Kilmer Memorial Forest and the other near the horse stables in Cades Cove. Kirkland relocated into Polk County, Tennessee where he died in 1902. He had never been arrested.
Gangs, such as the Kirkland Bushwhackers, often attacked patrols whether they were northern or southern. The bushwhackers preyed on anyone who happened their way. They killed two of Bas Shaw's sons nearby. The forested mountains offered the perfect hiding place to escape detection and law officers feared venturing after them as well.
Another incident involving U. S. Army Captain Lyon's raiding party took place near Robbinsville. It is likely they crossed from Tennessee at Deals Gap and descended on the Belding Trail into present day Graham County. They killed Jesse Kirkland, brother of Bushwhacker John, and several others on Isaac Carringer's Creek. Moving into Robbinsville, they killed a Cherokee Indian. Retreating along the Santeetlah River, they overnighted at Stratton Meadow before crossing back into Tennessee.
Another historical figure who crossed the rugged trail through Deals Gap was John Denton in 1870. Denton and a friend named Gus Langford made a retreat from Tennessee after having a run-in with some local boys. Seems they got too friendly with several girls at a dance and were ganged up on by the locals. After throwing a few rocks at the locals coming out the door, Denton and Langford went home, packed up their things, and headed their wagons east for North Carolina where they knew they could "get lost". They crossed the Little Tennessee River at Calderwood and trekked through the mountains, crossing into North Carolina at Deals Gap. Denton made his way into present day Joyce Kilmer Memorial Forest where he made a lean-to for temporary protection from the elements and settled in. He was a true mountain man, standing six-foot-three and literally strong as an ox. His story of survival is an interesting one.
As more settlers moved into the area landowners began collecting tolls for use of the road. Toll Booth Corner, located about midway over the Dragon, was a place to pay for the right to cross private property owned by George Davis. There were also corrals to keep the livestock in transit over night and meager sleeping quarters for guests. Local legends tell of some who tried to cross without paying the toll and were caught and hanged on the spot.
There is also a gravesite nearby just off the Cherohala Skyway with a plaque reading "HERE LIES AN UNKNOWN MAN KILLED BY THE KIRKLAND BUSHWACKERS". There is a marker on the top of Huckleberry Knob where two surveyors died in a winter storm, December 1899. One was buried there, the other body was removed. In 1982 nine United States Air Force crew members were killed when their C141 transport plane crashed on John's Knob during a low-level training mission. The crash was so intense that no remains were found over the half mile of wreckage. There are likely many more buried in the nearby mountains that have escaped attention.
The people of these remote areas were a most hardy bunch. Some today might question why anyone would settle in such a dangerous and desolate place. The following story describes the remoteness. Written by backwoods traveler Bud Wunst, it appeared in the July 22, 1900, issue of The Morning Post, Raleigh:
But I have only come three and a half miles from Mrs. Crowder's. In fact, she came part of the way with me. She did not wear the two large navy revolvers around her waist, as I had been told she would: but she did wear a pair of thick canvas leggins to keep the snakes from biting her ankles. I remember her resolute old face now, as she parted with me on the ridge near the Stack gap, while I made a sketch of the Hay-o, the Hang over and the Fodder Stack mountains. She is over sixty years of age. On her sun-tanned, bare arm hung a large tin bucket partly full of salt for the cattle she herds, and in her hand she carried a long stick to kill "rattle bugs" with, as she calls rattlesnakes. She was much distressed when I told her about the bear I had not killed, and she told me she would rather have heard of the death of that "old black man," as she called him, than any news I could have brought her. She ploughs, plants corn and rye and potatoes and turnips and all vegetables, slips her own rails, makes and renews all her fences, having laid over 300 panels last winter, mows her mountain meadow, cuts her own fire wood and hauls it on a sled to house, goes to mill and does all the work a man would have to do if there was one within three miles of her, which there is not, Bowers, her son-in-law, being her nearest neighbor. She is assisted by her two grown daughters, Marge and Caledonia, who is called Doan for short.
It was a hard matter to tear myself away from her; but I had no trouble in parting with Bowers, who, she said, is so lazy he couldn't get his breath if it didn't come "natural."
The rest of the article Wunst retells his meetings with Frank and John Swan, Andy Kirkland, born 1850, (brother of "bushwacker John"), Squire Stratton, age 72, Bill Depety, and Sheet's store on his trek to Jeffries' Hell via Citico, Whiteoak, Waucheesi, Rafter and the Tellico River. He also recalled sitting on a "wool-sac" at Doc Stewart's house on Big Santeetlah Creek and meeting hunter-trapper John Denton who wore "two long curls in front of either shoulder".
In a previous article dated July 15, 1900 in The Raleigh Morning Post he detailed more of his travels to the far western reaches of North Carolina. In his search for Jeffries Hell he hiked along Big Santeetlah Creek passing Arch Stewart's, Arch's son Doc Stewart, and then overnighting at John Swan's place at current day Swan's Cabin. Seven miles to the west he encountered William Stratton's thirty acres with 300 head of cattle on a high secluded meadow. He visited Absalom Stratton's grave site where the early settler is buried half in Tennessee and half in North Carolina. Today the grave which reads "A.S., Was Born 1757, Died 1839" can be seen just a few feet off the Cherohala Skyway at mile 1.6 where Santeetlah Creek Road intersects.
Wunst then hiked across Stratton Bald to Haoe which he called Hay-o and then out to Hangover for a wide view of the mountains. In the magnificent view were Fodder Stack, Slick Rock, the Cheoah River, Yellow Creek, Santeetlah, the Little Tennessee River, and the Snowbird Mountains.
Wunst's hiking partner was eleven-year-old Frank Swan, son of John Swan, who told him how Jeffries Hell came to be named:
If you want to know where Jeffries Hell is I will tell you. It lies between the Sassy Fac Mountain on the south, the Stae sic (State) ridge on the east and the Fodder Stack ridge on the north. Two forks of Citico creek come out of it. It is densely covered with spruce pines and laurel. It covers about twenty-five square miles of territory. It is almost impenetrable. When one gets into it, the trees and undergrowth are so thick that he cannot see out on any side or above. Consequently, one soon gets bewildered and I imagine two would get so too.
Years and years ago a man by the name of Jeffries - not the judge of evil fame - got lost in this wilderness and wandered about in it for days without food. When he got out he said he had been to hell and the name has clung to it ever since. I did not go into it. I can find hell enough to suit me outside.
The tolls on "toll gate road" continued into the 1910s. The last keeper of the toll gate for ten years was Dee Hill. The land, which consisted of some 12-miles of crude roadway, was supposedly owned by U.S. Supreme Court Judge Edward T. Sanford at the time. Tolls were 35 cents per wagon and 25 cents for a man on horseback. Walkers passed through for nothing.
According to Dee Hill's younger brother Green Hill, few people traveled the road and most of them walked. He recalled a number of men carrying a coffin over the road headed upriver to Bushnel in North Carolina. The coffin with a 14-year-old girl was suspended by rope from a stout pole with four men supporting it on their shoulders. Alcoa finally tore down the Toll Booth house circa 1916 after a public road was built during construction of Cheoah Dam. An historical marker marked the location for a number of years. KNS, August 12, 1962, pg 17. (1920 Census, Civil District 11, District 27, Blount Co, Green B. HILL, age 47, Farmer/Cotton Mill; wife Nannie 41)
In 1908 there was a boundary dispute between North Carolina and Tennessee on the line from Deal's Gap to Joe Brown Highway. Engineer Dana Blackburn Burns, a noted surveyor at the age of 31, surveyed the land on foot. The 58 miles was a desolate wilderness from beginning to end.
Circa 1913 a town was actually born on the Dragon. Calderwood, formerly the Howard farm property, was created as living quarters for employees constructing the Cheoah Dam in 1917, Calderwood Dam in 1930, and those workers who maintained the systems. A railway ran from Knoxville, through Calderwood, and followed the Little Tennessee River all the way to Tapoco in North Carolina. Equipment, supplies and workers were transported on this line. Calderwood was also used by Alcoa Aluminum, aka Tapoco, Inc., as a corporate retreat for their executives. There was even a golf course accessed by ferry across the Little Tennessee River. Today’s Tapoco Lodge was also built by Alcoa for their executives. The entire dam and reservoir system provided electric power to Alcoa’s large aluminum processing plant north of Maryville, Tennessee later to become the town of Alcoa.
By 1923 discussions were underway between Tennessee and North Carolina officials about constructing a highway connecting Knoxville, Maryville and Bryson City. The primary emphasis was to shorten the route from northern cities to Florida. At the time there were no roads crossing the mountains between Newport and Chattanooga Tennessee, a distance of 125 miles.
Two alternatives were studied at the time. One considered was from Maryville, through Cades Cove and crossing the Unaka Mountains at Ekanetelle Gap at an elevation of 3,900 feet. The other, which was several miles longer, followed the path of current day US 129 from Maryville, Calderwood and Deals Gap then passing through Tapoco and Robbinsville. This route crossed much lower elevations up to 2,100 feet. An alternate route would connect Bryson City and Deals Gap directly, current day NC 28.
Bryson City officials played an important role in helping Knoxville interested parties make a decision. A portion of the planned alternate route from Bryson City to Deals Gap had been constructed and monies were available for completion.
An interesting three-day trip was arranged for the Knoxville delegation to explore the proposed route and the existing improvements. The May 1924 exploratory committee was quite an adventure in itself. The group motored to Sevierville where people expressed their support for completing the highway. The party spent the night in Gatlinburg before leaving at 6:30 in morning in Ford automobiles for a two-mile ride on the existing roadway. Then most of the men mounted seventeen horses and mules for the twelve-mile trail ride to the state line. Nine members of the group preferred to walk. They arrived at Lufty Gap, today near Newfound Gap and US 441, at noon where they met a contingent of fourteen people from North Carolina who provided a meal of fried mountain trout. A number of the Tennessee group returned to Gatlinburg. From the State line the rest of the journey was on foot through Cherokee and ended at Bryson City. The group was feted by the North Carolinians and provided free boarding and meals along with gifts of cigars and cigarettes.
The next day the group returned to Knoxville by way of Deals Gap. The route included automobiles to Judson, then a passenger train to Bushnell, and then on flatcars on a lumber train to Fontana. After lunch a steamboat carried the group eight miles to Tapoco Dam. A motor launch was then taken through the boom of the dam. After a tour of the dam the party proceeded to Calderwood by railway motor car. It was the seventh method of transport for the party.
This map appeared in the February 8, 1925 issue of the Knoxville Journal. The Knoxville Automobile Club favored the route through Cades Cove and Ekenetelle Gap, but North Carolina advised this route was too difficult on the North Carolina side and preferred the Deals Gap plan. By June of 1925 contracts had been let to construct the highway number 72 to Deals Gap and connect to the existing roadway to Bryson City | <urn:uuid:1ff830c2-2c10-445a-9ea4-b960eb99dc8a> | CC-MAIN-2022-33 | https://thunderbirdresort.com/history/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00697.warc.gz | en | 0.978334 | 5,703 | 3.734375 | 4 |
Noah Webster’s 1828 Dictionary
ELONGATE — EMBER
1. To lengthen; to extend.
2. To remove farther off.
ELONGATE, v.i. To depart from; to recede; to move to a greater distance; particularly, to recede apparently from the sun, as a planet in its orbit.
ELONGATED, pp. Lengthened; removed to a distance.
ELONGATING, ppr. Lengthening; extending.
1. Receding to a greater distance, particularly as a planet from the sun in its orbit.
ELONGATION, n. The act of stretching or lengthening; as the elongation of a fiber.
1. The state of being extended.
2. Distance; space which separates one thing from another.
3. Departure; removal; recession.
4. Extension; continuation.
May not the mountains of Westmoreland and Cumberland be considered as elongations of these two chains.
5. In astronomy, the recess of a planet from the sun, as it appears to the eye of a spectator on the earth; apparent departure of a planet from the sun in its orbit; as the elongation of Venus or Mercury.
6. In surgery, an imperfect luxation, occasioned by the stretching or lengthening of the ligaments; or the extension of a part beyond its natural dimension.
ELOPE, v.i. [Eng. to leap.]
1. To run away; to depart from one’s proper place or station privately or without permission; to quit, without permission or right, the station in which one is placed by law or duty. Particularly and appropriately, to run away or depart from a husband, and live with an adulterer, as a married woman; or to quit a father’s house, privately or without permission, and marry or live with a gallant, as an unmarried woman.
2. To run away; to escape privately; to depart, without permission, as a son from a father’s house, or an apprentice from his master’s service.
ELOPEMENT, n. Private or unlicensed departure from the place or station to which one is assigned by duty or law; as the elopement of a wife from her husband, or of a daughter from her father’s house, usually with a lover or gallant. It is sometimes applied to the departure of a son or an apprentice, in like manner.
ELOPING, ppr. Running away; departing privately, or without permission, from a husband, father or master.
ELOPS, n. A fish, inhabiting the seas of America and the West Indies, with a long body, smooth head, one dorsal fin, and a deeply furcated tail, with a horizontal lanceolated spine, above and below, at its base.
1. The sea-serpent.
ELOQUENCE, n. [L. eloquentia, from eloquor, loquor, to speak; Gr. to crack, to sound, to speak. The primary sense is probably to burst with a sound; a fissure, from the same root; whence, to open or split; whence L. lacero, to tear; and hence perhaps Eng. a leak.]
1. Oratory; the act or the art of speaking well, or with fluency and elegance. Eloquence comprehends a good elocution or utterance; correct; appropriate and rich expressions, with fluency, animation and suitable action. Hence eloquence is adapted to please, affect and persuade. Demosthenes in Greece, Cicero in Rome, lord Chatham and Burke in Great Britain, were distinguished for their eloquence in declamation, debate or argument.
2. The power of speaking with fluency and elegance.
3. Elegant language, uttered with fluency and animation.
She uttereth piercing eloquence.
4. It is sometimes applied to written language.
ELOQUENT, a. Having the power of oratory; speaking with fluency, propriety, elegance and animation; as an eloquent orator; an eloquent preacher.
1. Composed with elegance and spirit; elegant and animated; adapted to please, affect and persuade; as an eloquent address; an eloquent petition or remonstrance; an eloquent history.
ELOQUENTLY, adv. With eloquence; in an eloquent manner; in a manner to please, affect and persuade.
Other; one or something beside. Who else is coming? What else shall I give? Do you expect any thing else? [This word, if considered to be an adjective or pronoun, never precedes its noun, but always follows it.]
ELSE, adv. els. Otherwise; in the other case; if the fact were different. Thou desirest not sacrifice, else would I give it; that is, if thou didst desire sacrifice, I would give it. Psalm 51:16. Repent, or else I will come to thee quickly; that is, repent, or if thou shouldst not repent, if the case or fact should be different, I will come to thee quickly. Revelation 2:5.
1. Beside; except that mentioned; as, no where else.
ELSEWHERE, adv. In any other place; as, these trees are not to be found elsewhere.
1. In some other place; in other places indefinitely. It is reported in town and elsewhere.
ELUCIDATE, v.t. [Low L. elucido, from eluceo, luceo, to shine, or from lucidus, clear, bright. See Light.]
To make clear or manifest; to explain; to remove obscurity from, and render intelligible; to illustrate. An example will elucidate the subject. An argument may elucidate an obscure question. A fact related by one historian may elucidate an obscure passage in another’s writings.
ELUCIDATED, pp. Explained; made plain, clear or intelligible.
ELUCIDATING, ppr. Explaining; making clear or intelligible.
ELUCIDATION, n. The act of explaining or throwing light on any obscure subject; explanation; exposition; illustration; as, one example may serve for an elucidation of the subject.
ELUCIDATOR, n. One who explains; an expositor.
ELUDE, v.t. [L. eludo; e and ludo, to play. The Latin verb forms lusi, lusum; and this may be the Heb. to deride.]
1. To escape; to evade; to avoid by artifice, stratagem, wiles, deceit, or dexterity; as, to elude an enemy; to elude the sight; to elude an officer; to elude detection; to elude vigilance; to elude the force of an argument; to elude a blow or stroke.
2. To mock by an unexpected escape.
Me gentle Delia beckons from the plain,
Then, hid in shades, eludes her eager swain.
3. To escape being seen; to remain unseen or undiscovered. The cause of magnetism has hitherto eluded the researches of philosophers.
ELUDIBLE, a. That may be eluded or escaped.
ELUSIVE, a. Practicing elusion; using arts to escape.
Elusive of the bridal day, she gives
Fond hopes to all, and all with hopes deceives.
ELUSORINESS, n. The state of being elusory.
ELUSORY, a. Tending to elude; tending to deceive; evasive; fraudulent; fallacious; deceitful.
To wash off; to cleanse.
ELUTRIATE, v.t. [L. elutrio.] To purify by washing; to cleanse by separating foul matter, and decanting or straining off the liquor. In chimistry, to pulverize and mix a solid substance with water, and decant the extraneous lighter matter that may rise or be suspended in the water.
ELUTRIATED, pp. Cleansed by washing and decantation.
ELUTRIATING, ppr. Purifying by washing and decanting.
ELUTRIATION, n. The operation of pulverizing a solid substance, mixing it with water, and pouring off the liquid, while the foul or extraneous substances are floating, or after the coarser particles have subsided, and while the finer parts are suspended in the liquor.
ELVERS, n. Young eels; young congers or sea-eels.
ELVES, plu. of elf.
ELYSIAN, a. elyzh’un. [L. elysius.] Pertaining to elysium or the seat of delight; yielding the highest pleasures; deliciously soothing; exceedingly delightful; as elysian fields.
ELYSIUM, n. elyzh’um. [L. elysium.] In ancient mythology, a place assigned to happy souls after death; a place in the lower regions, furnished with rich fields, groves, shades, streams, etc., the seat of future happiness. Hence, any delightful place.
EM, A contraction of them.
They took ‘em.
EMACERATE, v.t. To make lean. [Not in use.]
EMACIATE, v.i. [L. emacio, from maceo, or macer, lean; Gr. small; Eng. meager, meek.] To lose flesh gradually; to become lean by pining with sorrow, or by loss of appetite or other cause; to waste away, as flesh; to decay in flesh.
EMACIATE, v.t. To cause to lose flesh gradually; to waste the flesh and reduce to leanness.
Sorrow, anxiety, want of appetite, and disease, often emaciate the most robust bodies.
EMACIATE, a. Thin; wasted.
EMACIATED, pp. Reduced to leanness by a gradual loss of flesh; thin; lean.
EMACIATING, ppr. Wasting the flesh gradually; making lean.
EMACIATION, n. The act of making lean or thin in flesh; or a becoming lean by a gradual waste of flesh.
1. The state of being reduced to leanness.
EMACULATE, v.t. [infra.] To take spots from. [Little used.]
EMACULATION, n. [L. emaculo, from e and macula, a spot.]
The act or operation of freeing from spots. [Little used.]
EMANATE, v.i. [L. emanano; e and mano, to flow.]
1. To issue from a source; to flow from; applied to fluids; as, light emanates from the sun; perspirable matter, from animal bodies.
2. To proceed from a source of fountain; as, the powers of government in republics emanate from the people.
EMANATING, ppr. Issuing or flowing from a fountain.
EMANATION, n. The act of flowing or proceeding from a fountain-head or origin.
1. That which issues, flows or proceeds from any source, substance or body; efflux; effluvium. Light is an emanation from the sun; wisdom, from God; the authority of laws, from the supreme power.
EMANATIVE, a. Issuing from another.
EMANCIPATE, v.t. [L. emancipo, from e and mancipium, a slave; manus, hand, and capio, to take, as slaves were anciently prisoners taken in war.]
1. To set free from servitude or slavery, by the voluntary act of the proprietor; to liberate; to restore from bondage to freedom; as, to emancipate a slave.
2. To set free or restore to liberty; in a general sense.
3. To free from bondage or restraint of any kind; to liberate from subjection, controlling power or influence; as, to emancipate one from prejudices or error.
4. In ancient Rome, to set a son free from subjection to his father, and give him the capacity of managing his affairs, as if he was of age.
EMANCIIPATE, a. Set at liberty.
EMANCIPATED, pp. Set free from bondage, slavery, servitude, subjection, or dependence; liberated.
EMANCIPATING, ppr. Setting free from bondage, servitude or dependence; liberating.
EMANCIPATION, n. The act of setting free from slavery, servitude, subjection or dependence; deliverance from bondage or controlling influence; liberation; as the emancipation of slaves by their proprietors; the emancipation of a son among the Romans; the emancipation of a person from prejudices, or from a servile subjection to authority.
EMANCIPATOR, n. One who emancipates or liberates from bondage or restraint.
EMANE, v.i. [L. emano.] To issue or flow from.
But this is not an elegant word. [See Emanate.]
EMARGINATE, EMARGINATED, a. [L. margo, whence emargino.]
1. In botany, notched at the end; applied to the leaf, corol or stigma.
2. In mineralogy, having all the edges of the primitive form truncated, each by one face.
EMARGINATELY, adv. In the form of notches.
EMASCULATE, v.t. [Low L. emasculo, from e and masculus, a male. See Male.]
1. To castrate; to deprive a male of certain parts which characterize the sex; to geld; to deprive of virility.
2. To deprive of masculine strength or vigor; to weaken; to render effeminate; to vitiate by unmanly softness.
Women emasculate a monarch’s reign.
To emasculate the spirits.
EM`ASCULATE, a. Unmanned; deprived of vigor.
EMASCULATED, pp. Castrated; weakened.
EMASCULATING, ppr. Castrating; felding; depriving of vigor.
EMASCULATION, n. The act of depriving a male of the parts which characterize the sex; castration.
1. The act of depriving of vigor or strength; effeminacy; unmanly weakness.
1. To make up into a bundle, bale or package; to pack.
2. To bind; to inclose.
EMBALM, v.t. emb’am.
1. To open a dead body, take out the intestines, and fill their place with odoriferous and desiccative spices and drugs, to prevent its putrefaction.
Joseph commanded his servants, the physicians, to embalm his father; and the physicians embalmed Israel. Genesis 50:2.
2. To fill with sweet scent.
3. To preserve, with care and affection, from loss or decay.
The memory of my beloved daughter is embalmed in my heart.
Virtue alone, with lasting grace,
Embalms the beauties of the face.
EMBALMED, pp. Filled with aromatic plants for preservation; preserved from loss or destruction.
EMBALMER, n. One who embalms bodies for preservation.
EMBALMING, ppr. Filling a dead body with spices for preservation; preserving with care from loss, decay or destruction.
EMBAR, v.t. [en and bar.] To shut, close or fasten with a bar; to make fast.
1. To inclose so as to hinder egress or escape.
When fast embarr’d in mighty brazen wall.
2. To stop; to shut from entering; to hinder; to block up.
He embarred all further trade.
EMBARCATION, n. Embarkation, which see.
EMBARGO, n. In commerce, a restraint on ships, or prohibition of sailing, either out of port, or into port, or both; which prohibition is by public authority, for a limited time. Most generally it is a prohibition of ships to leave a port.
EMB`ARGO, v.t. To hinder or prevent ships from sailing out of port, or into port, or both, by some law or edict of sovereign authority, for a limited time. Our ships were for a time embargoed by a law of congress.
1. To stop to hinder from being prosecuted by the departure or entrance of ships. The commerce of the United States has been embargoed.
EMBARGOED, pp. Stopped; hindered from sailing; hindered by public authority, as ships or commerce.
EMBARGOING, ppr. Restraining from sailing by public authority; hindering.
1. To put or cause to enter on board a ship or other vessel or boat. The general embarked his troops and their baggage.
2. To engage a person in any affair. This projector embarked his friends in the design or expedition.
EMB`ARK, v.i. To go on board of a ship, boat or vessel; as, the troops embarked for Lisbon.
1. To engage in any business; to undertake in; to take a share in. The young man embarked rashly in speculation, and was ruined.
EMBARKATION, n. The act of putting on board of a ship or other vessel, or the act of going aboard.
1. That which is embarked; as an embarkation of Jesuits.
2. A small vessel, or boat. [Unusual.]
EMBARKED, pp. Put on shipboard; engaged in any affair.
EMBARKING, ppr. Putting on board of a ship or boat; going on shipboard.
1. To perplex; to render intricate; to entangle. We say, public affairs are embarrassed; the state of our accounts is embarrassed; want of order tends to embarrass business.
2. To perplex, as the mind or intellectual faculties; to confuse. Our ideas are sometimes embarrassed.
3. To perplex, as with debts, or demands, beyond the means of payment; applied to a person or his affairs. In mercantile language, a man or his business is embarrassed, when he cannot meet his pecuniary engagements.
4. To perplex; to confuse; to disconcert; to abash. An abrupt address may embarrass a young lady. A young man may be too much embarrassed to utter a word.
EMBARRASSED, pp. Perplexed; rendered intricate; confused; confounded.
EMBARRASSING, ppr. Perplexing; entangling; confusing; confounding; abashing.
EMBARRASSMENT, n. Perplexity; intricacy; entanglement.
1. Confusion of mind.
2. Perplexity arising from insolvency, or from temporary inability to discharge debts.
3. Confusion; abashment.
EMBASE, v.t. [en and base.] To lower in value; to vitiate; to deprave; to impair.
The virtue--of a tree embased by the ground.
I have no ignoble end--that may embase my poor judgment.
1. To degrade; to vilify.
[This word is seldom used.]
EMBASEMENT, n. Act of depraving; depravation; deterioration.
EMBASSADE, n. An embassy.
1. A minister of the highest rank employed by one prince or state, at the court of another, to manage the public concerns of his own prince or state, and representing the power and dignity of his sovereign. Embassadors are ordinary, when they reside permanently at a foreign court; or extraordinary, when they are sent on a special occasion. They are also called ministers. Envoys are ministers employed on special occasions, and are of less dignity.
2. In ludicrous language, a messenger.
EMBASSADRESS, n. The consort of an embassador.
1. A woman sent on a public message.
EMBASSAGE, an embassy, is not used.
1. The message or public function of an embassador; the charge or employment of a public minister, whether ambassador or envoy; the word signifies the message or commission itself, and the person or persons sent to convey or to execute it. We say the king sent an embassy, meaning an envoy, minister, or ministers; or the king sent a person on an embassy. The embassy consisted of three envoys. The embassy was instructed to inquire concerning the king’s disposition.
2. A solemn message.
Eighteen centuries ago, the gospel went forth from Jerusalem on an embassy of mingled authority and love.
3. Ironically, an errand.
[The old orthography, ambassade, ambassage, being obsolete, and embassy established, I have rendered the orthography of embassador conformable to it in the initial letter.]
EMBATTLE, v.t. [en and battle.] To arrange in order of battle; to array troops for battle.
On their embattled ranks the waves return.
1. To furnish with battlements.
EMBATTLE, v.i. To be ranged in order of battle.
EMBATTLED, pp. Arrayed in order of battle.
1. Furnished with battlements; and in heraldry, having the outline resembling a battlement, as an ordinary.
2. Having been the place of battle; as an embattled plain or field.
EMBATTLING, ppr. Ranging in battle array.
EMBAY, v.t. [en, in, and bay.] To inclose in a bay or inlet; to land-lock; to inclose between capes or promontories.
1. To bathe; to wash. [Not used.]
EMBAYED, pp. Inclosed in a bay, or between points of land, as a ship.
EMBED, v.t. [en, in, and bed.] To lay as in a bed; to lay in surrounding matter; as, to embed a thing in clay or in sand.
EMBEDDED, pp. Laid as in a bed; deposited or inclosed in surrounding matter; as ore embedded in sand.
EMBEDDING, ppr. Laying, depositing or forming, as in a bed.
EMBELLISH, v.t. [L. bellus, pretty.]
1. To adorn; to beautify; to decorate; to make beautiful or elegant by ornaments; applied to persons or things. We embellish the person with rich apparel, a garden with shrubs and flowers, and style with metaphors.
2. To make graceful or elegant; as, to embellish manners.
EMBELLISHED, pp. Adorned; decorated; beautified.
EMBELLISHING, ppr. Adorning; decorating; adding grace, ornament or elegance to a person or thing.
EMBELLISHMENT, n. The act of adorning.
1. Ornament; decoration; any thing that adds beauty or elegance; that which renders any thing pleasing to the eye, or agreeable to the taste, in dress, furniture, manners, or in the fine arts. Rich dresses are embellishments of the person. Virtue is an embellishment of the mind, and liberal arts, the embellishments of society. | <urn:uuid:55208b3e-e491-4ad0-8d4e-751018d7ad67> | CC-MAIN-2022-33 | https://m.egwwritings.org/en/book/1843.541949#41992 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00296.warc.gz | en | 0.895386 | 5,144 | 2.90625 | 3 |
Domestic abuse – Is technology part of the problem or solution?
If you need urgent help related to domestic abuse please call the National Domestic Abuse Helpline on 0808 2000 247
We are living in a truly digital age. And digital is now synonymous with social. Technology has the power to connect us and share our lives, and that can be inspiring. But what if you are in a situation where you want – or need – to do just the opposite? What if you are being controlled and surveilled as a victim of domestic abuse? What if you are in hiding and living in fear that your location will be revealed to the wrong person? What if you fear for your life and your children’s lives if you take a step out of turn?
We all know of the dark side of technology – addiction, clickbait, the selling of data, cyber bullying and the damage to mental health as a result of unrealistic body images and the rest. But for many who are experiencing, or recovering from domestic abuse, the dangers are often magnified by the power of technology.
The UK government’s definition of domestic abuse is: “any incident or pattern of incidents of controlling, coercive, threatening behaviour, violence or abuse between those aged 16 or over who are, or have been, intimate partners or family members regardless of gender or sexuality. The abuse can encompass, but is not limited to psychological, physical, sexual, financial, emotional.”
Women’s Aid provides a little more specificity stating that domestic abuse can include, but is not limited to, the following:
- Coercive control (a pattern of intimidation, degradation, isolation and control with the use or threat of physical or sexual violence
- Psychological and/or emotional abuse
- Physical or sexual abuse
- Financial or economic abuse
- Harassment and stalking
- Online or digital abuse
2 million people suffer some form of domestic abuse each year in England and Wales. And currently over 100,000 people in the UK are at high risk of being murdered or seriously injured as a result of domestic abuse. Domestic abuse is no joke.
In 2016, Comic Relief commissioned a collaborative “Tech Vs Abuse” research piece to better understand the potential for digital tools to support people affected by abuse. The research was undertaken by SafeLives, Snook and Chayn, and gathered insights from over 200 survivors of domestic abuse and 350 practitioners who support them.
The research report found that digital has been a significant tool for perpetrators to exacerbate abuse. Almost half of women involved said they were monitored online or controlled with technology – through trackers, apps or internet blockers; and 90% of 307 practitioners saw technology as a risk and felt they did not fully understand how to use technology effectively and safely. Supporting this, an online survey of women who had experienced abuse revealed that only one in five said their online activity was not monitored by their partner. Almost half (47%) said that they were monitored, and a quarter said they did not know.
This supports the findings from a survey conducted by Women’s Aid (2014) which reported that 45% of women experienced some form of abuse online during their relationship, and 48% experienced harassment and abuse online from their ex-partner once they’d left the relationship. And nearly a third of respondents (29%) experienced the use of spyware or GPS locators on their phone or computers by a partner or ex-partner.
Recently, I spoke to a friend, whose sister has been in a long-term abusive relationship with her male partner. (For the purposes of this blog, I’ll call my friend Hannah and her sister Beth). Due to the negative effects of Beth’s abusive relationship, Hannah and her wife now take care of her niece (Beth’s daughter) acting as legal guardians. Hannah spoke to me about her firsthand experience of witnessing domestic abuse in her family, the impact that it has had, and her thoughts about technology as both an exacerbator and a solution.
I worry about digital and the opportunities it presents to perpetrators.
“The person being abused needs to be untraceable. But so do their children. If children are posting online it presents another channel of control for the abuser. It’s so hard to police their online activity though. Trying to have autonomy as a young person whilst people are saying “Don’t put anything on Insta!” is really difficult.”
It’s clear that for survivors of domestic abuse and their families, technology presents dangers. Seemingly harmless apps, such as ‘Find My Phone’ are being utilised by perpetrators as a method of surveillance straight from a remote device, and social media posts are being used to find clues about a person’s location – whether individuals mean to give this information away or not.
This poses serious considerations about intent vs implementation of any digital solutions that we develop in the tech for good sector, particularly within the context of domestic abuse and digital support services for survivors. To combat this, as declared in the Tech Vs Abuse research report: “You have to think as an abuser.” We need to engage with survivors of abuse, their families, support networks, charities in the sector and, perhaps even abusers themselves, to ensure we don’t fall into the trap of creating services to support survivors which in reality make their situation a whole lot worse.
Should we steer away from digital?
The Tech vs Abuse research report found that, though there are considerable risks with digital, there are also major opportunities to fill current gaps in service provision and provide a safety net to survivors of abuse in times before, during and after crisis. For example, in a survey of 92 survivors of abuse, 14% said that digital presents an opportunity to have important legal queries answered without making appointments, whilst 13% said it provides a way to safely record abuse. And almost half (47%) said that connecting with services and support services through technology had already been a positive experience, particularly helping to reduce feelings of isolation.
There is clearly the need for digital to play a part in the fight against domestic abuse. And there are certainly clever tricks we can use to create digital support services that are more discreet and less likely to be found by, or raise the suspicion of, perpetrators. These could be straightforward and relatively non-technical like using the Cloud to store information remotely; disguising digital support as something mundane such as news sites or calculators; or raising awareness of the digital support available in more covert spaces such as womens’ public toilets or embedded in everyday brands’ websites.
At Reason Digital, we have worked with beneficiaries on many occasions to design digital services that have discreetness at the heart. We worked with Ugly Mugs – a national charity working to end violence against sex workers – to create an app to allow sex workers to instantly and secretly alert others about threatening behaviour in the area, without fear of prosecution. The idea is that their smartphone can be used as a panic button which, through effective and discreet design, allows sex workers to communicate their current location to a trusted contact and request help, without alerting the perpetrator. We co-produced the app with sex workers and experts in the field and took out prototypes to wherever our users were – brothels, outreach sessions and under canal bridges – for informal consultations.
Through the co-creation process, we realised fast that we also needed to ensure that the app didn’t inadvertently end up as a database disclosing information on where every sex worker in Britain is. Privacy is key: so we added location fuzzing into the app so the specific location of any sent message isn’t known. Suspicious or criminal activity will immediately alert others in the vicinity and be shared anonymously with support agencies and police without compromising the anonymity of the sex worker(s).
This kind of creative use of technology is the key to making digital work for, rather than against, survivors of domestic abuse. It is these techniques that will allow us to flip what could easily act as threats into opportunities, and there are already examples of digital innovations doing just that.
Current digital support services to tackle domestic abuse
As part of the Tech vs Abuse research, a market scan uncovered a plethora of digital solutions and services already out there being used to support people experiencing or recovering from domestic abuse. However, it found that there were still crucial gaps in provision. The Tech vs Abuse grant initiative was launched in January 2017 off the back of the findings, funded jointly through the Tampon Tax Fund, a partnership between Comic Relief and HM Government, and the Big Lottery Fund. It focused on finding innovative tech solutions to the following design challenges:
2017 Tech vs Abuse Design Challenges
|Realising it’s abuse||Safer Digital Footprint||Fifteen minute window||Accessible legal and financial information||Effective real-time support services|
|Use the creative opportunities of the web to raise awareness of what an abusive relationship looks like, provoking women and girls experiencing abuse to recognise this and get support.||Provide people affected by domestic abuse and frontline professionals the confidence and knowledge they need to use technology and stay online safely, with full control over their online data, privacy settings and social media accounts.||Provide or curate key information online for women experiencing domestic abuse in a way which is easy to find, simple to navigate and quick to interact with.||Create engaging, accessible and digestible information on the legal process or the financial situation women find themselves in, connecting to support and advice where relevant.||Enable women to find and access services for support (including referrals) when required, day or night, seamlessly and with minimal logistical and emotional burden.|
There were 10 Tech vs Abuse funded projects in 2017-2018, all focused on achieving one of the design challenges (above) through digital innovation. For example, Hestia sought to tackle the ‘fifteen minute window’ challenge by developing the Bright Sky app – an app that provides support and information to those in an abusive relationship or those concerned about someone they know. The funding allowed them to improve the design & UI of Bright Sky, add additional content, and include more language options. Aanchal worked on the challenge “realising it’s abuse” by developing an app to support GPs to safeguard their patients. And Safelives worked with frontline practitioners to help victims of abuse to use technology and the internet safely.
I asked Hannah about the current gaps, and how digital might support her and her family based on her experience. Here’s what she told me:
“In a significant proportion of domestic abuse cases, there is an issue which perpetrators latch onto – for example a mental health problem or a substance misuse issue – because you’re more vulnerable which perpetrators can capitalise on.
“My sister spends long periods of time without her partner in her life – especially when he’s in prison. She has a substance misuse problem, but when he’s not been around she’s managed to stay clean for a long time, until he’s back out again – he’ll literally stop to pick up on his way back home, track her down and coerce her into using.”
“I don’t think multiple and complex needs are considered enough in services for survivors – there is no joined up platform of support.”
There are so many touch points in my sister’s life, but they’re not aware of the others. Police see a junkie. Social services see a bad mother. Housing sees a bad tenant. They’re not speaking together, but it’s really key to each of the provisions that they understand it. Part of a solution could be about educating all of these services to look out for the right things and provide advice and holistic support.
With digital comes the amazing opportunity to capture data across services, to gather comprehensive insights into a person’s life and provide the support they require. Digital could be the glue that joins together a fragmented service, ensuring that people who need support do not slip between the cracks.
With data comes a whole host of considerations and potential issues around privacy and surveillance that would need to be accounted for. As Hannah asked: “Does your GP need to know all the information about your abusive relationship? If you legitimately go to the doctors with a child who’s injured, would you be afraid of misconceptions and backlash?”
Then there’s the issue of getting services to talk together and work towards a shared outcome whilst battling separate and multiple priorities. There’s certainly not an easy solution, but it seems that what we really need is for local and national services such as charities, NHS bodies, police and crime commissioners, data scientists and government representatives, is to come together alongside survivors of domestic abuse and their families to co-create an approach to supporting survivors, including considerations around data and privacy.
Another area in which Hannah identified a potential opportunity for digital is in children’s services:
“I think there needs to be something there for children who have lived with or witnessed this abuse.”
“I really worry about the long term effects that witnessing abuse has on my niece who is now 13. She doesn’t live with my sister (her mother) and doesn’t see much of her, but even so, she is aware of everything that is going on.”
“My niece was removed from the abusive environment at an early age. She attended support group events with other children who had witnessed abuse, and had counselling at primary school. But she has now moved to a much larger high school, where there’s not that much support available. She may not even want it now.”
“She is getting to the age where she realises that there is something different about her family dynamic. There’s a stigma there. She just wants to be normal, and part of this is resisting any support that has a stigma attached.”
“I wonder how tech might be able to better support children living with or witnessing abuse. They are naturally tech-savvy and very social media driven – could this be a way to connect? If my niece could have a discreet online tool – for mentoring, peer support or information – that she could use as and when she wanted, this could really help.”
The Tech vs Abuse research report found that peer-to-peer support can be a very powerful form of support for survivors of abuse. And there are lots of digital peer to peer platforms currently out there, such as Facebook groups and the Women’s Aid Survivors Forum. Moreover, ‘information on how to help children of parents who are abuse survivors was the fourth most popular answer in a Facebook poll run as part of the research process. Despite this, there is no peer to peer tool currently out there designed specifically for children who have experienced and/or witnessed abuse, and this gap in provision needs to be filled.
Hannah also spoke to me about the need for a collaborative approach to logging evidence of abuse across families:
“Even when people do realise they are living with abuse, there is often no action – and even family members can rationalise what is going on.”
“It’s easy to forget how bad things have been when the abuser is ‘doing well’ or seems to have changed.”
Even my Mum has gone from worrying that every phone call is the police to say my sister has been killed, to testifying in court to have a restraining order lifted because her partner seems to have changed.
“I think tech could help with this. A collective record contributed to by family and children alongside survivors themselves, could act as a reminder of the severity and gravity of the situation and build robust evidence.”
There are currently some brilliant evidence-collection platforms out there, such as Just Evidence which allows survivors to record, describe and validate evidence. Or SmartSafe+, which has been designed to assist women to collect and store evidence in order to support them to get an intervention order, or prove a breach. But what strikes me is that these apps focus predominantly on evidence-gathering for legal purposes; with one person acting as the ‘gatherer’. It seems that there could be merit in pursuing a crowd-contributed log which survivors, alongside trusted friends and family members, come together to piece together a collective truth. Rather than (just) focusing on evidence for legal proceedings, could there be a focus instead on understanding or reinforcing the full picture of the abuse that’s taking place to support challenge number 1: ‘realising it’s abuse’, whilst uniting survivors with a support network around them?
The future of tech to combat abuse
Since my conversation with Hannah, Tech Vs Abuse have released another research report: Tech vs Abuse 2.0, which builds on, explores and updates the original findings.
Findings of the second report saw even more risks and fears surrounding technology, with the widespread uptake of smartphones and Internet of Things devices making it even easier for perpetrators to use tech to abuse. However, on the flip side, it also found that there are now more digital solutions than ever to support survivors. In addition the need for more digital solutions particularly around early stage prototypes, to support those in need of access to services and support, is still very much there. Specifically, there is a greater recognition of the need to extend support to those who are recovering from an abusive relationship – be it help with their finances, or with offsetting the risk of falling into a similar relationship in the future.
These findings have been used to develop four key design challenges:
2019 Tech vs Abuse Design Challenges
|Realising it’s abuse||Finding the right information at the right time||Effective real-time support services||Recovery|
|People have a better understanding of what a healthy relationship looks like, realising when they are experiencing abuse in their relationship and/or when they are abusive towards others. Friends, family, co-workers and professionals they interact with are also better able to identify this and know how best to support them.||People are able to find the right information at the right time. Using different platforms, they can access relevant, trustworthy, and safe sources. Key tools and resources are easy to find, simple to navigate, and quick to interact with. People of all ages, genders, cultural backgrounds, sexual orientations, and abilities can easily find resources relevant to them.||People can find and access services for support (including referrals, if required) seamlessly and with minimal logistical and emotional burden, in a format that works in the moment, context, and time people have. Real-time support is available when it’s most needed, including in the middle of the night or during the weekend. People of all ages, genders, cultural backgrounds, sexual orientations, and abilities can easily connect with services relevant to them.||People have access to advice, information, resources and tools to help rebuild their lives, tailored to different situations. This includes support for mental health issues, confidence building, practical needs, families affected by abuse, and understanding healthy relationships.|
In response to these design challenges, a second round of the funding programme was announced, which is currently in the shortlisting and assessment stage, and is funded jointly with Esmée Fairbairn Foundation and The Clothworkers’ Foundation.
At Reason Digital, we greatly welcome the fund. Tech vs Abuse is an amazing initiative that has the potential to help thousands of people recognise and recover from a situation that can literally be a case of life or death. We particularly welcome the more inclusive focus, and recognition that ‘people of all ages, genders, cultural backgrounds, sexual orientations and abilities’ can be victims of abuse. We felt that the first report would have benefitted from recognising the unique manifestations of abuse experienced by more specific groups, such as trans women, who often face other multi-faceted layers of social exclusion which make seeking help even harder. We believe it’s an important step to actively recognise diversity here.
We feel that this diversity must be reflected in the tech industry in order to create digital solutions which are genuinely inclusive and user-centric – it’s no secret that the industry is currently saturated with white, straight, middle class men. Though the tide is starting to turn, when I speak at digital conferences, for every woman in the audience, I still see ten more men staring back at me. And the results are digital tools created for – you guessed it – white, straight, middle class men. For example, last year, one of the largest international digital retailers – Amazon – were under fire for creating an Artificial Intelligence-based recruitment service which was inherently sexist.
I feel really proud to work for a digital social enterprise that is making active steps to foster diversity and inclusivity. Last year, we decided to publish Reason Digital’s gender pay gap, which was found to actually sway in women’s favour. And my colleague, Ian, wrote a brilliant blog, Gender diversity and discrimination: how not to be part of the problem, which explores how people in positions of privilege can support diversity rather than exacerbate the problem.
We also have a Women’s Leadership Group, which all women in the organisation are invited to join, in which we discuss any issues or obstacles we face as women, hold coaching sessions to support us to overcome them, and agree and implement training needs for the wider organisation to encourage equality. This year, we’ve held sessions on ‘Imposter Syndrome’, ‘Networking’, and ‘Inclusivity in the Workplace’ (which was delivered by the LGBT Foundation).
This year, we’ve also been working closely with InnovateHer an organisation who have been set up to get more young women into STEM careers, with a particular focus on web development, gaming and tech for good. We are working with a group of twenty school girls around mentoring, field trips and helping them with tech for good solutions for parents, carers and children at Alderhey Children’s Hospital.
Currently, 35% of Reason Digital is made up of women, with 44% of our Senior Management Team made up of women. 33% of our staff identify as lesbian, gay or bisexual; and 27% have a physical or mental illness or disability. These stats are very encouraging, particularly in the industry that we’re in. However, we recognise that we still have some way to go, especially with having more black, Asian and minority ethnic representation; which we’re actively working towards.
We pride ourselves on leading by example with diversity and inclusivity, and believe it puts us in a unique position to be able to disrupt and create real change in the tech industry. This will be absolutely vital in the fight against domestic abuse, and I implore all our peers in the industry who are developing digital solutions to open up the conversation to ensure that diversity is championed and everyone’s voice is heard.
In short, we have an amazing opportunity with the Tech vs Abuse fund to create digital solutions that have the potential to help thousands of people experiencing and recovering from Abuse. However, this opportunity also comes with huge risks that need to be offset wherever possible. I am reminded of a quote that David Heinemann brought to my attention recently during his talk at our annual Charity Digital Conference in partnership with the Directory of Social Change. The quote was from ‘The Book of Radical Love’, published by Ates Ilyas Bassoy, the Head of Campaigning from the Republican People’s Party (CHP), which has been hailed as the reason for the party’s recent election success in April of this year, ending a quarter-century conservative rule in Turkey.
A knife is useful for slicing bread and fruit. But the same knife can also be used to hurt someone. Therefore, we can’t call a knife useful or harmful. The important thing is how we use it.
These words certainly ring true in the context of domestic abuse and technology. Just like a knife, technology has the potential to be useful or destructive. We in the tech sector need to tread that edge carefully. We need to be led by the expertise of charities, organisations, services and, most importantly, individuals with lived experience – like Hannah and her sister – to create tools with beneficiaries for beneficiaries, that are genuinely useful, and empower survivors of abuse rather than oppress them further. | <urn:uuid:d1a9353a-313f-46cd-a94a-8d96db1af0ec> | CC-MAIN-2022-33 | https://reasondigital.com/blog/domestic-abuse-is-technology-part-of-the-problem-or-solution/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571745.28/warc/CC-MAIN-20220812170436-20220812200436-00497.warc.gz | en | 0.961468 | 5,146 | 2.765625 | 3 |
Max Ernst was a renegade, a provocative and inventive artist who mined his own psyche for surreal images that challenged social norms. A World War I veteran, Ernst was deeply disturbed by his experiences and profoundly skeptical of Western civilization. These emotions fed his perception of the modern world as irrational, which became the foundation of his art. Max Ernst's Surrealist and Dada artworks reflect his aesthetic vision as well as his wit and vigor.
Max Ernst’s Biography and Art
Max Ernst's paintings were pioneering examples of both Dada and Surrealism. His engagement with the psyche, his social criticism, and his wide-ranging experimentation in both subject and technique continue to have an impact. His artworks challenged artistic conventions and practices while demonstrating a profound understanding of European art history. Ernst's Surrealist paintings were often non-representational works without clear subjects; they challenged notions of artistic purity, critiqued religious iconography, and introduced new ways of making art to convey the modern condition.
| Date of Birth | 2 April 1891 |
| Date of Death | 1 April 1976 |
| Place of Birth | Brühl, German Empire |
| Associated Movements | Surrealism, Dada |
Max Ernst was born the third of nine children in a middle-class Catholic family in Brühl, near Cologne. His father, Philipp, was a teacher of deaf children as well as an amateur painter, a devout Christian, and a strict disciplinarian. His father's rigidity fostered in Ernst an urge to defy authority, while his father's habit of painting and sketching in nature helped spark Ernst's own desire to become a painter.
Ernst received no official art instruction other than this initiation to amateur art at home; therefore, he was accountable for his own artistic skills.
Ernst entered the University of Bonn in 1909 to study art history, philosophy, psychology, literature, and psychiatry. He toured sanitariums and became captivated with the artwork of mentally ill people; that same year, he also began painting, making drawings on the grounds of the Brühl castle, as well as paintings of his sister and himself. The artist then encountered August Macke and subsequently joined the Die Rheinischen Expressionisten group of painters in 1911, opting to pursue a career as an artist. In 1912, he went to the Sonderbund exposition in Cologne, where he was deeply impressed by pieces by Pablo Picasso and post-Impressionists such as Paul Gauguin and Vincent van Gogh.
Portrait of Max Ernst, 1968; Unknown author Unknown author, CC0, via Wikimedia Commons
Max Ernst’s artworks were shown at Galerie Feldman in Cologne that year, with that of the Das Junge Rheinland group, and then in other group shows in 1913. Ernst used a satirical technique in his works during this time period, juxtaposing repulsive elements with Expressionist and Cubism motifs. Max Ernst then met Hans Arp in the city of Cologne in 1914. The two became steadfast friends and had a 50-year-long connection. After finishing his training in the summer, Ernst’s life was disrupted by World War I. Ernst was conscripted and saw action on both the Eastern and Western Fronts.
The war had a terrible impact on Ernst; in his memoirs, he described his time in the army as follows: “On the first of August 1914 M[ax]. E[rnst]. died. He was resurrected on the eleventh of November 1918”
Ernst was allocated to chart maps for a brief spell on the Western Front, allowing him to resume painting. Max Ernst was among several famous painters who came back from their war duties extremely emotionally traumatized and mentally removed from European customs and conventional beliefs. Several German Expressionist artists, including Franz Marc and August Macke, were killed in battle during the war.
Despite being mostly self-taught, Ernst was inspired by the paintings of August Macke and Vincent van Gogh. Giorgio de Chirico’s canvases ignited his curiosity in dream iconography and the surrealistic. Ernst used his childhood and military memories to create ridiculous and horrific scenarios. Ernst maintained a rebellious streak throughout his career, practically turning the world upside down in several of his paintings.
After returning to Germany following the armistice, Ernst, together with the artist-poet Jean Arp, helped found the Dada group in Cologne, while maintaining strong relations with the avant-garde movement in Paris. He married art history major Luise Straus, whom he had first met in 1914. In 1919, Ernst paid a visit to Paul Klee in Munich and studied Giorgio de Chirico’s works.
Portrait of Giorgio de Chirico, 1936; Carl Van Vechten, Public domain, via Wikimedia Commons
The same year, influenced by de Chirico, teaching-aide manuals, and other sources, the first of Max Ernst’s collages were created, a style that would come to dominate his creative endeavors. Ernst’s illogical image-making enabled him to make the worlds of dreams, the subliminal, and the unintentional all visible as he delved into his own mind for motivation and to address his own suffering.
Ernst was in Cologne curating journals and assisting in the creation of a Dada exhibition in a public restroom, where customers were greeted by a lovely young lady spewing horrible poetry.
Ernst’s sculpture was also on exhibit, as was an ax, which the spectators were encouraged to use to attack and destroy the piece of art. This audience participation event outraged bourgeois sensitivities. Ulrich Ernst, Ernst and Luise’s son, was born on the 24th of June 1920 and went on to become a painter. His marriage to Luise ended in divorce. He then subsequently met Paul Éluard, who became a lifetime friend, in 1921. Éluard purchased two of Ernst’s paintings – Celebes (1921) and Oedipus Rex (1922) – and chose six collages to accompany his poetry book Répétitions.
Opening of the Max Ernst exhibition at the gallery Au Sans Pareil, May 2, 1921. From left to right: René Hilsum, Benjamin Péret, Serge Charchoune, Philippe Soupault on top of the ladder with a bicycle under his arm, Jacques Rigaut (upside down), André Breton, and Simone Kahn; Unknown author Unknown author, Public domain, via Wikimedia Commons
In 1922 Ernst moved to Paris and worked and lived there until around 1941 when the second world war made it extremely difficult for him to remain in Europe. With the release of André Breton’s First Surrealist Manifesto (1924), Surrealism proceeded to supplant Dadaism over the next few decades, and Ernst had become a fundamental component of the group.
Ernst and his artist colleagues began exploring the potential of autonomism and visions; indeed, his creative experiments were facilitated by hypnotherapy and hallucinogens. In 1925, Ernst began to experiment with frottage (pencil rubbings of substances such as natural wood grains, fabric, or foliage) in order to arouse the torrent of imagery from his unconscious, as well as decalcomania (the practice of transmitting pigment from one material to another by squeezing the two simultaneously).
His experiments and advances in technology resulted in finished images, unintended patterns, and unique textures, which he subsequently incorporated into his paintings and sketches.
This emphasis on material touch, as well as changing common items to create a picture that represented some type of collective awareness, would become crucial to Surrealism’s idea of automatism. He also invented the grattage method, which involves scraping paint over a canvas to show the impressions of objects put beneath it. This technique was utilized in his well-known work Forest and Dove (1927).
The year following that, Ernst then worked with Joan Miró on various designs for Sergei Diaghilev. Ernst devised grattage, in which he trowelled color off his canvases, with the assistance of Miró. Ernst acquired a love of birds, which was evident in his art. In Max Ernst’s paintings; he had a bird as his alter ego, which he dubbed Loplop. He said that his alter-ego was an augmentation of himself, resulting from an early mix-up of birds and people.
He claimed that when he was a child, he awoke one night to discover that his favorite bird had perished; a few moments later, his father proclaimed the birth of his sister.
Loplop is frequently featured in Max Ernsts’ collages of the works of other creators, such as Loplop presents André Breton. Max Ernst’s painting The Virgin Chastises the Infant Jesus in Front of Three Witnesses (1926) sparked a lot of debate. Ernst married Marie-Berthe Aurenche in 1927, and his connection with her is said to have influenced the sensual subject matter of The Kiss and other pieces of that year. Luis Bunuel, a Surrealist, directed Ernst in the 1930 film L’age d’Or.
Ernst began sculpting in 1934 and studied under Alberto Giacometti. Peggy Guggenheim, an American heiress and art supporter, purchased a number of Max Ernst’s works in 1938 and presented them at her new gallery in London. Ernst and Peggy Guggenheim were wed sometime between 1942 and 1946.
Hitler and the Nazis had assumed power over Germany by 1933. By the fall of 1937, Hitler had amassed around 16 000 avant-garde pieces from Germany’s state museums and had sent 650 paintings to Munich for his disastrous show (Degenerate Art). Ernst appears to have had at least two works on show in the exhibit, both of which have since completely disappeared or were most likely demolished.
When World War II broke out in September 1939, Ernst was incarcerated as an “unwanted alien” in Camp des Milles, near Aix-en-Provence, with compatriot Surrealist Hans Bellmer, who had previously fled to Paris. He had been residing with his partner and fellow Surrealist painter, Leonora Carrington, who had no choice but to sell their property to clear their debts and depart for Spain since she didn’t know whether he would return.
After being incarcerated on several occasions as a German subject, Ernst managed to escape France with the Gestapo hot on his tail.
As a migrant in New York, he encouraged an entirely new school of American artists alongside prominent avant-garde European artists like Piet Mondrian and Marcel Duchamp. Ernst’s disdain of conventional painting methods, styles, and images (as shown by his father’s work’s classical tradition) fascinated youthful American artists, who, like Ernst, aspired to establish a fresh and unconventional approach to art.
LEFT: Portrait of Piet Mondrian, 1899; Anonymous Unknown author, Public domain, via Wikimedia Commons | RIGHT: Portrait of Marcel Duchamp, 1927; Unknown author Unknown author, Public domain, via Wikimedia Commons
He had a notably great influence on the path of Jackson Pollock’s paintings, who became fascinated in Ernst’s collage features as well as his inclination to utilize his art as an abstraction of his interior condition. Ernst’s capturing of the subconscious and the incidental in his art creating, as well as his tremendous Surrealist experiments with autonomism and spontaneous writing, piqued the curiosity of the new artists.
In 1942, Ernst worked with “Oscillation,” or painting by spinning a paint-filled container pierced numerous times with holes across the canvas; Pollock was particularly taken with this.
Max Ernst then met the vibrant socialite Peggy Guggenheim, who was a gallery curator, and art lover who would also subsequently become his third wife. He gained intimate access to New York’s emerging art circuit due to Guggenheim’s popularity and connections. However, his marriage to Guggenheim was not long-lived, and in October 1946, he married Dorothea Tanning in Beverly Hills, California, in a double wedding alongside Man Ray and Juliet P. Browner.
From 1946 through 1953, the couple lived in Sedona, Arizona, where the high desert scenery inspired them and reminded them of Ernst’s previous work. Regardless of the fact that Sedona was isolated and home to just 400 herders, vineyard laborers, traders, and tiny Native American villages, their presence aided in the establishment of what would become an American artists’ community.
Ernst erected a tiny home on Brewer Road among the massive red rocks, and he and Tanning welcomed intellectuals and European painters such as Yves Tanguy and Henri Cartier-Bresson.
Sedona inspired the painters as well as Ernst, who wrote his book Beyond Painting and finished his sculptural masterwork Capricorn (1948) while living in Sedona. Ernst began to experience financial successes as a consequence of the book and its popularity. Ernst and Tanning eventually returned to France in 1953. Ernst received the major painting prize at the renowned Venice Biennale in 1954. Ernst worked as a painter until his death in 1976 in Paris.
Max Ernst’s Artwork and Style
Max Ernst disrupted artistic traditions while also being well-versed in European art history. He brought into dispute the purity of culture by producing non-representational compositions with no obvious narrative, making light of religious imagery, and creating new approaches to produce art to convey the existing status. Ernst was fascinated by the artwork of the mentally disturbed as a method of accessing basic feeling and unbridled creation.
Ernst was one of the first painters to use Sigmund Freud’s visionary ideas to probe into his own profound psyche in order to discover the source of his own originality.
Ernst was looking within while simultaneously interacting with the collective unconscious through shared dream imagery. Ernst aimed to attain a pre-verbal state of being by painting freely from his inner consciousness and striving to uncover the source of his own creativity. As a result, he was able to communicate his primal emotions and reveal his inner traumas, which were the subject of his collages and canvases.
This desire to create from the subconscious, sometimes referred to as automatic painting, was important to his Surrealist compositions and impacted the Abstract Expressionists who came after him.
With Max Ernst’s collages such as Here Everything is Still Floating (1920), he developed a new reality in which unpredictability and incoherence portrayed the lunacy of WWI and called into question capitalist perceptions. These photographs were taken from scientific textbooks, ethnographic periodicals, and ordinary commerce brochures from the turn of the century. Despite the vain hunt for purpose, these playful works proved to be delightful and fulfilling in the end.
“Collage was considered as a form of crime, meaning one inflicted damage to nature,” the artist would later remark.
Max Ernst’s paintings such as Celebes (1921), with their bizarre juxtapositions of different items, display his devotion to Freudian dream theory. Despite this mismatch – a headless/nude lady, parts of equipment – the picture works as a full composition. Max Ernst’s artwork causes discomfort because spectators are unaware of his goals, as well as contempt because of its irrelevant representation of the human form (the headless body), which is cherished within art creating (since individuals are fashioned in God’s image).
Ernst’s work raises the question of whether reality is the “real” one: that of the nighttime and visions, or that of the waking consciousness.
Max Ernst’s Surrealism paintings often took playful jabs at the status quo of religious themes, such as with The Virgin Spanking the Christ Child (1926). The Virgin Mary, shown as an earthy, irritated mother, harshly paddles her little son – the rebellious infant Jesus – on his bottom, which bears red marks from her punitive hand. Paul Eluard, Andre Breton, and the artist himself are observing through the backdrop window and functioning as bystanders; all three appear unconcerned by the situation.
Ernst successfully subverts his own Catholic belief with its dedication to Christ’s mother Mary, while concurrently belittling much of Western art history with its expansion of affectionate scenes between both the Blessed Virgin Mary and the Christ baby, and undermining the doctrines, upper-class holiness of motherhood. Ernst’s picture is both irreverent and sharply amusing. As predicted, not everyone found the premise amusing, and the piece sparked much debate as an assault on Christianity and current morals.
Ernst also pioneered a new painting technique known as “Grattage”, as can be seen in his work Forest and Dove (1927). This painting exhibits Ernst’s’ grattage’ method, in which he scraped paint over the canvas to expose object impressions, which he invented with the Spanish surrealist Joan Miro. Grattage created a coarse texture that provided another layer to the painting, amplifying the thickness of a forest. It would be a folly to ignore Ernst’s German roots, with their Romantic legacy, and how this influenced his distinctive psychology. In his apocalyptic works, the German idea of Ahnung, or the dread of imminent doom, undulates.
He was well-versed in Wagnerian stories of strange and bewitching German woodlands; the Surrealists eventually embraced forests and dark corners as metaphors for the creative mind.
While most of his work was non-representational, there are a few rare examples where he was directly reacting to current political events such as The Fireside Angel (1937). This strange creature looks to be jumping, arms and legs outstretched, with a gaudy, yet joyful, grin on its face. The figurines and their limbs are discolored and deformed. Furthermore, its limb appears to be giving birth to another entity, as if a malignant mass is spreading.
The painter was motivated to produce the image after Franco’s fascists defeated the Republican camp in the Spanish Civil War. Ernst attempted to produce a picture that reflected the impending turmoil that he believed was sweeping over Europe and coming from his own Germany. Returning to the innocuous and deceptive title, Ernst’s play aimed to entice audiences with pleasant phrases, only to jolt them into doubting their own convictions by naming creatures as if angels. Another politically driven statement piece was Europe after the Rain II (1942).
Europe after the Rain II provides witness to the impossible reign of conflict that decimated Europe at the period, as viewed in the perspective of 20h-century European history. Ernst’s creative representation of the Spanish Civil War and the onset of World War II is unique in this work. The grattage method, used to produce the ruins shapes, aptly recalls Europe’s catastrophic disaster.
The time range assigned to the work shows that Ernst began this painting in France and finished it in the United States while the war raged on and Europe’s fate remained undetermined.
Ernst has created an evocation of a massive catastrophe on this otherworldly painting. An armored, bird-headed creature – maybe a soldier – confronts a female figure with his spear or damaged battle standard in the midst of the destroyed terrain. It’s been hypothesized that the figures are enormous garden statues or semi-mythical heroes of a future conflict. As stated previously, Ernst’s usage of the bird-human image might be self-referential.
Max Ernst’s Artistic Legacy
While still living, Max Ernst accomplished the unusual feat of building a glowing image and critical reputation in three countries (France, Germany, and the United States). Although Ernst is more known to art historians and scholars than to the general public, his influence on the development of mid-century American art is indisputable.
Ernst worked with the Abstract Expressionists both personally and through his son, Jimmy Ernst, who went on to become a well-known Abstract Expressionist artist after the war as a result of his association with Peggy Guggenheim.
Ernst grew interested in Southwest Native American Navajo art as a creative influence while living in Sedona. The later Abstract Expressionists, particularly Pollock, became captivated with the art of sand painting, which is strongly linked to healing practices and spiritual incantations. Ernst remains a pivotal figure for artists who are strongly concerned with the method, psychology, and the urge to shock and challenge social norms.
Max Ernst at work in his studio, date unknown; National Archives at College Park – Still Pictures, Public domain, via Wikimedia Commons
In his hometown of Brühl, Germany, the Max Ernst Museum was launched in 2005. It is housed in a late-classicist 1844 structure that has been combined with a contemporary glass pavilion. Ernst used to frequent the ancient ballroom as a popular social location when he was younger.
The collection includes canvases, sketches, frottages, assemblages, practically all of his lithographic pieces, over 70 bronze sculptural works, and over 700 papers and pictures by Henri Cartier-Bresson, Man Ray, Lee Miller, and others.
The artist gave pieces to the City of Brühl in 1969, which formed the foundation of the collection. As many as 36 paintings, presented by the painter to his fourth wife Dorothea Tanning, are on loan from the Kreissparkasse Köln on an indefinite basis. Among his notable works are the sculptures The Teaching Staff for a School of Murderers (1945) and King Playing with the Queen (1944). Other artists’ works are also shown in the museum.
Exhibitions and Honors
From 1970 to 1972, a retrospective of 104 of Max Ernst’s paintings from the Menil Collection spanning the years 1920 to 1968 toured Europe. The exhibition premiered in Paris in April 1971, on Max Ernst’s 80th birthday, and was supplemented by 44 works from various collations. Here are a few more notable exhibitions and honors of the artist:
- 1954 – Grand Prize for Painting – Venice Biennale
- 1959 – Grand Prix national des arts – Musée National d’Art Moderne Paris
- 1961 – Museum of Modern Art, New York
- 1962 – Tate Gallery London
- 1969 – Moderna Museet, Stockholm
Max Ernst’s paintings were forerunners of both Dadaism and Surrealism art. His interest in psychology, societal critique, and broad experimentation in both theme and approach continues to have an influence. Max Ernst’s artworks questioned art cultures and practices while exhibiting a deep awareness of European art history. Here is a list of some of his most important works:
- Crucifixion (1913)
- Town with Animals or Landscape (1916)
- Aquis Submersus (1919)
- Fiat modes (1919)
- He’s Not Very Well, the Hairy-hoofed Horse (1920)
- Murdering Airplane (1920)
- All Friends Together (1922)
- The Wavering Woman (1923)
- Ubu Imperator (1923)
- Paris Dream (1925)
- Loplop Introduces a Young Girl (1930)
- The Giant Snake (1935)
Max Ernst was a fascinating artist. Hopefully, you have enjoyed reading about him today. Perhaps you would like to learn more and would be interested in purchasing a Max Ernst biography. We have compiled a list of awesome books in case you would like to explore the artist’s life and art even more!
A Little Girl Dreams of Taking the Veil (2017) by Dorothea Tanning
While exploring an illustrated book of items, timepieces, tools, and clothes—artist Max Ernst was fascinated by the unexpected juxtapositions of the things. Ernst pioneered the collage novel and converted ordinary advertising imagery into revelatory dramas rooted in his visions and inner desires by modifying Victorian-era prints into dazzling tableaux and adding brief subtitles. Its hallucinatory images revolve around the dreams of a young girl who breaks her virginity on the day of her holy communion and vows to become a nun. Ernst, a Dadaist and Surrealist artist, explores the non-rational but very real junction of religious ecstasy and sensual desire with comedy and sarcasm. This truly strange novel retains its element of surprise as well as its imaginative force a century after its initial publication.
Une Semaine De Bonte: A Surrealistic Novel in Collage (1976) by Max Ernst
This is one of Max Ernst’s (b. 1891) iconic collage masterpieces. Ernst was a prominent role in the surrealist movement and was one of the most creative painters of the twentieth century. Ernst created this series of 182 weird and darkly hilarious collage pieces of classic visions and sensual fantasies that appear to magically entice the unconscious into view using the vintage collection and pulp fiction illustrations: Serpents arrive in the drawing-room and bedroom, a nobleman has the face of a lion, and the salon floor changes to water, on which some individuals appear to be able to walk while others perish.
- A legendary collage masterpiece by leading Surrealist artist Max Ernst
- A series of 182 bizarre and darkly humorous collage scenes
- Divided into seven parts; one for each day of the week
Max Ernst was a renegade, a provocative and inventive artist who explored his psyche for surreal images that challenged social norms. Ernst, a World War I veteran, was terribly disturbed by his experiences and strongly skeptical of Western civilization. These powerful emotions led straight into his perception of the modern world as illogical, which became the foundation of Max Ernst’s artwork. Max Ernst’s Surrealism and Dada Artworks reflected his aesthetic vision, as well as his wit and vigor.
Frequently Asked Questions
Who Was Max Ernst and Why Was He Famous?
Max Ernst was a German painter and sculptor who was a strong champion of insanity in art and the founder of the Surrealist Automatism movement. He managed to obtain naturalization in both the United States (1948) and France (1958). Ernst’s early pursuits were psychology and philosophy, but he dropped out of the University of Bonn to pursue painting.
What Style Were Max Ernst’s Artworks?
Ernst was looking within while simultaneously interacting with the collective unconscious through shared dream imagery. Ernst aimed to attain a pre-verbal state of being by painting freely from his inner consciousness and striving to uncover the source of his own creativity. As a consequence, he was able to articulate his basic feelings and disclose his innermost experiences via his collages and canvases. This urge to paint from the subconscious, often known as automatic painting, was central to his Surrealist works and influenced the Abstract Expressionists who followed him.
Who Created the Portrait of Max Ernst?
Max Ernst did not create this painting. The Portrait of Max Ernst was painted by Leonora Carrington. It does, however, display Surrealist imagery. | <urn:uuid:3fa28a8a-c5f9-4b83-a74f-fde80f8a3925> | CC-MAIN-2022-33 | https://artincontext.org/max-ernst/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00696.warc.gz | en | 0.970205 | 5,841 | 2.546875 | 3 |
The concept of Social Darwinism is ineffective due to the creation of government laws starting in the twentieth century to modern day, which consistently take away from the supporting idea of “ survival of the fittest.” Charles Darwin created the theory known as social Darwinism, advocated by Herbert Spencer, in which individuals follow the same laws of natural selection as plants and animals. The purpose of natural selection is for those who are strongest to continue and the weak to die out, it is what makes society evolve. Throughout the years, government involvement has increased substantially whether it be political, economic, social or even biological, that prevents natural selection and the unfit to die out. It creates an equal playing field for all individuals, despite being wealthy, poor, ‘ fit’ or ‘ unfit.’
The government supporting the poor goes against the main idea of social Darwinism. “ Spencer opposed any laws that helped workers, the poor, and those he deemed Genetically weak. Such laws, he argued, would go against the evolution of civilization by delaying the extinction of the ‘ unfit’” (). President Franklin D. Roosevelt supported the idea of federal aid for the poor, a national welfare system was established for the first time in American history in 1935. The purpose of the welfare system is to assist those who are not able to support themselves due to cases like unemployment, unskilled labor capacity, disability, or other similar reasons. The welfare system opposes the idea of social Darwinism due to its purpose to support and provide for those who are unable to do it for themselves. “ The state was not to hinder the strong or assist the weak, interceding only to protect individual freedom and rights.” (). Many of those individuals who partake in the system are supposed to be supported until they are able to do so on their own, it is not meant to live off of. In modern day, there are cases where some abuse the welfare system by lying and cheating to receive more benefits from the government. For example, there are restrictions on what an individual is allowed to purchase with food stamps. There are many individuals who purchase items that are restricted and are not necessities like alcohol, pet food, household supplies, and other similar items. The system is to support individuals to help get them back on their feet, or to help maintain a basic living style that covers all necessities, however many are abusing the system and staying on the system, therefore they do not have to work or work harder to earn what they need to survive and it lessens the individuals motivation to strive for more.
The time of the Industrial Revolution created a period of development that largely impacted how goods are produced. The revolution transformed mostly rural societies into industrialized urban areas. During this time period, goods were increasingly starting to be produced in mass quantities by machines in factories. The dramatic change from small towns to major cities brought problems like pollution, overcrowded cities, lack of clean drinking water and sanitation. However, the cities greatly benefited the economy with booming businesses. The idea of “ survival of the fittest” applied to laissez faire, in which businesses are allowed to operate with little regulation from the government. In other words, the government did not have much authority of how the business should be run and its conditions. This allowed businesses to have whatever working conditions, maximum hours, and minimum wages they pleased, and those who had a problem with it either quit or were fired. “ He believed labor unions took away the freedom of individual workers to negotiate with employers.” (). Those who stayed and worked had the motivation to do so and earn the money. Soon enough protests start against the businesses which formed labor unions caused the beginning of government intervention to create and enforce regulations. The regulations enforced child labor laws, health regulations, minimum wage and wage hours. Those who were “ unfit” were then able to work due to the governments support.
Taxes take away from the hard-earned wealth and gives it to those who did not earn it themselves. It allows those who are unmotivated to continue what they are doing and not strive for more or better because they are getting handed everything by doing nothing because the wealthy are working and doing it for them. Herbert Spencer is known for advocating laissez-faire that did not support regulation of private enterprise from the government. “ He considered most taxation as confiscation of wealth and undermining the natural evolution of society,”(). in other words, the government taking individuals wealth who have worked for it is defeating the purpose of competition and the intelligent to evolve. The wealth that is earned is suppose to be passed down to future generations to take that wealth and expand and do better. “ The former carries society forward and favors all its best members; the latter carries society downwards and favors all its worst members.”
President Richard M. Nixon signed the Controlled Substances Act (CSA) which calls for the regulation of certain drugs and substances. Drugs or substances like marijuana, heroin, LSD, and others are illegal under federal law. Creating a law that makes it illegal for individuals to take such drugs, opposes survival of the fittest. Those who are susceptible to take such drugs and or susceptible of addiction keeps the genes in circulation to pass down such traits to future generations. The drugs are a gateway to cancel out or remove such individuals from continuing and passing down to future offspring, but the laws against the usage of the drugs keep those so-called bad bloodlines in society and passed to their children.
The idea of eugenics was popular across the world and soon to be controversial. Eugenics is the science of improving the human species by selectively mating people with specific desirable hereditary traits. The purpose of eugenics is to breed out any diseases, disabilities or any other undesirable traits, to create a superior human population. “ Darwin’s natural selection itself had evolved from biological theory, to social, and economic theory, until finally it provided the intellectual foundation for creating a ‘ better’ human race” (1997, p. 97). Starting in the early 20 th century, thousands of individuals from mental institutions starting in California took part of sterilizations that were to protect society from the offspring of people with mental illness. Not only did the sterilizations affect the mentally ill, but it was also performed on minorities. In 1924, they successfully passed national legislation to restrict immigration of people who deemed less desirable. This also prevented marriage between races and those who were known as criminals. Thirty-three states allowed involuntary sterilization on those who deemed unworthy to procreate. The U. S. Supreme Court ruled in 1927, that forced sterilization on the handicapped does not violate the U. S. Constitution. The ruling was overturned in 1942, but many were victims of the procedure. Adolf Hitler believed that non-Aryan races such as Jews and Gypsies were inferior and performed extreme measures like genocide, to keep the gene pool pure. Similarly to the United States, the Nazis created the Law for the Prevention of Hereditarily Diseased Offspring which resulted in thousands of forced sterilizations. Hilter continued trying to purify his superior race by targeting his own Germans with mental or physical disabilities, including those who were blind and deaf, euthanizing them by gas or lethal injection. Hilters actions were morally wrong and was eventually prevented from continuing with the defeat of Germany and ending of World War II. Many often-associated eugenics with Hilters actions and ideas which therefore caused a decline in popularity. Eugenics across the world in the 20 th century were morally wrong, due to similar ideals to Hilter, which created laws to shut down such procedures and actions.
Those who have disabilities, whether it is mentally or physically, are protected by the government to give them a chance and to not be left behind from society. For education, those who are mentally challenged are put into special education programs within the school to receive an education like any other student. This is known as inclusion; it is wanted for the disabled children to feel similar and included with the other children, and to not be left behind which opposes social Darwinism of leaving the ‘ unfit’ behind. For public schools, the federal government provides funding for the schools, more specifically to benefit the special education children. The funding also includes free reduced lunch, which is for children who cannot afford to buy food from the school for lunch. The government is aiding those who are unable to support themselves or those who are the ‘ unfit’ of society. In public schools, making every individuals education the same often affects the more intelligent of the population because the laws requiring diversity, which often brings the intelligent down. The Americans with Disabilities Act (ADA), bans discrimination against employees and potential employees who have physical or mental disabilities that limit daily everyday life activities. This law is similar to the inclusion idea of education, and how it opposes Darwinism. This act is to prevent those with disabilities from being excluded or in other words to not leave the ‘ unfit’ behind.
The Civil Rights Act of 1964, signed by President Lyndon Johnson, ended segregation within public places and banned employment discrimination based on race, religion, sex, and other similar reasons. Many social Darwinist believed the government should not interfere with the development of society, and that the poor should not be helped, and races are biologically superior to others. For a period of time, whites have been the superior race and African Americans have been inferior. Blacks have been segregated from the whites in society in education, workplace, transportation, and many others. The Civil Rights Act made it that every individual is equal, including those of color, which defeats the ideal of competition between individuals.
Individual competition for property, wealth, etc. drives those who are willing to do and be better than others rise up within society and those who fail or simply lack the willingness to do so, helps eliminate the weak and immoral of the population. There is the basic understanding that intelligence tends to dominate over the lazy population. “ To affirm that they are equal would be to say that a man who has no tool can get as much food out of the ground as the man who has a spade or a plough; or that the man who has no weapon can defend himself as well against hostile beasts or hostile men as the man who has a weapon. If that were so, none of us would work any more.” (). The intelligent are the ones who have the wealth and the success they were able to achieve because they earned it by working hard. “ Competition, therefore, is a law of nature. Nature is entirely neutral; she submits to him who most energetically and resolutely assails her. She grants her rewards to the fittest, therefore without regard to other considerations of any kind.” ().
In 1993, President Bill Clinton signed the Family Medical Leave Act (FMLA). The FMLA supported individuals that needed to balance work and life, without losing their job. The act allows up to sixteen weeks off from work while the individual is allowed to keep their position in the workplace. The act originally applied to women who are pregnant and needed time off from work for their health and to care for the newborn baby, this soon expanded to all individuals of the workplace. The act strictly opposes competition within the workplace.
On June 26, 2015, the U. S. Supreme Court legalized same-sex marriage in all fifty states. The main idea of social Darwinism is for the human species, just like plants and animals, are meant to evolve. With same-sex marriage, the couples cannot procreate, it is biologically impossible, therefore preventing the human race to evolve. Similarly, those who identify as transgender also has an affect on the evolution of the human race. If a biologically born female transgenders male and a biologically born male transgenders female can reproduce. It is only possible when a biologically born female still has their reproductive organs that are still functioning, same as the biologically born male. If their reproduction organs are removed and altered to the opposite gender they identify as then the individuals can not procreate, therefore affecting the evolution of the human race.
Transgender individuals have been coming out more in modern society. The individuals are wanting to be treated as equals and identified as the gender of their choice. This is affecting the competitive aspect of Darwinism and making the playing field equal for every individual. With the individuals who are transgender, their participation in school sports are greatly affecting records and scholarships. Mary Gregory is biologically male but transgender female who actively participates in powerlifting. On April 27, 2019, Gregory broke several powerlifting records of the Raw Powerlifting Federation. It sparked controversy and the president of the foundation stated, “ In our rules, we go by biological. According to the rules, she can only lift in the men’s division. …I’m not trying to hurt anyone’s feelings but I have to follow the rules.” (). Similarly, controversy has developed about a transgender female, biologically male, participating in high school track and field. Female top runners have filed a complaint due to the transgender individual dominating the races which is affecting their chances of scholarships for college. The Connecticut Interscholastic Athletic Conference allows athletes to compete based off of the gender the individual identifies as. To make the playing field fair and ‘ equal’ laws have been placed to make that happen, but unfortunately this instance ruins the competitive aspect of social Darwinism.
In 2010, the Affordable Care Act (ACA) was enacted by President Barack Obama. The act required every United States family to have health insurance, the families who choose to oppose then has to pay a tax penalty. Those who have declared previous medical conditions, for example cancer, cannot be denied health insurance. This act requires every individual, poor or wealthy, fit or unfit to be equal. This act strictly opposes the Darwinism concept of the government not supporting the ‘ unfit.’
Communist and socialist countries promoted equality and eliminate social classes, therefore opposing Darwinism beliefs. Both types of government heavily regulate the economy. “…arguing that human progress resulted from the triumph of superior individuals and cultures over their inferior competitors; poverty was evidence of inferiority. Anything that interfered with the self-improvement of superior individuals or markets was to be resisted.” (). The government owns and operates production and businesses, the citizens are not allowed to have ownership. This allows the government to control the economy. It creates large equality across the country for every individual and eliminates competition. One of the main understandings of Darwinism is the aspect of competition between individuals, it is what makes the strong move forward and the ‘ unfit’ fall behind, it is what strives evolution.
Social Darwinism supports the competition between individuals and the evolution of species. Governmental laws have been enacted more throughout history since the mid twentieth century to lessen individual competition and slow down evolution. Government is controlling more of the social, economic, political and scientific world to create a fair, equal playing field for all individuals. “ Survival of the fittest” is meant for the strong and intelligent to thrive in society and pass on to future generations, and the lazy and unintelligent to be unsupported, lost within society in the hopes to eventually die out, all in the purpose to evolve and create a superior species. Therefore, government involvement and creation of laws prevent “ survival of the fittest” from pursuing, causes it to be ineffective.
- History. com Editors. ” Social Darwinism.” History. com. April 06, 2018. Accessed August 05, 2019. https://www. history. com/topics/early-20th-century-us/social-darwinism.
- History. com Editors. ” Eugenics.” History. com. November 15, 2017. Accessed August 05, 2019. https://www. history. com/topics/germany/eugenics.
- ” Social Darwinism in the Gilded Age.” Khan Academy. Accessed August 05, 2019. https://www. khanacademy. org/humanities/us-history/the-gilded-age/gilded-age/a/social-darwinism-in-the-gilded-age.
- Kretchmar, Jennifer. “ Social Darwinism.” Salem Press Encyclopedia , 2019. http://search. ebscohost. com. ezproxy. fgcu. edu/login. aspx? direct= true&db= ers&AN= 89185708&site= eds-live.
- ” Capitalism and Western Civilization: Social Darwinism by William H. Young.” National Association of Scholars. Accessed August 05, 2019. https://www. nas. org/blogs/dicta/capitalism_and_western_civilization_social_darwinism.
- ” Social Darwinism.” Social Darwinism. Accessed August 05, 2019. http://autocww. colorado. edu/~toldy2/E64ContentFiles/SociologyAndReform/SocialDarwinism. html.
- Claeys, Gregory. ” The ” Survival of the Fittest” and the Origins of Social Darwinism.” Journal of the History of Ideas 61, no. 2 (2000): 223-40. doi: 10. 2307/3654026.
- Brechlin, Dan. ” Connecticut High School Transgender Athletes ‘no Longer Want to Remain Silent’ following Title IX Complaint.” Courant. com. June 20, 2019. Accessed August 05, 2019. https://www. courant. com/sports/high-schools/hc-sp-transgender-policy-runners-respond-20190619-20190620-5x2c7s2f5jb6dnw2dwpftiw6ru-story. html.
- Maese, Rick. ” Stripped of Women’s Records, Transgender Powerlifter Asks, ‘Where Do We Draw the Line?'” The Washington Post. May 19, 2019. Accessed August 05, 2019. https://www. washingtonpost. com/sports/2019/05/16/stripped-womens-records-transgender-powerlifter-asks-where-do-we-draw-line/? noredirect= on&utm_term=. d18ceb5a7096.
- History. com Editors. ” Industrial Revolution.” History. com. October 29, 2009. Accessed August 05, 2019. https://www. history. com/topics/industrial-revolution/industrial-revolution.
- Costly, Andrew. ” BRIA 19 2 B Social Darwinism and American Laissez-faire Capitalism – Constitutional Rights Foundation.” BRIA 19 2 B Social Darwinism and American Laissez-faire Capitalism – Constitutional Rights Foundation. Accessed August 05, 2019. https://www. crf-usa. org/bill-of-rights-in-action/bria-19-2-b-social-darwinism-and-american-laissez-faire-capitalism. html.
This work "Government laws preventing social darwinism" was written and submitted voluntarily by your fellow student. You can use this sample for research and reference purposes to help create your own paper. The use of any parts of the work without proper citation is forbidden.
If you are the owner of this work and don’t want it to be published on NerdySeal, request its removal.Request Removal
Cite this Essay
NerdySeal. (2022) 'Government laws preventing social darwinism'. 5 August.
NerdySeal. (2022, August 5). Government laws preventing social darwinism. Retrieved from https://nerdyseal.com/government-laws-preventing-social-darwinism/
NerdySeal. 2022. "Government laws preventing social darwinism." August 5, 2022. https://nerdyseal.com/government-laws-preventing-social-darwinism/.
1. NerdySeal. "Government laws preventing social darwinism." August 5, 2022. https://nerdyseal.com/government-laws-preventing-social-darwinism/.
NerdySeal. "Government laws preventing social darwinism." August 5, 2022. https://nerdyseal.com/government-laws-preventing-social-darwinism/.
"Government laws preventing social darwinism." NerdySeal, 5 Aug. 2022, nerdyseal.com/government-laws-preventing-social-darwinism/.
If you have any idea how best to write about Government laws preventing social darwinism, please contact us immediately. We would like to know more: [email protected] | <urn:uuid:4c2bf043-7628-49cf-8e38-17c45d7ffeee> | CC-MAIN-2022-33 | https://nerdyseal.com/government-laws-preventing-social-darwinism/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571989.67/warc/CC-MAIN-20220813232744-20220814022744-00495.warc.gz | en | 0.956081 | 4,375 | 3.484375 | 3 |
1z0-053 Exams Study Guides
The INV_HISTORY table is created using the command:
The following data has been inserted into the INV_HISTORY table:
You would like to store the data belonging to the year 2006 in a single partition and issue the command:
SQL> ALTER TABLE inv_history
INTO PARTITION sys_py;
What would be the outcome of this command?
A. It executes successfully, and the transition point is set to '1-apr-2006'.
B. It executes successfully, and the transition point is set to '15-apr-2006'.
C. It produces an error because the partitions specified for merging are not adjacent.
D. It produces an error because the date values specified in the merge do not match the date values stored in the table.
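For reference, a hedged sketch of the full merge syntax that the exhibit abbreviates; the partition names sys_p1 and sys_p2 are hypothetical placeholders and are not taken from the original exhibit:
SQL> ALTER TABLE inv_history
MERGE PARTITIONS sys_p1, sys_p2
INTO PARTITION sys_py;
-- Only adjacent range/interval partitions can be merged into a single partition.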
You want to perform the following operations for the DATA ASM disk group:
• Verify the consistency of the disk.
• Cross-check all the file extent maps and allocation tables for consistency.
• Check whether the alias metadata directory and file directory are linked correctly.
• Check that ASM metadata directories do not have unreachable allocated blocks.
Which command accomplishes these tasks?
A. ALTER DISKGROUP data CHECK;
B. ALTER DISKGROUP data CHECK DISK;
C. ALTER DISKGROUP data CHECK FILE;
D. ALTER DISKGROUP data CHECK DISK IN FAILURE GROUP 1;
Answer: A
Syntax: ALTER DISKGROUP diskgroup_name CHECK [REPAIR | NOREPAIR];
The check_diskgroup_clause lets you verify the internal consistency of Oracle ASM disk group metadata. The disk group must be mounted. Oracle ASM displays summary errors and writes the details of the detected errors in the alert log.
The CHECK keyword performs the following operations:
• Checks the consistency of the disk.
• Cross checks all the file extent maps and allocation tables for consistently.
• Checks that the alias metadata directory and file directory are linked correctly.
• Checks that the alias directory tree is linked correctly.
• Checks that Oracle ASM metadata directories do not have unreachable allocated blocks.
See the Oracle ASM documentation for details.
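As a minimal illustration (using the disk group name from the question), the check can be run with or without automatic repair; REPAIR is the default:
SQL> ALTER DISKGROUP data CHECK;          -- checks metadata and repairs any inconsistencies it finds
SQL> ALTER DISKGROUP data CHECK NOREPAIR; -- reports inconsistencies in the alert log without repairing them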
Which two statements are true regarding the functionality of the remap command in ASMCMD? (Choose two.)
A. It repairs blocks that have read disk I/O errors.
B. It checks whether the alias metadata directory and the file directory are linked correctly.
C. It repairs blocks by always reading them from the mirror copy and writing them to the original location.
D. It reads the blocks from a good copy of an ASM mirror and rewrites them to an alternate location on disk if the blocks on the original location cannot be read properly.
Answer: A, D
Reference from the Oracle Database 11g Release 1 documentation:
Repairs a range of physical blocks on a disk. The remap command only repairs blocks that have read disk I/O errors. It does not repair blocks that contain corrupted contents, whether or not those blocks can be read. The command assumes a physical block size of 512 bytes and supports all allocation unit sizes (1 to 64 MB).
Reference from the Oracle Database 11g Release 2 documentation:
The remap command marks a range of blocks as unusable on the disk and relocates any data allocated in that range.
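A hedged ASMCMD sketch; the disk group name, disk name, and block range below are hypothetical. The arguments are the disk group name, the disk name, and the range of physical blocks to repair:
ASMCMD> remap DATA DATA_0001 5000-5999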
What is the advantage of setting the ASM-preferred mirror read for the stretch cluster configuration?
A. It improves resync operations.
B. This feature enables much faster file opens.
C. It improves performance as fewer extent pointers are needed in the shared pool.
D. It improves performance by reading from a copy of an extent closest to the node.
Answer: D
Preferred Read Failure Groups
When you configure Oracle ASM failure groups, it might be more efficient for a node to read from an extent that is closest to the node, even if that extent is a secondary extent. In other words, you can configure Oracle ASM to read from a secondary extent if that extent is closer to the node instead of Oracle ASM reading from the primary copy which might be farther from the node. Using the preferred read failure groups feature is most useful in extended clusters.
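For illustration, in an extended cluster each ASM instance can be pointed at its local failure group; the disk group and failure group names below are hypothetical, and an SPFILE is assumed:
SQL> ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA' SID = '+ASM1';
SQL> ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEB' SID = '+ASM2';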
Examine the following command:
ALTER DISKGROUP data MOUNT FORCE;
In which scenario can you use the above command to mount the disk group?
A. when ASM disk goes offline
B. when one or more ASM files are dropped
C. when some disks in a disk group are offline
D. when some disks in a failure group for a disk group are rebalancing
Answer: C
In the FORCE mode, Oracle ASM attempts to mount the disk group even if it cannot discover all of the devices that belong to the disk group. This setting is useful if some of the disks in a normal or high redundancy disk group became unavailable while the disk group was dismounted. When MOUNT FORCE succeeds, Oracle ASM takes the missing disks offline.
If Oracle ASM discovers all of the disks in the disk group, then MOUNT FORCE fails. Therefore, use the MOUNT FORCE setting only if some disks are unavailable. Otherwise, use NOFORCE.
In normal- and high-redundancy disk groups, disks from one failure group can be unavailable and MOUNT FORCE will succeed. Also in high-redundancy disk groups, two disks in two different failure groups can be unavailable and MOUNT FORCE will succeed. Any other combination of unavailable disks causes the operation to fail, because Oracle ASM cannot guarantee that a valid copy of all user data or metadata exists on the available disks.
See the Oracle ASM documentation for details.
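A short sketch of the scenario (the disk group name comes from the question; the follow-up query is one way to confirm which disks were taken offline):
SQL> ALTER DISKGROUP data MOUNT FORCE;
SQL> SELECT name, mode_status FROM v$asm_disk WHERE mode_status = 'OFFLINE';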
Which background process of a database instance, using Automatic Storage Management (ASM), connects as a foreground process into the ASM instance?
Answer: ASMB
ASMB (ASM Background Process): Communicates with the ASM instance, managing storage and providing statistics. ASMB runs in ASM instances when the ASMCMD cp command runs or when the database instance first starts if the server parameter file is stored in ASM. ASMB also runs with Oracle Cluster Registry on ASM.
RBAL (ASM Rebalance Master Process): In an ASM instance, it coordinates rebalance activity for disk groups. In a database instances, it manages ASM disk groups.
PMON (Process Monitor): Monitors the other background processes and performs process recovery when a server or dispatcher process terminates abnormally.
SMON (System Monitor Process): Performs critical tasks such as instance recovery and dead transaction recovery, and maintenance tasks such as temporary space reclamation, data dictionary cleanup, and undo tablespace management
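From the ASM instance, the database instances being served (each connecting through its ASMB process) can be listed; a minimal sketch:
SQL> SELECT db_name, instance_name, status FROM v$asm_client;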
Immediately after adding a new disk to or removing an existing disk from an ASM instance, you find that the performance of the database goes down initially until the time the addition or removal process is completed, and then gradually becomes normal.
Which two activities would you perform to maintain a consistent performance of the database while adding or removing disks? (Choose two.)
A. Define the POWER option while adding or removing the disks.
B. Increase the number of ARB processes by setting up a higher value for ASM_POWER_LIMIT.
C. Increase the number of DBWR processes by setting up a higher value for DB_WRITER_PROCESSES.
D. Increase the number of slave database writer processes by setting up a higher value for DBWR_IO_SLAVES.
Answer: A, B
ARBn (ASM Rebalance Process): Rebalances data extents within an ASM disk group; the possible processes are ARB0-ARB9 and ARBA.
In the ALTER DISKGROUP ... POWER clause, specify a value from 0 to 11, where 0 stops the rebalance operation and 11 permits Oracle ASM to execute the rebalance as fast as possible. If you omit the POWER clause, Oracle ASM executes both automatic and specified rebalance operations at the power determined by the ASM_POWER_LIMIT initialization parameter, which is also the default for the POWER clause.
Beginning with Oracle Database 11g Release 2 (11.2.0.2), if the COMPATIBLE.ASM disk group attribute is set to 11.2.0.2 or higher, then you can specify a value from 0 to 1024 in the POWER clause.
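Two hedged examples of controlling rebalance speed, per operation with the POWER clause or instance-wide with ASM_POWER_LIMIT; the disk group name is hypothetical:
SQL> ALTER DISKGROUP data REBALANCE POWER 8;  -- applies to this rebalance operation only
SQL> ALTER SYSTEM SET asm_power_limit = 8;    -- default for subsequent rebalance operations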
Identify three key features of ASM. (Choose three.)
A. file striping
B. allocation unit mirroring
C. automatic disk rebalancing
D. automatic file size increment
E. automatic undo management
Answer: A, B, C
You have three production databases, HRDB, FINDB, and ORGDB, that use the same ASM instance. At the end of the day, while all three production database instances are running, you execute the following command on the ASM instance:
SQL> shutdown immediate;
What is the result of executing this command?
A. The ASM instance is shut down, but the other instances are still running.
B. It results in an error because other database instances are connected to it.
C. All the instances, including the ASM instance, are shut down in the IMMEDIATE mode.
D. HRDB, FINDB, and ORGDB instances are shut down in the ABORT mode and the ASM instance is shut down in the IMMEDIATE mode.
You are managing an ASM instance. You previously issued the following statements:
ALTER DISKGROUP dg1 DROP DISK disk2;
ALTER DISKGROUP dg1 DROP DISK disk3;
ALTER DISKGROUP dg1 DROP DISK disk5;
You want to cancel the disk drops that are pending for the DG1 disk group.
Which statement should you issue?
A. ALTER DISKGROUP dg1 UNDROP disk2, disk3, disk5;
B. ALTER DISKGROUP dg1 UNDROP;
C. ALTER DISKGROUP dg1 UNDROP DISKS;
D. You cannot cancel the pending disk drops.
Answer: C
Use this clause to cancel the drop of disks from the disk group. You can cancel the pending drop of all the disks in one or more disk groups (by specifying diskgroup_name) or of all the disks in all disk groups (by specifying ALL).
This clause is not relevant for disks that have already been completely dropped from the disk group or for disk groups that have been completely dropped. This clause results in a long-running operation. You can see the status of the operation by querying the V$ASM_OPERATION dynamic performance view.
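A minimal sketch of cancelling the pending drops from the question and watching the resulting rebalance:
SQL> ALTER DISKGROUP dg1 UNDROP DISKS;
SQL> SELECT group_number, operation, state, est_minutes FROM v$asm_operation;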
What is the effect of increasing the value of the ASM_POWER_LIMIT parameter?
A. The number of DBWR processes increases
B. The number of ASMB processes increases
C. The number of DBWR_TO_SLAVES increases
D. The rebalancing operation in an ASM instance completes more quickly, but can result in higher I/O overhead
ASM supports all but which of the following file types? (Choose all that apply.)
A. Database files
C. Redo-log files
D. Archived log files
E. RMAN backup sets
F. Password files
G. init.ora files
Answer: F, G
Reference: "What Types of Files Does Oracle ASM Support?" in the Oracle ASM documentation.
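As a quick illustration, the file types a disk group currently holds can be listed from the ASM instance (a sketch):
SQL> SELECT type, COUNT(*) FROM v$asm_file GROUP BY type;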
After executing the command
ALTER DISKGROUP diskgroup2 DROP DISK dg2a;
You issue the following command from the ASM instance:
SELECT group_number, COUNT(*) FROM v$asm_operation;
What is the implication if the query against V$ASM_OPERATION returns zero rows?
A. The drop disk operation is still proceeding and you cannot yet run the undrop disks operation.
B. The drop disk operation is complete and you can run the undrop disks command if needed.
C. The drop disk operation is complete and you cannot run the undrop disks command.
D. The query will fail since there is not a V$ASM_OPERATION view available in an ASM instance.
E. None of the above is true.
Answer: C
Once the DROP DISK operation has completed, you can no longer run the UNDROP DISKS command.
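A sketch of the monitoring workflow the question implies: rows appear in V$ASM_OPERATION while the rebalance triggered by the drop is running, and the view returns zero rows once the drop has completed:
SQL> ALTER DISKGROUP diskgroup2 DROP DISK dg2a;
SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;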
What is the net effect of the following command?
alter diskgroup dgroup1 drop disk abc;
A. The disk ABC will be dropped from the disk group. Since you did not issue a rebalance command, the data on that disk will be lost.
B. The command will raise an error indicating that you need to rebalance the disk group to remove the data from that disk prior to dropping the disk.
C. The disk group will be automatically rebalanced during the drop operation. Once the rebalancing is complete, the disk will be dropped.
D. This command will fail because you cannot drop a specific disk in an ASM disk group.
E. The disk drop command will be suspended for a predetermined amount of time, waiting for you to also issue an alter diskgroup rebalance command. Once you have issued the rebalance command, ASM will proceed to rebalance the disk group and then drop the disk.
Which of the following is not a configurable attribute for an individual disk group?
DG_DROP_TIME is an invalid DG attribute.
Disk Group Attributes
The DISK_REPAIR_TIME disk group attribute specifies how long a disk remains offline before ASM drops the disk.
The COMPATIBLE.ASM attribute determines the minimum software version for an ASM instance that uses the disk group.
The COMPATIBLE.RDBMS attribute determines the minimum COMPATIBLE database initialization parameter setting for any database instance that uses the disk group.
The AU_SIZE attribute determines the allocation unit size of the disk group. The values can be 1, 2, 4, 8, 16, 32, and 64 MB.
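A minimal sketch of how these attributes might be set when a disk group is created (the disk group name, device paths, and attribute values are illustrative assumptions):
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
       DISK '/devices/diskc1', '/devices/diskc2'
       ATTRIBUTE 'au_size' = '4M',
                 'compatible.asm' = '11.2',
                 'compatible.rdbms' = '11.2';
Attributes of an existing disk group can be changed later with ALTER DISKGROUP ... SET ATTRIBUTE.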
Your organization decided to upgrade the existing Oracle 10g database to Oracle 11g database in a multiprocessor environment.
At the end of the upgrade, you observe that the DBA executes the following script:
What is the significance of executing this script?
A. It performs parallel recompilation of only the stored PL/SQL code.
B. It performs sequential recompilation of only the stored PL/SQL code.
C. It performs parallel recompilation of any stored PL/SQL as well as Java code.
D. It performs sequential recompilation of any stored PL/SQL as well as Java code.
Recompile invalid objects with utlrp.sql
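For reference, the script is normally run from SQL*Plus as SYSDBA (the "?" is expanded to the Oracle home directory), and the result can be verified afterwards by counting invalid objects:
SQL> @?/rdbms/admin/utlrp.sql
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';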
You are maintaining the SALES database. You have added a new disk to a disk group. Automatic Storage Management performs the rebalancing activity. You want to speed up the rebalancing activity.
Which parameter should you specify to control the speed of the rebalancing activity?
The ASM_POWER_LIMIT initialization parameter controls the speed of a rebalance operation.
What are the recommendations for Oracle Database 11g installation to make it Optimal Flexible Architecture (OFA)-compliant? (Choose all that apply.)
A. ORACLE_BASE should be set explicitly.
B. An Oracle base should have only one Oracle home created in it.
C. Flash recovery area and data file location should be on separate disks.
D. Flash recovery area and data file location should be created under Oracle base in a non-Automatic Storage Management (ASM) setup.
Answer: A, C, D
In your database, the LDAP_DIRECTORY_SYSAUTH initialization parameter has been set to YES and the users who need to access the database as DBAs have been granted SYSDBA enterprise role in Oracle Internet Directory (OID). SSL and the password file have been configured. A user SCOTT with the SYSDBA privilege tries to connect to the database instance from a remote machine using the command:
$ SQLPLUS scott/tiger@DB01 AS SYSDBA
where DB01 is the net service name.
Which authentication method would be used first?
A. authentication by password file
B. authentication by using certificates over SSL
C. authentication by using the Oracle Internet Directory
D. authentication by using the local OS of the database server
You are managing an Oracle Database 11g database with ASM storage. The database has bigfile tablespaces. You want files to open faster and less memory to be used in the shared pool to manage the extent maps.
What configuration would you effect to achieve your objective? (Choose all that apply.)
A. Set the ASM compatibility attribute for the ASM disk group to 11.1.0.
B. Set the RDBMS compatibility attribute for the ASM disk group to 11.1.0.
C. Set the COMPATIBLE initialization parameter for the ASM instance to 11.1.0.
D. Set the COMPATIBLE initialization parameter for the database instance to 11.1.0.
Answer: A, D
Which two statements are true regarding an Automatic Storage Management (ASM) instance? (Choose two.)
A. An ASM instance mounts an ASM control file
B. An ASM instance uses the ASMB process for rebalancing of disks within a disk group
C. Automatic Memory Management is enabled in an ASM instance even when the MEMORY_TARGET parameter is not set explicitly
D. An RDBMS instance gets connected to an ASM instance using ASMB as a foreground process when the database instance is started
Answer: C, D
Users are connected to a database instance that is using Automatic Storage Management (ASM). The DBA executes the command as follows to shut down the ASM instance:
SQL> SHUTDOWN IMMEDIATE;
What happens to the database instance?
A. It shuts down along with the ASM instance.
B. It is aborted and the ASM instance shuts down normally.
C. It stays open and SHUTDOWN command for the ASM instance fails.
D. It shuts down only after all pending transactions are completed and the ASM instance waits for this before shutting down.
Answer: C
IMMEDIATE or TRANSACTIONAL Clause (from the Oracle ASM documentation):
Oracle ASM waits for any in-progress SQL to complete before performing an orderly dismount of all of the disk groups and shutting down the Oracle ASM instance. Oracle ASM does not wait for users currently connected to the instance to disconnect. If any database instances are connected to the Oracle ASM instance, then the SHUTDOWN command returns an error and leaves the Oracle ASM instance running. Because the Oracle ASM instance does not contain any transactions, the TRANSACTIONAL mode behaves the same as IMMEDIATE mode.
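A typical sequence, therefore, is to shut down the dependent database instances first and the ASM instance last (connection syntax shown is only a sketch):
$ sqlplus / as sysdba        (on each database instance)
SQL> SHUTDOWN IMMEDIATE;
$ sqlplus / as sysasm        (on the ASM instance)
SQL> SHUTDOWN IMMEDIATE;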
Examine the following ALTER command;
SQL> ALTER DISKGROUP dgroup1 UNDROP DISKS;
What is the purpose of the command?
A. It cancels all pending disk drops within the disk group.
B. It adds previously dropped disks back into the disk group.
C. It restores disks that are being dropped as the result of a DROP DISKGROUP operation.
D. It mounts disks in the disk group for which the drop-disk operation has already been completed.
E. It restores all the dropped disks in the disk group for which the drop-disk operation has already been completed.
Answer: A
The key point is PENDING.
A database instance is using an Automatic Storage Management (ASM) instance, which has a disk group, DGROUP1, created as follows:
SQL> CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
FAILGROUP controller1 DISK '/devices/diska1', '/devices/diska2'
FAILGROUP controller2 DISK '/devices/diskb1', '/devices/diskb2';
What happens when the whole CONTROLLER1 Failure group is damaged?
A. The transactions that use the disk group will halt.
B. The mirroring of allocation units occurs within the CONTROLLER2 failure group.
C. The data in the CONTROLLER1 failure group is shifted to the CONTROLLER2 failure group and implicit rebalancing is triggered.
D. The ASM does not mirror any data and newly allocated primary allocation units (AU) are stored in the CONTROLLER2 failure group.
Your database instance is running. You are not able to access Oracle Enterprise Manager Database Control because the listener is not started.
Which tool or utility would you use to start the listener?
A. Oracle Net Manager
B. Listener Control utility
C. Database Configuration Assistant
D. Oracle Net Configuration Assistant
View the Exhibit and examine the disk groups created at the time of migrating the database storage to Automatic Storage Management (ASM).
Why does the FRA disk group initially have more free space even though both DATA and FRA disk groups are provided with the same size?
A. Because the FRA disk group will not support dynamic rebalancing
B. Because the FRA disk group is not configured to support mirroring
C. Because disks in the FRA disk group are not formatted at this stage
D. Because the FRA disk group will support only a single size of allocation unit
What are three benefits of using ASM? (Choose three.)
A. Ease of disk administration and maintenance
B. Load balancing across physical disks
C. Software RAID-1 data redundancy with double or triple mirrors
D. Automatic recovery of failed disks
Answer: A, B, C
What components are present in an ASM instance? (Choose three.)
B. Database processes
C. Database datafiles
D. Control files
E. Database parameter file or SPFILE
Answer: A, B, E
Which of the following is a benefit of ASM fast disk resync?
A. Failed disks are taken offline immediately but are not dropped.
B. Disk data is never lost.
C. By default, the failed disk is not dropped from the disk group ever, protecting you from loss of that disk.
D. The failed disk is automatically reformatted and then resynchronized to speed up the recovery process.
E. Hot spare disks are automatically configured and added to the disk group.
ASM Fast Mirror Resync
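A sketch of the two pieces involved: the DISK_REPAIR_TIME attribute that keeps a failed disk offline without dropping it, and the ONLINE DISK clause used once the disk is repaired (disk group and disk names are assumptions):
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8h';
SQL> ALTER DISKGROUP data ONLINE DISK data_0002;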
What is the result of increasing the value of the parameter ASM_POWER_LIMIT during a rebalance operation?
A. The ASM rebalance operation will likely consume fewer resources and complete in a shorter amount of time.
B. The ASM rebalance operation will consume fewer resources and complete in a longer amount of time.
C. The ASM rebalance operation will be parallelized and should complete in a shorter amount of time.
D. There is no ASM_POWER_LIMIT setting used in ASM.
E. None of the above
What is the default AU size of an ASM disk group? What is the maximum AU size in an ASM disk group?
A. 100KB default, 10TB maximum
B. 256KB default, 1024MB maximum
C. 10MB default, 126PB maximum
D. 64KB default, 1EB maximum
E. 1MB default, 64MB maximum
Answer: E
The AU size is determined at creation time with the allocation unit size (AU_SIZE) disk group attribute. The values can be 1, 2, 4, 8, 16, 32, and 64 MB.
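The configured value can be checked per disk group, for example:
SQL> SELECT name, allocation_unit_size FROM V$ASM_DISKGROUP;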
Which initialization parameter in an ASM instance specifies the disk groups to be automatically mounted at instance startup?
When you run the STARTUP command, this command attempts to mount the disk groups specified by the initialization parameter ASM_DISKGROUPS. If you have not entered a value for ASM_DISKGROUPS, then the ASM instance starts and Oracle displays an error that no disk groups were mounted. You can then mount disk groups with the ALTER DISKGROUP...MOUNT command.
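As an illustration (the disk group names DATA and FRA are assumed from the earlier exhibit), the parameter can be persisted in the SPFILE and a disk group mounted manually:
SQL> ALTER SYSTEM SET ASM_DISKGROUPS = 'DATA', 'FRA' SCOPE=SPFILE;
SQL> ALTER DISKGROUP fra MOUNT;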
When an ASM instance receives a SHUTDOWN NORMAL command, what command does it pass on to all database instances that rely on the ASM instance's disk groups?
When starting up your ASM instance, you receive the following error:
SQL> startup pfile=$ORACLE_HOME/dbs/init+ASM.ora
ASM instance started
Total System Global Area 104611840 bytes
Fixed Size 1298220 bytes
Variable Size 78147796 bytes
ASM Cache 25165824 bytes
ORA-15032: not all alterations performed
ORA-15063: ASM discovered an insufficient number of disks for diskgroup “DGROUP3”
ORA-15063: ASM discovered an insufficient number of disks for diskgroup “DGROUP2”
ORA-15063: ASM discovered an insufficient number of disks for diskgroup “DGROUP1”
In trying to determine the cause of the problem, you issue this query:
SQL> show parameter asm
What is the cause of the error?
A. The ASM_DISKGROUPS parameter is configured for three disk groups: DGROUP1, DGROUP2, and DGROUP3.
The underlying disks for these disk groups have apparently been lost.
B. The format of the ASM_DISKGROUPS parameter is incorrect. It should reference the disk group numbers, not the names of the disk groups
C. The ASM_POWER_LIMIT parameter is incorrectly set to 1. It should be set to the number of disk groups being attached to the ASM instance.
D. The ASM_DISKSTRING parameter is not set; therefore disk discovery is not possible.
E. There is insufficient information to solve this problem.
ASM_DISKSTRING specifies an operating system-dependent value used by Automatic Storage Management to limit the set of disks considered for discovery. When a new disk is added to a disk group, each Automatic Storage Management instance that has the disk group mounted must be able to discover the new disk using the value of ASM_DISKSTRING.
In most cases, the default value will be sufficient. Using a more restrictive value may reduce the time required for Automatic Storage Management to perform discovery, and thus improve disk group mount time or the time for adding a disk to a disk group. A "?" at the beginning of the string gets expanded to the Oracle home directory. Depending on the operating system, wildcard characters can be used. It may be necessary to dynamically change ASM_DISKSTRING before adding a disk so that the new disk will be discovered.
An attempt to dynamically modify ASM_DISKSTRING will be rejected and the old value retained if the new value cannot be used to discover a disk that is in a disk group that is already mounted.
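For illustration, the discovery string could be restricted to a specific device path pattern (the path shown is an assumption about the environment):
SQL> ALTER SYSTEM SET ASM_DISKSTRING = '/devices/disk*';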
As the DBA, you have decided that you need to build some redundancy into your database. Using ASM, you want to create a disk group that will provide the greatest amount of redundancy for your ASM data (you do not have advanced SAN mirroring technology available to you, unfortunately).
Which of the following commands would create a disk group that would offer the maximum in data redundancy?
No SAN mirroring available means no external redundancy available.
The highest redundancy of ASM is the HIGH redundancy with 3 mirror copies.
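A minimal sketch of such a disk group, using three failure groups with illustrative device paths:
SQL> CREATE DISKGROUP dg_high HIGH REDUNDANCY
       FAILGROUP fg1 DISK '/devices/diska1'
       FAILGROUP fg2 DISK '/devices/diskb1'
       FAILGROUP fg3 DISK '/devices/diskc1';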
The Venice Biennale (Italian: La Biennale di Venezia [la bi.enˈnaːle di veˈnɛttsja]; in English also called the "Venice Biennial") refers to an arts organization based in Venice and the name of the original and principal biennial exhibition the organization presents. The organization changed its name to the Biennale Foundation in 2009, while the exhibition is now called the Art Biennale to distinguish it from the organisation and other exhibitions the Foundation organizes.
Biennale di Venezia
Genre: biennale; focuses on contemporary art, and also includes events for art, contemporary dance, architecture, cinema and theatre
Frequency: biennial, every two years
Founder: Venetian City Council
The Art Biennale, a contemporary visual art exhibition and so called because it is held biennially (in odd-numbered years), is the original biennale on which others in the world have been modeled. The Biennale Foundation has a continuous existence supporting the arts as well as organizing the following separate events:
|Common name||Formal name||Founded||Frequency|
|Art Biennale||International Art Exhibition||1895||Odd-numbered years|
|Venice Biennale of Architecture||International Architecture Exhibition||1980||Even-numbered years (since 2000)|
|Biennale Musica||International Festival of Contemporary Music||1930||Annually (Sep/Oct)|
|Biennale Teatro||International Theatre Festival||1934||Annually (Jul/Aug)|
|Venice Film Festival||Venice International Film Festival||1934||Annually (Aug/Sep)|
|Dance Biennale||International Festival of Contemporary Dance||2004||Annually (June; biennially 2010–16)|
|International Kids' Carnival||2009||Annually (during Carnevale)|
On April 19, 1893 the Venetian City Council passed a resolution to set up a biennial exhibition of Italian Art ("Esposizione biennale artistica nazionale") to celebrate the silver anniversary of King Umberto I and Margherita of Savoy.
A year later, the council decreed "to adopt a 'by invitation' system; to reserve a section of the Exhibition for foreign artists too; to admit works by uninvited Italian artists, as selected by a jury."
The first Biennale, "I Esposizione Internazionale d'Arte della Città di Venezia (1st International Art Exhibition of the City of Venice)" (although originally scheduled for April 22, 1894) was opened on April 30, 1895 by the Italian King and Queen, Umberto I and Margherita di Savoia. The first exhibition was seen by 224,000 visitors.
The event became increasingly international in the first decades of the 20th century: from 1907 on, several countries installed national pavilions at the exhibition, the first being Belgium's. In 1910 the first internationally well-known artists were displayed: a room dedicated to Gustav Klimt, a one-man show for Renoir, and a retrospective of Courbet. A work by Picasso was removed from the Spanish salon in the central Palazzo because it was feared that its novelty might shock the public. By 1914 seven pavilions had been established: Belgium (1907), Hungary (1909), Germany (1909), Great Britain (1909), France (1912), and Russia (1914).
During World War I, the 1916 and 1918 events were cancelled. In 1920 the post of mayor of Venice and president of the Biennale was split. The new secretary general, Vittorio Pica brought about the first presence of avant-garde art, notably Impressionists and Post-Impressionists.
1922 saw an exhibition of sculpture by African artists. Between the two World Wars, many important modern artists had their work exhibited there. In 1928 the Istituto Storico d'Arte Contemporanea (Historical Institute of Contemporary Art) opened, which was the first nucleus of archival collections of the Biennale. In 1930 its name was changed into Historical Archive of Contemporary Art.
In 1930, the Biennale was transformed into an Ente Autonomo (Autonomous Board) by Royal Decree with law no. 33 of 13-1-1930. Subsequently, the control of the Biennale passed from the Venice city council to the national Fascist government under Benito Mussolini. This brought on a restructuring, an associated financial boost, as well as a new president, Count Giuseppe Volpi di Misurata. Three entirely new events were established: the Biennale Musica in 1930, also referred to as the International Festival of Contemporary Music; the Venice Film Festival in 1932, which the Biennale claims as the first film festival in history, also referred to as the Venice International Film Festival; and the Biennale Teatro in 1934, also referred to as the International Theatre Festival.
In 1933 the Biennale organised an exhibition of Italian art abroad. From 1938, Grand Prizes were awarded in the art exhibition section.
During World War II, the activities of the Biennale were interrupted: 1942 saw the last edition of the events. The Film Festival restarted in 1946, the Music and Theatre festivals were resumed in 1947, and the Art Exhibition in 1948.
The Art Biennale was resumed in 1948 with a major exhibition of a recapitulatory nature. The Secretary General, art historian Rodolfo Pallucchini, started with the Impressionists and many protagonists of contemporary art including Chagall, Klee, Braque, Delvaux, Ensor, and Magritte, as well as a retrospective of Picasso's work. Peggy Guggenheim was invited to exhibit her collection, later to be permanently housed at Ca' Venier dei Leoni.
1949 saw the beginning of renewed attention to avant-garde movements in European—and later worldwide—movements in contemporary art. Abstract expressionism was introduced in the 1950s, and the Biennale is credited with importing Pop Art into the canon of art history by awarding the top prize to Robert Rauschenberg in 1964. From 1948 to 1972, Italian architect Carlo Scarpa did a series of remarkable interventions in the Biennale's exhibition spaces.
In 1954 the island San Giorgio Maggiore provided the venue for the first Japanese Noh theatre shows in Europe. 1956 saw the selection of films following an artistic selection and no longer based upon the designation of the participating country. The 1957 Golden Lion went to Satyajit Ray's Aparajito which introduced Indian cinema to the West.
1962 included Arte Informale at the Art Exhibition with Jean Fautrier, Hans Hartung, Emilio Vedova, and Pietro Consagra. The 1964 Art Exhibition introduced continental Europe to Pop Art (The Independent Group had been founded in Britain in 1952). The American Robert Rauschenberg was the first American artist to win the Gran Premio, and the youngest to date.
The student protests of 1968 also marked a crisis for the Biennale, hindering the opening of that year's edition. A resulting period of institutional change followed, ending with a new statute in 1973. In 1969, following the protests, the Grand Prizes were abandoned. These resumed in 1980 for the Mostra del Cinema and in 1986 for the Art Exhibition.
In 1972, for the first time a theme was adopted by the Biennale, called "Opera o comportamento" ("Work or Behaviour").
Starting from 1973 the Music Festival was no longer held annually. During the year in which the Mostra del Cinema was not held, there was a series of "Giornate del cinema italiano" (Days of Italian Cinema) promoted by sectorial bodies in campo Santa Margherita, in Venice.
1974 saw the start of the four-year presidency of Carlo Ripa di Meana. The International Art Exhibition was not held (until it was resumed in 1976). Theatre and cinema events were held in October 1974 and 1975 under the title Libertà per il Cile (Freedom for Chile) – a major cultural protest against the dictatorship of Augusto Pinochet.
On 15 November 1977, the so-called Dissident Biennale (in reference to the dissident movement in the USSR) opened. Because of the ensuing controversies within the Italian left wing parties, president Ripa di Meana resigned at the end of the year.
In 1979 the new presidency of Giuseppe Galasso (1979-1982) began. The principle was laid down whereby each of the artistic sectors was to have a permanent director to organise its activity.
In 1980 the Architecture section of the Biennale was set up. The director, Paolo Portoghesi, opened the Corderie dell'Arsenale to the public for the first time. At the Mostra del Cinema, the awards were brought back into being (between 1969 and 1979, the editions were non-competitive). In 1980, Achille Bonito Oliva and Harald Szeemann introduced "Aperto", a section of the exhibition designed to explore emerging art. Italian art historian Giovanni Carandente directed the 1988 and 1990 editions. A three-year gap was left afterwards to make sure that the 1995 edition would coincide with the 100th anniversary of the Biennale.
The 1993 edition was directed by Achille Bonito Oliva. In 1995, Jean Clair was appointed to be the Biennale's first non-Italian director of visual arts while Germano Celant served as director in 1997.
For the Centenary in 1995, the Biennale promoted events in every sector of its activity: the 34th Festival del Teatro, the 46th art exhibition, the 46th Festival di Musica, the 52nd Mostra del Cinema.
In 1999 and 2001, Harald Szeemann directed two editions in a row (48th and 49th), bringing in a larger representation of artists from Asia and Eastern Europe, including more young artists than usual, and expanding the show into several newly restored spaces of the Arsenale.
In 1999 a new sector was created for live shows: DMT (Dance Music Theatre).
The 51st edition of the Biennale opened in June 2005, curated for the first time by two women, María de Corral and Rosa Martinez. De Corral organized "The Experience of Art", which included 41 artists, from past masters to younger figures. Rosa Martinez took over the Arsenale with "Always a Little Further." Drawing on "the myth of the romantic traveler", her exhibition involved 49 artists, ranging from the elegant to the profane. In 2007, Robert Storr became the first director from the United States to curate the Biennale (the 52nd), with a show entitled Think with the Senses – Feel with the Mind. Art in the Present Tense. Swedish curator Daniel Birnbaum was artistic director of the 2009 edition, followed by the Swiss Bice Curiger in 2011.
The biennale in 2013 was curated by the Italian Massimiliano Gioni. His title and theme, Il Palazzo Enciclopedico / The Encyclopedic Palace, was adopted from an architectural model by the self-taught Italian-American artist Marino Auriti. Auriti's work, The Encyclopedic Palace of the World was lent by the American Folk Art Museum and exhibited in the first room of the Arsenale for the duration of the biennale. For Gioni, Auriti's work, "meant to house all worldly knowledge, bringing together the greatest discoveries of the human race, from the wheel to the satellite," provided an analogous figure for the "biennale model itself...based on the impossible desire to concentrate the infinite worlds of contemporary art in a single place: a task that now seems as dizzyingly absurd as Auriti's dream."
Curator Okwui Enwezor was responsible for the 2015 edition. He was the first African-born curator of the biennial. As a catalyst for imagining different ways of envisioning multiple desires and futures, Enwezor commissioned special projects and programs throughout the Biennale in the Giardini. These included a Creative Time Summit, e-flux journal's SUPERCOMMUNITY, Gulf Labor Coalition, The Invisible Borders Trans-African Project and Abounaddara.
- 1948 – Rodolfo Pallucchini
- 1950 – Rodolfo Pallucchini
- 1952 – Rodolfo Pallucchini
- 1954 – Rodolfo Pallucchini
- 1956 – Rodolfo Pallucchini
- 1958 – Gian Alberto Dell'Acqua
- 1960 – Gian Alberto Dell'Acqua
- 1962 – Gian Alberto Dell'Acqua
- 1964 – Gian Alberto Dell'Acqua
- 1966 – Gian Alberto Dell'Acqua
- 1968 – Maurizio Calvesi and Guido Ballo
- 1970 – Umbro Apollonio
- 1972 – Mario Penelope
- 1974 – Vittorio Gregotti
- 1976 – Vittorio Gregotti
- 1978 – Luigi Scarpa
- 1980 – Luigi Carluccio
- 1982 – Sisto Dalla Palma
- 1984 – Maurizio Calvesi
- 1986 – Maurizio Calvesi
- 1988 – Giovanni Carandente
- 1990 – Giovanni Carandente
- 1993 – Achille Bonito Oliva
- 1995 – Jean Clair
- 1997 – Germano Celant
- 1999 – Harald Szeemann
- 2001 – Harald Szeemann
- 2003 – Francesco Bonami
- 2005 – María de Corral and Rosa Martinez
- 2007 – Robert Storr
- 2009 – Daniel Birnbaum
- 2011 – Bice Curiger
- 2013 – Massimiliano Gioni
- 2015 – Okwui Enwezor
- 2017 – Christine Macel
- 2019 – Ralph Rugoff
Role in the art market
When the Venice Biennale was founded in 1895, one of its main goals was to establish a new market for contemporary art. Between 1942 and 1968 a sales office assisted artists in finding clients and selling their work, a service for which it charged 10% commission. Sales remained an intrinsic part of the biennale until 1968, when a sales ban was enacted. An important practical reason why the focus on non-commodities has failed to decouple Venice from the market is that the biennale itself lacks the funds to produce, ship and install these large-scale works. Therefore, the financial involvement of dealers is widely regarded as indispensable; as they regularly front the funding for production of ambitious projects. Furthermore, every other year the Venice Biennale coincides with nearby Art Basel, the world's prime commercial fair for modern and contemporary art. Numerous galleries with artists on show in Venice usually bring work by the same artists to Basel.
Central Pavilion and Arsenale
The formal Biennale is based at a park, the Giardini. The Giardini includes a large exhibition hall that houses a themed exhibition curated by the Biennale's director.
Initiated in 1980, the Aperto began as a fringe event for younger artists and artists of a national origin not represented by the permanent national pavilions. This is usually staged in the Arsenale and has become part of the formal biennale programme. In 1995 there was no Aperto so a number of participating countries hired venues to show exhibitions of emerging artists. From 1999, both the international exhibition and the Aperto were held as one exhibition, held both at the Central Pavilion and the Arsenale. Also in 1999, a $1 million renovation transformed the Arsenale area into a cluster of renovated shipyards, sheds and warehouses, more than doubling the Arsenale's exhibition space of previous years.
A special edition of the 54th Biennale was held at Padiglione Italia of Torino Esposizioni – Sala Nervi (December 2011 – February 2012) for the 150th Anniversary of Italian Unification. The event was directed by Vittorio Sgarbi.
The Giardini houses 30 permanent national pavilions. Alongside the Central Pavilion, built in 1894 and later restructured and extended several times, the Giardini are occupied by a further 29 pavilions built at different periods by the various countries participating in the Biennale. The pavilions are the property of the individual countries and are managed by their ministries of culture.
Countries not owning a pavilion in the Giardini are exhibited in other venues across Venice. The number of countries represented is still growing. In 2005, China was showing for the first time, followed by the African Pavilion and Mexico (2007), the United Arab Emirates (2009), and India (2011).
The assignment of the permanent pavilions was largely dictated by the international politics of the 1930s and the Cold War. There is no single format to how each country manages their pavilion, established and emerging countries represented at the biennial maintain and fund their pavilions in different ways. While pavilions are usually government-funded, private money plays an increasingly large role; in 2015, the pavilions of Iraq, Ukraine and Syria were completely privately funded. The pavilion for Great Britain is always managed by the British Council while the United States assigns the responsibility to a public gallery chosen by the Department of State which, since 1985, has been the Peggy Guggenheim Collection. The countries at the Arsenale that request a temporary exhibition space pay a hire fee per square meter.
In 2011, the countries were Albania, Andorra, Argentina, Australia, Austria, Bangladesh, Belarus, Belgium, Brazil, Bulgaria, Canada, Chile, China, Congo, Costa Rica, Croatia, Cuba, Cyprus, Czech and Slovak Republics, Denmark, Egypt, Estonia, Finland, France, Georgia, Germany, Greece, Haiti, Hungary, Iceland, India, Iran, Iraq, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Macedonia, Mexico, Moldova, Montenegro, Netherlands, New Zealand, Norway, Poland, Portugal, Romania, Russia, San Marino, Saudi Arabia, Serbia, Singapore, Slovenia, South Africa, Spain, Sweden, Switzerland, Syrian Arab Republic, Taiwan, Thailand, Turkey, Ukraine, United Arab Emirates, United Kingdom, United States of America, Uruguay, Venezuela, Wales and Zimbabwe. In addition to this there are two collective pavilions: Central Asia Pavilion and Istituto Italo-Latino Americano. In 2013, eleven new participant countries developed national pavilions for the Biennale: Angola, Bosnia and Herzegovina, the Bahamas, Bahrain, the Ivory Coast, Kosovo, Kuwait, the Maldives, Paraguay, Tuvalu, and the Holy See. In 2015, five new participant countries developed pavilions for the Biennale: Grenada, the Republic of Mozambique, the Republic of Seychelles, Mauritius and Mongolia. In 2017, three countries participated in the Art Biennale for the first time: Antigua & Barbuda, Kiribati, and Nigeria. In 2019, four countries participated in the Art Biennale for the first time: Ghana, Madagascar, Malaysia, and Pakistan.
As well as the national pavilions there are countless "unofficial pavilions" that spring up every year. In 2009 there were pavilions such as the Gabon Pavilion and a Peckham pavilion. Upcoming artists in new media showed work in an Internet Pavilion in 2011.
The Venice Biennale has awarded prizes to the artists participating at the Exhibition since the first edition back in 1895. Grand Prizes were established in 1938 and ran until 1968 when they were abolished due to the protest movement. Prizes were taken up again in 1986. The selections are made by the Board of la Biennale di Venezia, following the proposal of the curator of the International Exhibition.
Also upon the recommendation of the curator, the Biennale names the five members of its international jury, which is charged with awarding prizes to the national pavilions. The international jury awards the Golden Lion for best national participation, the Golden Lion for best participant in the international exhibition, and the Silver Lion for a “promising young participant” in the show. It may also designate one special mention to national participants, and a maximum of two special mentions to artists in the international exhibition.
On 26 July 1973, the Parliament approved the Organisation's new statute for the Biennale. A "democratic" Board was set up. It included 19 members made up of representatives from the Government, the most important local organisations, major trade unions, and a representative of the staff. The Board was to elect the President and nominate the Sectorial Directors – one each for Visual arts, Cinema, Music, and Theatre.
In 1998 the Biennale was transformed into a legal personality in private law and renamed "Società di Cultura La Biennale di Venezia". The company structure – Board of directors, Scientific committee, Board of auditors and assembly of private backers – has a duration of four years. The areas of activity became six (Architecture, Visual arts, Cinema, Theatre, Music, Dance), in collaboration with the ASAC (the Historical Archives). The President is nominated by the Minister for Cultural Affairs. The Board of directors consists of the President, the Mayor of Venice, and three members nominated respectively by the Regione Veneto, the Consiglio Provinciale di Venezia and private backers. Dance was added to the others.
On 15 January 2004, the Biennale was transformed into a foundation.
For the 2013 edition, the main exhibition's budget was about $2.3 million; in addition, more than $2 million were raised mostly from private individuals and foundations and philanthropists. In 2015, the budget for the international exhibition was at 13 million euros (about $14.2 million).