SLUG: 7-37214 Dateline: African-Americans in Space DATE=February 24, 2003 TITLE=African-Americans in Space DISK: DATELINE THEME [PLAYED IN STUDIO, FADED UNDER DATELINE HOST VOICE OR PROGRAMMING MATERIAL] HOST: The tragic death of African-American astronaut Michael P. Anderson aboard the shuttle Columbia during Black History Month made the nation pause and think of those other African-Americans who were chosen to fly in space. More now in this special Dateline report written by Marsha James. Here's Neal Lavon. NL: Becoming an astronaut and accepting the responsibility of blasting off into space requires dedication, stamina, bravery, and mental toughness. Dr. Guion [GEE-yon] Bluford, Jr. was one of those astronauts who accepted the challenge. In 1979, he was the first African-American astronaut chosen by the National Aeronautics and Space Administration, NASA. TAPE: CUT 1, BLUFORD JR, :15 "In (19)77 NASA started looking for astronauts for the space program for the new vehicle called the shuttle and so I applied and I was fortunate to be selected in 1978." NL: While he was training to become a mission specialist, Dr. Bluford says he noticed that NASA was beginning to seek out astronauts who "looked like America." TAPE: CUT 2, BLUFORD, :20 "There were 35 people in our class; there were 15 test pilots and 20 mission specialists who were scientists and engineers. I was one of those mission specialists and I came in with Sally Ride and a bunch of other people, including three African-Americans: myself, Ronald McNair, and Frederick Gregory." SOUND: SNEAK SOUND OF ENDEAVOR, :09, FADE OUT UNDER: NL: On August 30, 1983, the STS-8 mission, aboard the shuttle Challenger, launched from the Kennedy Space Center in Florida. Aboard was America's first black astronaut, Dr. Guion Bluford. After working on several experiments and space station operations, he realized that he had the honor of being the first African-American in space. TAPE: CUT 3, BLUFORD, :12 "I was excited about the opportunity to fly into space. I think everybody that comes into the program is anxious to fly in space. I was anxious like everybody else. I was very pleased to get selected to fly in space." NL: Dr. Bluford realized he was not just fulfilling a mission, but making history. TAPE: CUT 4, BLUFORD, :13 "I also recognized later on that it was a historical event and because of that my biggest concern was to really set the example, do really a good job in representing African-Americans as I flew in space." NL: Dr. Bluford became a veteran of four space flights, logging over 688 hours in space. His success blazed the trail for others. On September 12, 1992, Dr. Mae C. Jemison became the first African-American woman to orbit the Earth as a crew member of the space shuttle Endeavour. She remembers clearly how her journey into space began: with a phone call. TAPE: CUT 5, JEMISON, :05 "I got a call saying are you still interested and I said 'yea'." NL: She joined the astronaut program in June 1987, and although she brought the knowledge she gained as a chemical engineer and former Peace Corps medical officer in West Africa, she learned a lot as well. TAPE: CUT 6, JEMISON, :56 "My task while I was with NASA was not to immediately start training for space flight, because it takes a while before you are assigned to a mission, but I did things like help to support the launch of vehicles at Kennedy Space Center. I was in the first class of astronauts selected after the Challenger accident back in 1986, and the very first assignment I had was working at Kennedy Space Center. 
I saw the launch and in fact actually worked the launch of the first flight after the Challenger accident. I worked at the shuttle avionics integration laboratory, which is where all the software that flies the space shuttle is tested. So, you have lots of jobs that you do to support human space flight, and that's of other crews; that's of making sure you are looking at the vehicles, following the vehicles. I actually followed Columbia while I was at Kennedy Space Center, back in the late 1980s and early 1990s, and that is a part that is very important to the training process." NL: During her flight as a science mission specialist, she conducted experiments to study the effects of zero gravity on people and animals, and co-investigated an experiment into bone cell research. Now as a former astronaut, Dr. Jemison continues to conduct experiments in life and material sciences. TAPE: CUT 7, JEMISON :57 "Since I left NASA in 1993, I've been involved with the Jemison Group, which is a technology and consulting company, and we look specifically at how you address socio-cultural, political, environmental, and economic issues when you design technology. My focus right now is on a new start-up company that I call BioSentient Corporation and it is really about putting together equipment that people can wear that collects physiological data, and you barely know that you have the equipment on. Why do you need to do that? Because it is important to understand what's going on with people. As they're walking around day-to-day we will be able to collect that information and understand what's happening to them. And the other emphasis of BioSentient is teaching people how to regulate their own physiologic responses to things, whether it's in terms of headache, in terms of stress, anxiety reactions, things like that. We look at physiological awareness and self-regulation." NL: Having an understanding of science and technology has always been a part of Dr. Jemison's life. She says African-Americans have made great contributions to the science and technology field. TAPE: CUT 8, JEMISON :56 "Science and technology, how we understand the world and the tools we create, has always been a part of every society. And what we have to do now is make sure that people are not fooled into thinking, 'oh, it's just the purview of someone else.' The most amazing thing to me about technological advancement and design is that African-Americans have continued to be right in the middle and the heart of the center of U-S technological development since we came over. Even as slaves, we're part of it. The real McCoy was built by Elijah McCoy for a certain type of mechanism on hydrogen. It's very amazing we've been here the whole time and we will continue to be there. We're there right now. Silicon Graphics? People always talk about Silicon Graphics and the amazing morphing technology and things that come in the movie. It was invented by a black graduate student, Marc Hannah, at Stanford University. I can go on and on. That's the reality. We've been there and we will continue to be there." MUSIC: SOMBRE MUSIC SNEAK, ESTAB, HOLD UNDER: NL: The most recent mission, the mission of the shuttle Columbia, was dedicated to science and research experiments. However, as we all know, it ended in tragedy. President Bush spoke of the loss. TAPE: CUT 9, BUSH :17 "The same Creator who names the stars also knows the names of the seven souls we mourn today. 
The crew of the shuttle Columbia did not return safely to Earth; yet we can pray that all are safely home." NL: Payload Commander Michael Anderson was in charge of the science experiments on the shuttle. In what was the astronaut's last national interview, Anderson spoke about the expanding horizons for African-Americans in space. TAPE: CUT 10, ANDERSON :33 "I see the future as being pretty bright. Bob (Robert) Curbeam will be flying in just a few months. Then of course later in the year we will have Joan Higginbotham flying, and early next year you will see Stephanie Wilson flying. So it looks like the future is really bright." NL: Commander Anderson spoke of one experiment in particular that he believed would someday help African-American communities. TAPE: CUT 11, ANDERSON :16 "We also have some research that I think will be really beneficial to the African-American community on board this flight. We have a bio-reactor, which is growing prostate cancer cells. As you know, prostate cancer has a high rate of occurrence in African-American males, so some of the research we are doing up here can really help out in those areas." SNEAK: MUSIC UP, UNDER AND OUT: NL: At the nationally televised memorial service at the Johnson Space Center in Houston earlier this month, Chief of the Astronaut Corps Captain Kent V. Rominger remembered Michael Anderson: not Michael Anderson the payload commander, but Michael Anderson, the man. SOUND: CUT 12, ROMINGER, :20 "Mike, he was a perfect choice for the payload commander. Organized, thorough, someone you could absolutely count on, a gifted leader. He was the quiet type unless you asked him about his family or his Porsche. And perhaps because he was quiet, we all loved to see him laugh. And when he laughed, we laughed with him all the harder, and he knew just when to drop a great punch line." MUSIC: SNEAK UP SOLEMN MUSIC, ESTAB AND HOLD UNDER: NL: Sadly, we will never know the results of the experiments Commander Michael Anderson and his crew members conducted aboard the space shuttle Columbia. But as we celebrate Black History, we celebrate Michael Anderson and the contributions made by him and all the African-American astronauts to the U.S. space program. Both their journey and the journey of America into the heavens will go on. This edition of Dateline was written by Marsha James. We close with Patti LaBelle at the memorial service for the astronauts at Washington's National Cathedral, singing a tribute to their memory, "Way Up There." I'm Neal Lavon in Washington. MUSIC: WAY UP THERE, PATTI LABELLE, 4:53.
Lynnewood Hall is a spectacular Neoclassical Revival masterpiece and is considered one of the greatest surviving Gilded Age mansions in the United States. The stately home was once one of the finest homes in Pennsylvania, but due to its complex and sad history, the magnificent house now sits shuttered and in a state of disrepair. Lynnewood Hall was built between 1898 and 1900 for streetcar tycoon, prolific art collector, and an investor in the ill-fated Titanic, Peter Arrell Browne Widener. After construction, the mansion stood on a large 480-acre estate in Elkins Park, Pennsylvania. Born on November 13, 1834, to German immigrant parents, Peter Widener was raised in Philadelphia, Pennsylvania. As a teenager, he began a butcher apprenticeship and later opened a shop in a local market. The market shop became a social gathering place, and within a short time, Widener was elected Republican leader of the 20th Ward. During the Civil War, the position enabled him to obtain a contract to supply mutton to the Union Army within 10 miles of Philadelphia. Widener earned $50,000 (roughly $850,000 today) from the meat contract. After the Civil War, he invested his earnings in a chain of butcher shops that would become very successful. In 1858, at the age of 24, Peter Widener married Hannah Josephine Dunton Widener, and the couple had three children, Harry (who died in his teens), George, and Joseph. While in his twenties, Widener became business partners with William L. Elkins. He remained involved in politics, and in 1873, was appointed as city treasurer of Philadelphia. The position was the most lucrative political office in the city since the treasurer accrued all interest from city deposits as spoils of office. Widener noticed a need for public transport and branched out into streetcar railways, horse-drawn commuter cars that would hold a dozen or so people. The streetcars were considered a necessity for those who needed to commute in the growing city. Automobiles would not be available to the masses for several more decades. By 1883, Widener and Elkins had consolidated all the streetcar lines in Philadelphia and later merged with New York operators. The rapidly growing company expanded to Chicago, Pittsburgh, and Baltimore, owning over 500 miles of tracks. Peter Widener then diversified into railroads, helped organize U.S. Steel and the American Tobacco Company, and invested heavily into Standard Oil. In 1887, Elkins and Widener bought a pair of Victorian townhouses across the street from one another. After all, the two were now family since Widener’s oldest son George married Elkins’ daughter Eleanor. Widener’s vast wealth also enabled him to start an art collection of fine paintings and Chinese porcelain. Yet, being filthy rich was certainly no guarantee that tragedies in life were avoidable. Sadly, in 1896, Widener’s wife Hannah died on board the family’s yacht off the coast of Maine. In the throes of mourning, Widener decided to vacate the townhouse. He commissioned renowned architect Horace Trumbauer to design a new home, a place he could showcase his art and somewhere “comfortable” for him and his family. Discussions between the two resulted in Lynnewood Hall, a 70,000-square-foot Neoclassical Revival mansion built of Indiana limestone. Trumbauer took inspiration from two architectural gems: Prior Park in Bath, England, and Ballingarry in New Jersey. When the Widener family moved to Lynnewood Hall in 1900, the estate had a full-time staff of 100 servants that tended to the house and grounds. 
After he moved, Widener donated the townhome, in honor of his late wife, to the Free Library of Philadelphia; it became the H. Josephine Widener Memorial Library. Across the road from the main house, Widener had a 117-acre farm with chicken houses, stock barns, greenhouses, a half-mile race track with a polo field in the middle, and stables for his thoroughbred horses. In addition to the farm, the property had a power plant, water pumps, laundry, carpentry shop, and bakery, making it virtually self-sustaining. Peter Widener was so concerned about a fire destroying his art collection that he had a hot air heating system installed at the farm and piped the heat approximately 1,500 feet to Lynnewood Hall. Horace Trumbauer also built a carriage house and a gatehouse at Lynnewood Hall. Both exhibit designs similar to that of the main house. The carriage house and stables, later known as Lynnewood Lodge, feature elements from the Petit Trianon at the Palace of Versailles. In 1909-10, Lynnewood Hall went through a renovation; Trumbauer enclosed the swimming pool, added the Van Dyck gallery, and enclosed the east and west porches to create loggias. Peter Widener decorated his palatial estate with the finest furnishings. He was a fanatic about art and antiquities, a self-taught art collector who, with his son Joseph, amassed an internationally renowned collection. The paintings were displayed in Victorian fashion, frame to frame, floor to ceiling. Among his many works of art were 14 Rembrandts, including The Mill, which caused an uproar when Widener purchased it for $400,000. The British did not want it to leave England, but Peter outbid Britain's National Gallery in 1911. The seller was a trustee of the gallery and was expected to give the gallery a discount. Nobody would match Widener's offer. Before being shipped to Philadelphia under protest, it was displayed for two days at the National Gallery. More than 22,000 people came by to get one last look. Peter Widener admitted he overpaid for The Mill but explained to friends it was an investment. The acquisition for that price instantly increased the value of the rest of his Rembrandts. In the 1920s, Trumbauer returned to Lynnewood Hall to redesign the carriage house to provide living quarters for Joseph Widener's son, Peter Jr., and his family; the remodeled building was named Lynnewood Lodge. Peter A.B. Widener became friends and business partners with J.P. Morgan, working closely with him in the steel industry. In 1912, Widener became a 20% stakeholder in the International Mercantile Marine Company, the parent company of the White Star Line, which owned the RMS Titanic. At that time, Peter Widener was nearly in his eighties and declined the offer to ride on the Titanic's maiden voyage. Instead, he sent his son George and his grandson Harry. George was the heir of Lynnewood Hall, had followed his father in business, and was set to take over once Peter stepped down. George Widener, his wife Eleanor, and their son Harry planned to return home on the maiden voyage, following a family vacation in Europe. George hosted several dinner parties aboard the ship in his father's honor. The lavish event was attended by Captain E.J. Smith, whose death was never officially confirmed and was subject to much mystery and speculation. On April 14, 1912, four days into the voyage, the Titanic hit an iceberg. Sadly, both George and Harry lost their lives when the ship sank to the bottom of the Atlantic Ocean. 
Approximately 2,224 people were on board, and more than half lost their lives, making the Titanic one of history’s most devastating marine disasters. Eleanor was lucky enough to survive, boarding one of Titanic’s famously limited lifeboats. After arriving in New York, Eleanor Widener took a private train back to Philadelphia. She devoted herself to charitable work after losing her husband and son to the sea. She would later present Harvard University with the Widener Memorial Library in memory of Harry. In 1931, she donated the Harry Elkins Widener Memorial Science building at the Hill School, Pottstown. Her son was a student at both institutions. On November 6, 1915, after battling a long illness, which some speculated was caused by the grief of losing his oldest son and grandson, Peter A.B. Widener passed away at the age of 80 at Lynnewood Hall. At the time of his death, he was worth over $100 million (equivalent to nearly $2 billion today). His only surviving son, Joseph, inherited Lynnewood Hall. Over the next several decades, the estate would remain central to the Widener family’s activities, remaining in the family even during the Great Depression. Joseph Widener began breeding and racing horses. He shared his father’s love of art and took over the curation of Lynnewood Hall’s renowned art collection. He opened the estate to the public by appointment only between 1915 and 1940. For this reason, the property became known as “the house that art built.” In 1940, Joseph Widener donated the family’s art collection to the National Gallery in Washington, D.C. Joseph E. Widener passed away at Lynnewood Hall in 1943. After Joseph’s death, Lynnewood Hall slowly began to decay. His son, Peter A.B. Widener II, remained active in the family businesses, including breeding and racing horses. As a result, the Wideners and their relatives spent less and less time at Lynnewood Hall. In 1944, only a year after Joseph’s death, the contents of Lynnewood Hall were auctioned off. The sale was such an event, it was covered in a 1944 issue of LIFE magazine. For the first time in its history, Lynnewood Hall became vacant, although a caretaker was hired to keep watch over the empty mansion and grounds. The same year, a Philadelphia developer purchased the 220-acre farm from the original estate and built a housing community named Lynnewood Gardens. However, the mansion didn’t sell after years on the market. None of the Wideners or Elkins family wanted it. The same developer purchased it for $130,000 in 1948. He was not able to find a buyer until 1952. The Reverend Carl McIntire paid $190,000 for the title and another $150,000 to update the electrical system and repair some vandalism damage, according to McIntire’s biographers. He acquired a large amount of surplus paint from the Philadelphia Naval Yard at low cost, and many of the interior walls were painted using this battleship grey paint. McIntire, considered conservative even by evangelical Christian standards, used Lynnewood Hall as a theological school. In 1935, McIntire was removed from the Presbyterian Church after calling their missionaries “too liberal.” He was found guilty of “sowing dissension within the church.” He would go on to form his own Presbyterian Church. For the next 40 years, Lynnewood Hall would be the home of McIntire’s religious school. He had several hundred students attending at a time. 
Reverend Carl McIntire was an avowed anti-Communist and spent much of his time during his radio broadcasts working to expose Communists in America during the height of the Red Scare. Although Lynnewood Hall had full-time residents, McIntire quickly discovered how costly it was to maintain the grounds and the estate. McIntire made his money through contributions from followers, most of whom listened to his 30-minute radio show each week. The show aired from a suburban Philadelphia radio station and was carried by more than 600 stations. The FCC shut him down for violation of the "fairness doctrine," which required radio stations to give equal time to anyone attacked on the air. Religious and civic groups complained about McIntire. According to one clergyman, he used the show to spew his "highly racist, anti-Semitic, anti-Negro, anti-Roman Catholic" views to the world. Not one to be deterred, McIntire, who declared the FCC would have taken Jesus Christ off the air, decided to start his own pirate radio broadcast. He purchased a decommissioned navy minesweeper and set out for international waters off the coast of New Jersey. The U.S. Coast Guard quickly put a stop to it. A two-year legal battle between the FCC and McIntire ensued. He lost the case and hundreds of radio stations bailed on him, separating him from the people who donated close to $4 million annually. Carl McIntire's empire began to crumble. Under his ownership, many of Lynnewood Hall's architectural assets – its one-of-a-kind fountain, marble walls, and mantels – were sold off piecemeal to sustain the school's operations. McIntire was surrounded by people who just wanted money. They auctioned off many of the fine furnishings, took the money, and disappeared. By the 1990s, McIntire's organization began to dwindle. He lost many supporters, which caused funding to dry up, and the maintenance costs continued to rise. Lynnewood Hall had fallen into disrepair. The slate roof eventually collapsed. There was major water damage and the costs to repair were high. McIntire only used a small section of the building, so the damaged areas were sealed off and ignored. Selling off items did not solve Carl McIntire's money troubles. He nearly lost Lynnewood Hall several times through a series of loans he took on, including one from a student named Dr. Richard Yoon. In 1993, McIntire was struggling, but still in possession of the property. At a court hearing, McIntire and Cheltenham Township worked to create an ordinance that would preserve Lynnewood Hall and satisfy the preservation advocates. When the property fell into foreclosure again and a sheriff's sale was scheduled, preservationists hoped Lynnewood Hall would soon be in their hands. However, Dr. Yoon took over the property in late 1996. Dr. Richard Yoon was the head of the First Korean Church of New York. He spent a lot of his time fighting local authorities over taxes and zoning, arguing that Lynnewood Hall was a place of worship and exempt from taxes. The courts disagreed with Yoon, and for the last several years the property has been up for sale. It started at $20 million but has slowly trickled down to $11 million in 2019. Today, the mile-long iron fence that surrounds the estate has kept many wondering about the condition of the mansion. The fate of the Gilded Age mansions in Philadelphia's northern suburbs has often been institutional use or demolition. They become part of a college campus or religious retreat. 
Lynnewood Hall has stood the test of time, built of concrete and steel, and continues to hold up against the elements. However, there are doubts about the economic feasibility of restoration. One estimate for renewing the mansion and grounds begins at $40 million. For now, Lynnewood Hall remains in a state of disrepair, teetering on the brink of abandonment. Since these photos were taken, an advanced security system has been installed at Lynnewood Hall. Anyone attempting to access the property will be arrested. A caretaker lives on the property. No tours are available to the public at this time. For more historic photos, stories, and updates regarding Lynnewood Hall, check out this unofficial Instagram page dedicated to the estate.
Karnataka 1st PUC Geography Question Bank Chapter 8 India

You can download Chapter 8 India Questions and Answers and Notes. The 1st PUC Geography Question Bank with Answers (Karnataka State Board Solutions) helps you revise the complete syllabus and score more marks in your examinations.

1st PUC Geography India One Mark Questions and Answers

State the geographical location of India. (T.B.Qn)
India extends between 8°4' N and 37°6' N latitude and between 68°7' E and 97°23' E longitude.

Name the southernmost and northernmost points of the mainland of India. (T.B.Qn)
The northern tip of India is recognized as 'Indira Col' in Jammu & Kashmir, while the southern tip of the mainland is 'Kanyakumari' or 'Cape Comorin' in Tamil Nadu.

In which island of India is 'Indira Point' situated? (T.B.Qn)
Indira Point is located on Great Nicobar Island.

What is the total geographical area of India? (T.B.Qn)
The total geographical area of India is 32,87,263 sq. km.

Mention the international boundary between India and China. (T.B.Qn)
The McMahon Line is the international boundary between India and China.

Which water bodies separate India and Sri Lanka?
The Palk Strait and the Gulf of Mannar.

What percent of the world's land area is with India?
It is 2.4% of the world's land area.

What is the length of the land boundary of India?
About 15,200 kilometers.

Name the largest island and the smallest islands in India.
Middle Andaman is the largest, and the Lakshadweep islands are the smallest.

Name the largest and smallest states in India.
Rajasthan is the largest state in India and Goa is the smallest.

Why is Jabalpur called the geographical centre of India?
The Tropic of Cancer and the 80° E line of longitude meet at Jabalpur in Madhya Pradesh, so it is known as the geographical centre of India.

Name the state of India which has the longest coastline.
Gujarat has the longest coastline in India.

Which line of longitude is called the Indian Standard Time meridian?
82°30' (82½°) East longitude is taken as the Indian Standard Time meridian.

Name the type of climate that prevails in India.
India has a "Tropical Monsoon" type of climate.

Name the continent where India is located.
India is located in the continent of Asia.

Mention the channel which separates the Andaman and Nicobar Islands.
The 10° N channel separates the Andaman and Nicobar Islands.

What is the total length of the coastline of India?
The mainland of the country has a coastline of 6,100 km. Including the islands, the total length of the coastline of the country is about 7,516 km.

How many states and Union Territories are there in India?
India is divided into twenty-eight states and seven Union Territories. Among them, Delhi is the National Capital Territory.

What is the percentage of India's population in the world?
India accounts for about 17.45% of the world's population (2011 census).

Which latitude divides the Indian sub-continent into two halves?
The latitude of 23½° N (the Tropic of Cancer) divides the Indian sub-continent into two halves.

1st PUC Geography India Two Marks Questions and Answers

Write the latitudinal and longitudinal extent of India. (T.B.Qn)
The latitudinal extent is 8°4' N to 37°6' N and the longitudinal extent is 68°7' E to 97°23' E. The latitudinal and longitudinal extent of India is around 30° each. The country stretches 3,214 km from north to south and 2,933 km from west to east.

Name the water bodies that surround India. (T.B.Qn)
India is a peninsula, located at the northern tip of the Indian Ocean. It is bordered by the Arabian Sea in the west, the Indian Ocean in the south and the Bay of Bengal in the east.

Which latitude and longitude pass through the centre of the country? (T.B.Qn)
- The Tropic of Cancer, 23½° N latitude, passes through the middle of India and divides the country into almost two equal halves.
- The Indian Standard Time meridian, 82½° E longitude, passes through the middle of India (through Allahabad) and is recognized as the standard longitude of the country, used to keep standard time.

What is the growth rate of population and total population of India according to the 2011 census? (T.B.Qn)
According to the 2011 census, the total population of the country was 121.6 crore (1,216 million, or 1.21 billion), which accounts for about 17.45% of the total world population.

Name the international boundaries between India & Pakistan and India & Afghanistan. (T.B.Qn)
- The Radcliffe Line – India and Pakistan (2,910 km), by Sir Cyril Radcliffe.
- The Durand Line – India and Afghanistan (80 km), by Mortimer Durand.

Why has India selected a Standard Meridian with an odd value of 82°30' E?
On an international basis, the globe has been divided into 24 time zones (each of 15° of longitude). In every zone, the local time of the central meridian (a multiple of 7°30') is taken as the standard time of the entire zone. 82½° E is exactly divisible by 7°30', a convention adopted by almost all countries of the world when they selected a standard meridian for their respective territories.

Name the states through which the Tropic of Cancer passes.
The Tropic of Cancer, 23½° N latitude, passes through the states of Gujarat, Rajasthan, Madhya Pradesh, Chhattisgarh, Jharkhand, West Bengal, Tripura and Mizoram.

How is India a peninsular country?
India is a peninsular country because it has water bodies on three sides: the Indian Ocean lies in the south, the Arabian Sea lies in the west and the Bay of Bengal lies in the east.

Why is the Indian sub-continent so called?
- India and the adjoining countries are considered a sub-continent because the region has all the characteristics of a continent.
- The Indian sub-continent encompasses vast areas of diverse landmasses, comprising lofty mountains, fertile plains, desert and plateau.
- There is also great vastness and diversity in terms of climate, natural vegetation, wildlife and other resources.
- The vivid characteristics of culture and tradition among the people also make it a sub-continent.

State the neighbouring countries of India.
Pakistan and Afghanistan in the north-west; China, Nepal and Bhutan in the north; Myanmar in the east; Bangladesh in the north-east; and Sri Lanka and the Maldives in the southern oceanic zone.

1st PUC Geography India Five Marks Questions and Answers

Explain the location, size and frontiers of India. (T.B.Qn)

Location: The mainland of India extends between 8°4' N and 37°6' N latitude and 68°7' E and 97°23' E longitude. The latitudinal and longitudinal extent of India is around 30° each. The country stretches 3,214 km from north to south and 2,933 km from west to east. The northern tip of India is recognized as 'Indira Col' in Jammu & Kashmir, while the southern tip of the mainland is 'Kanyakumari' or 'Cape Comorin' in Tamil Nadu. In the same way, the western and eastern tips of the country are the 'Rann of Kutch' in Gujarat and 'Luhit' in Arunachal Pradesh respectively. The territorial limit of India extends up to 6°45' N latitude; 'Indira Point' is situated at this latitude in the Great Nicobar Islands. As a peninsular country, India has both land and water frontiers. The total length of the land frontier of the country is 15,200 km. The mainland of the country has a coastline of 6,100 km; including the islands, the total length of the coastline of the country is about 7,516 km. The territorial waters extend into the sea to a distance of 12 nautical miles (22.2 km) from the coastal baseline. India is a peninsula, located at the northern tip of the Indian Ocean. It is bordered by the Arabian Sea in the west, the Indian Ocean in the south and the Bay of Bengal in the east, and is covered by land in the north (China, Nepal, Bhutan, etc.). The Tropic of Cancer, 23½° N latitude, passes through the middle of India and divides the country into almost two equal halves. The Indian Standard Time meridian, 82½° E longitude, passes through the middle of India (through Allahabad) and is recognized as the standard longitude of the country, used to keep standard time.

Size: India is the 7th largest country in the world, next to Russia, Canada, China, the USA, Brazil and Australia. It has a total geographical area of 32,87,263 sq. km. This constitutes about 2.4% of the total land area of the Earth. India is the second most populous country in the world, next to China. According to the 2011 census, the total population of the country was 121.6 crore, which accounts for about 17.45% of the total world population. India has 28 states, 6 union territories and one National Capital Region (New Delhi).

Frontiers: India has a 15,200 km long land frontier extending from west to east, running from Gujarat in the west to West Bengal in the east. The Himalayas form a natural boundary in the north, between India and China. Similarly, the Thar Desert in the west and north-west and the eastern hills act as boundaries between India & Pakistan and India & Myanmar respectively. India shares a land frontier with seven countries: Pakistan and Afghanistan to the north-west; China, Nepal and Bhutan to the north; and Bangladesh and Myanmar to the east. The important international boundary lines demarcated between India and its neighbouring countries are:
- The Durand Line – India and Afghanistan (80 km), by Mortimer Durand.
- The McMahon Line – India and China (PRC) (3,488 km), by Henry McMahon.
- The Radcliffe Line – India and Pakistan (2,910 km), by Sir Cyril Radcliffe.
- India and Bangladesh share a boundary of 4,097 km.
Sri Lanka, an island country situated to the south-east, is separated from India by the Palk Strait and the Gulf of Mannar.
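The standard-meridian answer above is really a short piece of arithmetic: the Earth turns 15° of longitude per hour, and zone meridians are taken at multiples of 7°30'. As an illustrative aside that is not part of the original question bank, here is a minimal Python sketch of that arithmetic; the variable names are my own.

```python
# Illustrative sketch: why the 82.5 degrees E meridian gives India a UTC+5:30 standard time.
# The Earth turns 360 degrees in 24 hours, i.e. 15 degrees of longitude per hour.
DEGREES_PER_HOUR = 360 / 24          # 15 degrees per hour

standard_meridian = 82.5             # Indian Standard Time meridian (82 deg 30 min E)
offset_hours = standard_meridian / DEGREES_PER_HOUR

hours = int(offset_hours)
minutes = int(round((offset_hours - hours) * 60))
print(f"IST = UTC+{hours}:{minutes:02d}")   # -> IST = UTC+5:30

# Zone meridians are chosen in steps of 7.5 degrees (half a zone width),
# which is why 82.5 is an odd-looking but convenient value: it is an exact multiple of 7.5.
print(82.5 % 7.5 == 0)               # -> True
```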
In this condition, ASTM AH36 steel and A36 steel are the same. However, at most times, ASTM A36 steel means another steel type: A36 mild steel. A36 mild steel is for general construction and is seldom used for shipbuilding. A36 mild steel strength is lower than that of ASTM AH36.

What is A36 steel? ASTM A36 has a yield strength of 36,000 psi and an allowable bending stress of 22,000 psi. The properties of ASTM A36 steel allow it to deform steadily as stress is increased beyond its yield strength. This ductility allows buildings to stand long after the limits of a structure have been met in an emergency. ASTM A36 is the most commonly available of the hot-rolled steels. It is generally available in round rod, square bar and rectangle bar, as well as steel shapes such as I-beams, H-beams, angles, and channels; the hot-roll process means that the surface of this steel will be somewhat rough. The low carbon content allows A36 steel to be easily machined, welded, and formed, making it extremely useful as a general-purpose steel, and it also prevents heat treatment from having much of an effect. A36 usually contains small amounts of other alloying elements as well, including manganese, sulfur, phosphorus, and silicon. A36 plate is the highest-demand hot-rolled steel plate in the world, as engineers specify ASTM A36 for more structural steel fabrications than any other type of plate: it is among the least expensive of the carbon steels, has very good welding properties along with 36 ksi yield strength, and is considered a mild steel due to its hardness.

What is the difference between SA36 and A36 metals? There are times when steel named A36 and SA36 are the same, especially if they are used in boilers and pressure vessels, but this is not always the case. An A36 steel can be different from SA36 in pressure applications, but all SA36 steel includes the A36 designation, because the SA36 standards are based on the ASTM standards. Most structural steel used in the US is A36 (36 ksi minimum yield, as specified by the American Institute of Steel Construction, AISC), which has a carbon content of 0.25 to 0.29%. As one engineer put it (Sep 11, 2007): SA-36 is the ASME standard for one type of plate, and it is based on ASTM A36; generally, they are going to be the same. Sometimes the ASME specification is based on an older version of the ASTM specification, and it is possible that ASME would have other requirements. For vessel work you would refer to SA-36; for structural and other uses, A36.

ASTM A36 versus ASTM A572 Grade 50: A36 is a structural carbon steel (roughly comparable to SS400 and S275), while A572 Grade 50 is a high-strength low-alloy (HSLA) structural steel (roughly comparable to S355). A572 Gr 50 plate, typically 8-300 mm thick, is widely used in engineering structures such as building steel structures, construction machinery, mining machinery, heavy-duty trucks, bridges and pressure vessels, especially where good weldability and toughness are required. On one design-build project, a contractor requested "ASTM A36, Grade 50" rods on the assumption that they were more readily available and thus cheaper; in the AISC Manual, the tensile strength for A36 steel varies from 58-80 ksi, and the yield strength is specified as 36 ksi. Minimum requirements quoted from the American standards are:
- A36: yield strength ≥250 MPa, tensile strength 400-550 MPa [58-80 ksi], elongation 20 / 21 %
- A572 Grade 42: yield strength ≥290 MPa, tensile strength ≥415 MPa, elongation 20 / 24 %
- A572 Grade 50: yield strength ≥345 MPa, tensile strength ≥450 MPa, elongation 18 / 21 %
- A572 Grade 55: yield strength ≥380 MPa, tensile strength ≥485 MPa, elongation 17 / 20 %
Both ASTM A27 cast steel and ASTM A36 carbon steel are iron alloys, with 28 material properties having values for both materials.

Is A500 tube steel the same as A36? One welding-forum question (Sep 14, 2007) asks: "I will often times see the A500 designation on tube steels, yet I have always assumed that they were in fact of an A36 material type, is this an incorrect line of thinking?" The reply, by pipewelder_1999, begins: "The A500 …"

Practical difference between A36 and 1018 (blacksmithing forum, Jul 23, 2011): the original question concerned a railing job with basic twisted spindles and scrolls. With A36, on bar 3/4" and under, manganese is not added as an alloy; it is added on sizes above 3/4" to impart strength, so on pieces 3/4" and less the carbon content is supposed to not exceed 0.26%. The difference between 1018 (about 18 points of carbon) and 0.26% is 0.08% carbon, which is not a big deal for most forge work. More generally, A36 is a performance standard (the name comes from an ASTM standard for structural carbon steel, with "36" reflecting the minimum yield strength requirement) with a fairly loose allowable range for chemistry, while the composition standard for 1018 is tighter. In practice, 1018 is commonly sold as A36, since it meets the standard, but A36 at the minimum allowable strength for that grade will be substantially weaker than 1018. A36 is generally a tad harder under the hammer, so it is not as good for very "florid" ornamental work where "pure iron" might be just the thing. Posters also note that 1020 used to be common off the rack, but nowadays most everything at the steel supplier is A36; that the real problem is not the A36 spec but unscrupulous vendors who will sell whatever they have on the shelf and call it A36; and that part or all of a bar can go to "cottage cheese" as you work it hot, which may be corrected by working it cooler. The spec for A36 can let the carbon content go as high as 0.29% and it can contain many more impurities, and more carbon makes it harder to forge. You also generally pay about twice as much for cold rolled steel as for hot rolled steel.

What is the difference between A36 and A1011? For A36/SA36 material, the tensile and yield strength are moderate; variations in tensile strength allow for the differences in carbon, manganese, and silicon content. One engineering forum question (Jul 15, 2009) illustrates the thickness issue: for a structural application requiring ASTM A36 plate of 3 mm thickness, a supplier offered ASTM A1011 Type 1 Grade 36, indicating that ASTM A36 is for plates of 5 mm and over while A1011 is applicable for 6 mm and under.

What is the difference between steel grade AH36 and steel grade A36? Steel grade A36 is only a general-strength steel, while steel grade AH36 is a high-strength steel. In the second condition, steel grade AH36 and steel grade A36 are the same.

What is the difference between A36 steel and 44W (grade 300W)? There isn't a difference: 44W is the Canadian version of America's A36.

What is the difference between ASTM A6 and ASTM A36? A6 is a broad standard for rolled structural steel bars, plates, shapes, and sheet piling, while A36 is a specific standard for carbon structural steel. Additional product specifications (including A36) prevail over those of the A6 specification.

AH36 shipbuilding steel plate: shipbuilding steel plate AH36 is a high tensile strength steel. ASTM A131 AH36 shipbuilding steel plates can be used in the manufacture of a ship's hull structure whose weight is below 10,000 tons.

ASTM A36 versus SAE-AISI 1020: both ASTM A36 carbon steel and SAE-AISI 1020 steel are iron alloys, and their average alloy composition is basically identical.

Difference between AH36 and DH36 steel: AH36 material is a normal-temperature steel, and DH36 is a low-temperature steel. Shipbuilding steel plate, manufactured under classification society rules, is widely used in building ship structures. The main practical difference reported between the two grades is toughness; Chapel Steel, for example, stocks AH36 in the control-rolled condition, DH36 in both the control-rolled and normalized conditions, and EH36 in the normalized condition.

What's the difference between A36 and A50 steel specs? (forum question, Dec 29, 2002)

What is the difference between mild steel A36 and SS400? American ASTM A36 is "Carbon Structural Steel." The difference is that SS400 has no chemical composition assigned, while A36 allows a maximum of 0.25% C and 0.40% Si. When the thickness is over 3/4" to 1 1/2" [20 to 40 mm], inclusive, 0.80-1.20% Mn is also specified for A36.

Q235 steel plate equivalents: comparing A36, SS400 and Q235 plate, the ASTM grade A36 and the Chinese grade Q235 are named according to the yield strength of the material, while SS400 is named for its tensile strength. From the analysis of mechanical properties, A36 and SS400 are closer equivalents, while Q235 matches less closely. ASTM A36 is a widely used mild carbon steel; its commonly quoted properties include Young's modulus of elasticity, density, yield strength, hardness and tensile strength.
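The grade comparisons above reduce to a few headline numbers (for example, A36 at ≥250 MPa yield and 400-550 MPa tensile versus A572 Grade 50 at ≥345 MPa yield and ≥450 MPa tensile). As a hedged illustration only, the following Python sketch encodes those quoted minimums and checks a hypothetical mill test report against them; the `GRADES` dictionary and the `meets_grade` helper are assumptions made for demonstration and are not taken from any ASTM standard text.

```python
# Minimal sketch: check a hypothetical mill test report against the minimum
# mechanical properties quoted above (values in MPa). The dictionary below only
# restates numbers from the comparison list; it is not an authoritative
# reproduction of ASTM A36 / A572.

GRADES = {
    "A36":        {"yield_min": 250, "tensile_min": 400, "tensile_max": 550},
    "A572 Gr 42": {"yield_min": 290, "tensile_min": 415, "tensile_max": None},
    "A572 Gr 50": {"yield_min": 345, "tensile_min": 450, "tensile_max": None},
    "A572 Gr 55": {"yield_min": 380, "tensile_min": 485, "tensile_max": None},
}

def meets_grade(grade: str, yield_mpa: float, tensile_mpa: float) -> bool:
    """Return True if the reported yield/tensile values satisfy the quoted limits."""
    g = GRADES[grade]
    if yield_mpa < g["yield_min"] or tensile_mpa < g["tensile_min"]:
        return False
    if g["tensile_max"] is not None and tensile_mpa > g["tensile_max"]:
        return False
    return True

# Hypothetical test certificate for a plate: 285 MPa yield, 470 MPa tensile.
report = {"yield_mpa": 285, "tensile_mpa": 470}
for grade in GRADES:
    print(grade, "->", meets_grade(grade, **report))
# A36 -> True; A572 Gr 42/50/55 -> False (yield below their minimums)
```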
Formaldehyde and Cancer Risk What is formaldehyde? Formaldehyde is a colorless, flammable, strong-smelling chemical that is used in building materials and to produce many household products. It is used in pressed-wood products, such as particleboard, plywood, and fiberboard; glues and adhesives; permanent-press fabrics; paper product coatings; and certain insulation materials. In addition, formaldehyde is commonly used as an industrial fungicide, germicide, and disinfectant, and as a preservative in mortuaries and medical laboratories. Formaldehyde also occurs naturally in the environment. It is produced in small amounts by most living organisms as part of normal metabolic processes. How is the general population exposed to formaldehyde? According to a 1997 report by the U.S. Consumer Product Safety Commission, formaldehyde is normally present in both indoor and outdoor air at low levels, usually less than 0.03 parts of formaldehyde per million parts of air (ppm). Materials containing formaldehyde can release formaldehyde gas or vapor into the air. One source of formaldehyde exposure in the air is automobile tailpipe emissions. During the 1970s, urea-formaldehyde foam insulation (UFFI) was used in many homes. However, few homes are now insulated with UFFI. Homes in which UFFI was installed many years ago are not likely to have high formaldehyde levels now. Pressed-wood products containing formaldehyde resins are often a significant source of formaldehyde in homes. Other potential indoor sources of formaldehyde include cigarette smoke and the use of unvented fuel-burning appliances, such as gas stoves, wood-burning stoves, and kerosene heaters. Industrial workers who produce formaldehyde or formaldehyde-containing products, laboratory technicians, certain health care professionals, and mortuary employees may be exposed to higher levels of formaldehyde than the general public. Exposure occurs primarily by inhaling formaldehyde gas or vapor from the air or by absorbing liquids containing formaldehyde through the skin. What are the short-term health effects of formaldehyde exposure? When formaldehyde is present in the air at levels exceeding 0.1 ppm, some individuals may experience adverse effects such as watery eyes; burning sensations in the eyes, nose, and throat; coughing; wheezing; nausea; and skin irritation. Some people are very sensitive to formaldehyde, whereas others have no reaction to the same level of exposure. Can formaldehyde cause cancer? Although the short-term health effects of formaldehyde exposure are well known, less is known about its potential long-term health effects. In 1980, laboratory studies showed that exposure to formaldehyde could cause nasal cancer in rats. This finding raised the question of whether formaldehyde exposure could also cause cancer in humans. In 1987, the U.S. Environmental Protection Agency (EPA) classified formaldehyde as a probable human carcinogen under conditions of unusually high or prolonged exposure (1). Since that time, some studies of humans have suggested that formaldehyde exposure is associated with certain types of cancer. The International Agency for Research on Cancer (IARC) classifies formaldehyde as a human carcinogen (2). In 2011, the National Toxicology Program, an interagency program of the Department of Health and Human Services, named formaldehyde as a known human carcinogen in its 12th Report on Carcinogens (3). What have scientists learned about the relationship between formaldehyde and cancer? 
Since the 1980s, the National Cancer Institute (NCI), a component of the National Institutes of Health (NIH), has conducted studies to determine whether there is an association between occupational exposure to formaldehyde and an increase in the risk of cancer. The results of this research have provided EPA and the Occupational Safety and Health Administration (OSHA) with information to evaluate the potential health effects of workplace exposure to formaldehyde. The long-term effects of formaldehyde exposure have been evaluated in epidemiologic studies (studies that attempt to uncover the patterns and causes of disease in groups of people). One type of epidemiologic study is called a cohort study. A cohort is a group of people who may vary in their exposure to a particular factor, such as formaldehyde, and are followed over time to see whether they develop a disease. Another kind of epidemiologic study is called a case-control study. Case-control studies begin with people who are diagnosed as having a disease (cases) and compare them to people without the disease (controls), trying to identify differences in factors, such as exposure to formaldehyde, that might explain why the cases developed the disease but the controls did not. Several NCI surveys of professionals who are potentially exposed to formaldehyde in their work, such as anatomists and embalmers, have suggested that these individuals are at an increased risk of leukemia and brain cancer compared with the general population. However, specific work practices and exposures were not characterized in these studies. An NCI case-control study among funeral industry workers that characterized exposure to formaldehyde also found an association between increasing formaldehyde exposure and mortality from myeloid leukemia (4). For this study, carried out among funeral industry workers who had died between 1960 and 1986, researchers compared those who had died from hematopoietic and lymphatic cancers and brain tumors with those who died from other causes. (Hematopoietic or hematologic cancers such as leukemia develop in the blood or bone marrow. Lymphatic cancers develop in the tissues and organs that produce, store, and carry white blood cells that fight infections and other diseases.) This analysis showed that those who had performed the most embalming and those with the highest estimated formaldehyde exposure had the greatest risk of myeloid leukemia. There was no association with other cancers of the hematopoietic and lymphatic systems or with brain cancer. A number of cohort studies involving workers exposed to formaldehyde have recently been completed. One study, conducted by NCI, looked at 25,619 workers in industries with the potential for occupational formaldehyde exposure and estimated each worker’s exposure to the chemical while at work (5). The results showed an increased risk of death due to leukemia, particularly myeloid leukemia, among workers exposed to formaldehyde. This risk was associated with increasing peak and average levels of exposure, as well as with the duration of exposure, but it was not associated with cumulative exposure. An additional 10 years of data on the same workers were used in a follow-up study published in 2009 (6). This analysis continued to show a possible link between formaldehyde exposure and cancers of the hematopoietic and lymphatic systems, particularly myeloid leukemia. As in the initial study, the risk was highest earlier in the follow-up period. 
Risks declined steadily over time, such that the cumulative excess risk of myeloid leukemia was no longer statistically significant at the end of the follow-up period. The researchers noted that similar patterns of risks over time had been seen for other agents known to cause leukemia. A cohort study of 11,039 textile workers performed by the National Institute for Occupational Safety and Health (NIOSH) also found an association between the duration of exposure to formaldehyde and leukemia deaths (7). However, the evidence remains mixed because a cohort study of 14,014 British industry workers found no association between formaldehyde exposure and leukemia deaths (8). Formaldehyde undergoes rapid chemical changes immediately after absorption. Therefore, some scientists think that formaldehyde is unlikely to have effects at sites other than the upper respiratory tract. However, some laboratory studies suggest that formaldehyde may affect the lymphatic and hematopoietic systems. Based on both the epidemiologic data from cohort and case-control studies and the experimental data from laboratory research, NCI investigators have concluded that exposure to formaldehyde may cause leukemia, particularly myeloid leukemia, in humans. In addition, several case-control studies, as well as analysis of the large NCI industrial cohort (6), have found an association between formaldehyde exposure and nasopharyngeal cancer, although some other studies have not. Data from extended follow-up of the NCI cohort found that the excess of nasopharyngeal cancer observed in the earlier report persisted (9). Earlier analysis of the NCI cohort found increased lung cancer deaths among industrial workers compared with the general U.S. population. However, the rate of lung cancer deaths did not increase with higher levels of formaldehyde exposure. This observation led the researchers to conclude that factors other than formaldehyde exposure might have caused the increased deaths. The most recent data on lung cancer from the cohort study did not find any relationship between formaldehyde exposure and lung cancer mortality. What has been done to protect workers from formaldehyde? In 1987, OSHA established a Federal standard that reduced the amount of formaldehyde to which workers can be exposed over an 8-hour workday from 3 ppm to 1 ppm. In May 1992, the standard was amended, and the formaldehyde exposure limit was further reduced to 0.75 ppm. How can people limit formaldehyde exposure in their homes? The EPA recommends the use of “exterior-grade” pressed-wood products to limit formaldehyde exposure in the home. These products emit less formaldehyde because they contain phenol resins, not urea resins. (Pressed-wood products include plywood, paneling, particleboard, and fiberboard and are not the same as pressure-treated wood products, which contain chemical preservatives and are intended for outdoor use.) Before purchasing pressed-wood products, including building materials, cabinetry, and furniture, buyers should ask about the formaldehyde content of these products. Formaldehyde levels in homes can also be reduced by ensuring adequate ventilation, moderate temperatures, and reduced humidity levels through the use of air conditioners and dehumidifiers. Where can people find more information about formaldehyde? The following organizations can provide additional resources that readers may find helpful: The EPA offers information about the use of formaldehyde in building materials and household products. 
The EPA can be contacted at:
U.S. Environmental Protection Agency
Office of Radiation and Indoor Air
Indoor Environments Division
Mail Code 6609J
1200 Pennsylvania Avenue, NW
Washington, DC 20460
202–554–1404 (EPA Toxic Substances Control Act (TSCA) Assistance Line)
The U.S. Consumer Product Safety Commission (CPSC) has information about household products that contain formaldehyde. CPSC can be contacted at:
U.S. Consumer Product Safety Commission
4330 East West Highway
Bethesda, MD 20814
The U.S. Food and Drug Administration (FDA) maintains information about cosmetics and drugs that contain formaldehyde. FDA can be contacted at:
U.S. Food and Drug Administration
10903 New Hampshire Avenue
Silver Spring, MD 20993–0002
The Federal Emergency Management Agency (FEMA) has information about formaldehyde exposure levels in mobile homes and trailers supplied by FEMA after Hurricane Katrina. FEMA can be contacted at:
Federal Emergency Management Agency
Post Office Box 10055
Hyattsville, MD 20782–7055
The Occupational Safety and Health Administration (OSHA) has information about occupational exposure limits for formaldehyde. OSHA can be contacted at:
U.S. Department of Labor
Occupational Safety and Health Administration
200 Constitution Avenue
Washington, DC 20210
The National Toxicology Program (NTP) is an interagency program of the Department of Health and Human Services that was created to coordinate toxicology testing programs within the federal government; to develop and validate improved testing methods; and to provide information about potentially toxic chemicals to health, regulatory, and research agencies, scientific and medical communities, and the public. NTP is headquartered at the National Institute of Environmental Health Sciences, which is part of NIH. NTP can be contacted at:
National Toxicology Program
111 TW Alexander Drive
Research Triangle Park, NC 27709
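The OSHA limits described above apply over an 8-hour workday, which in practice is usually evaluated as a time-weighted average (TWA) of the concentrations a worker encounters during a shift. As a rough, illustrative aid only, here is a minimal Python sketch of that arithmetic; the sample readings, function names, and the assumption that unmeasured time counts as zero exposure are hypothetical and not drawn from any regulation or dataset.

```python
# Illustrative sketch: compare measured formaldehyde exposure with an
# 8-hour workday limit using a simple time-weighted average (TWA).
# The sample readings below are made up for demonstration purposes.

OSHA_8HR_LIMIT_PPM = 0.75  # exposure limit over an 8-hour workday (see text)

def time_weighted_average(samples, workday_hours=8.0):
    """samples: list of (concentration_ppm, duration_hours) pairs.

    Returns the average concentration over the full workday, counting
    any unmeasured time as zero exposure.
    """
    total_exposure = sum(ppm * hours for ppm, hours in samples)
    return total_exposure / workday_hours

if __name__ == "__main__":
    # Hypothetical personal-sampling results for one shift.
    shift_samples = [
        (1.20, 0.5),   # short task near an emission source
        (0.40, 3.0),   # general work area
        (0.10, 4.5),   # office / low-exposure time
    ]
    twa = time_weighted_average(shift_samples)
    print(f"8-hour TWA: {twa:.2f} ppm")
    print("Within limit" if twa <= OSHA_8HR_LIMIT_PPM else "Exceeds limit")
```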
This post and its photos may contain affiliate links. If you make a purchase through these links, I may receive a small commission at no extra cost to you! Read my full disclosure policy here. Inside: Let’s look at transition strategies and calming techniques to help prevent meltdowns and annoying behavior in kids. This article is relevant for transitions in preschoolers and transitions in the classroom too. These 9 tips for transitions in kids will be worth more than gold. Many kids struggle with transitions. And this statement is especially true if they are playing with their cool T-Rex dinosaur that makes really cool roaring noises and you are pleading with them to come for boring old dinner. (I mean, ew, who likes icky broccoli and chicken). And here’s the unbelievable part: You’d expect trouble coping with a transition like moving from Alaska to Mexico, but kids can (and will) bust-up at the seams for something as simple as transitioning from T.V. time to teeth brushing time. The resulting behavior from this battle can range from just annoying (like whining or complaining) to an all-out meltdown. And it’s important to note: If your kiddo suffers from autism, sensory issues, and/or ADHD, transitions are extra tough. There are many ways for parents (and teachers) to help kids with transitions and I’m going to give you 9 awesome transition strategies for kids. (Tried and tested by this very mom) What are Transitions in Kids? Perhaps you’ve never even heard of transition strategies for kids. So what exactly do we mean when we say transitions? Basically when your child has to move from one activity or location to another. They have to stop doing something and start doing something else, Think: - Getting ready to leave the house - Putting away toys - Turning off the television or computer - Getting out of the bath - Leaving school - Going to school - Coming for dinner - Leaving the park - Leaving a play date Mastering transitions is an important developmental step. Why are Transitions Hard for Kids? Transitions are hard for different kids for different reasons. (The behavior that arises might look similar but the actual underlying cause can be very different) 1. Flexible thinking Transitions require shifts in flexible thinking, and this is hard for many young kids (and even adults). Johnny dreamed about going to the park all day at school. That purple slide was calling his name. After school, when mom says, “Sorry Pal we have to head home and start dinner.” Johnny begins to escalate and panic. Instead of creating a “new plan” like, “I’ll just play in the backyard”, he is struggling with flexible thinking. He can’t switch gears quickly and find a new solution when things change without warning. This rigid thought process can cause a lot of transition-related meltdowns. Inflexible thinking is especially prevalent in kids with autism. These kids need predictability and any deviation from routine can feel incredibly confusing for them. This is cognitive inflexibility and it literally disrupts their entire equilibrium. 2. Toddler life is all about fun Toddlers live in the moment, and isn’t this amazing! But the problem is: They don’t have any real concept of time and they don’t understand that separation and change doesn’t last forever. This causes a ton of transition-related problems for parents. …thank goodness for the transition strategies below.. 3. Lacking language and communication skills Young kids are lacking the proper language skills to articulate exactly how they are feeling. 
Instead of comprising and saying, “Hey mom, just give me five more minutes” Kids might throw a tantrum instead. 4. Temperament and lack of emotional regulation Temperament also plays a big part, some kids are strong-willed (oh my word, have you met my son?!) Some kids are impatient, and some kids are fiery, like a Scottish ticking time bomb. Does your child ever get angry? If your kiddo suffers from BIG emotions, you’ve just gotta check out this life-saving tool, the Anger Rescue Kit for Kids. With over 60 pages this will quickly become your new go-to for helping your child develop appropriate coping strategies. Kids with ADHD have a tougher time regulating their emotions, and they find it difficult to turn their attention to something they are expected to do rather than something they find rewarding. So many kids are suffering from anxiety in today’s day and age. Many kids are scared of the unknown and just want to stay in their environment they’ve become comfortable with. Inflexible thinking that we discussed above also causes immense anxiety in kids. We’ve got you covered there, we’ve got an entire guide on how to overcome anxiety in kids, we’ve also got an Anxiety Kit for Kids which includes over 70 worksheets, quotes, coloring pages, and exercises. Anxiety Kit for Kids Our one of a kind Anxiety Printable Kit for kids is now available. You’ll get a collection of worksheets, posters, activities, and coloring pages designed to help children squash anxiety and worry and bring fun and freedom back into their lives. So now that we’ve looked at why some kids struggle with transitions, let’s look at ways to ease transitions in kids. Be sure to bookmark these transitions strategies for later! 9 Smart Transition Strategies for Kids 1. Give Warnings (On The A+ Transition Strategies List) If your child is engrossed in a make-believe world of magicians and monsters, you can’t just say, “Hey, let’s go.” Time is a blurry concept for toddlers. We need to give warnings, so many warnings in fact, that you might think you are morphing into a weather news reporter. It’s crucial to prepare your kiddo for what’s coming up. And don’t just think about giving warnings for when moving from one activity to the next, provide an overview of the entire day! In our house, each morning, we review the entire day’s plan. We have a family calendar that shows what each member of the family is doing. To help with warnings, use timers and/or visual schedules (jump to number 2). Do yourself a favor and give yourself the FREE gift of organization this year. You’ve just gotta get the free Cozi App. It’s a surprisingly simple way to manage everyday family life. With a shared calendar, reminders, grocery list and more! Cozi is a 3-time Mom’s Choice Award Winner! 2. Use a Timer or a Visual Schedule Children do amazingly well with visual cues. A visual schedule shows what is happening next, it creates predictability. Mornings and evenings are often the worst time of day for transitions, but by using visual schedules you can prep your child for each activity that is coming. (And it’s fun too) Along with visual schedules use timers! Remember to bring your child’s attention to the timer, and even take it one step further, let your kiddo know when there are five minutes left, two minutes left, etc. This is the EXACT timer we have, and let me tell you, for only $18 it has earned its cost 100 times over. 
We use it to get out the door in the morning, (snag our exact morning routine here) to ensure a stress-free bedtime routine, to monitor screen time, and I use it for cooking too! Can you say, $18 well spent! If you are playing a game and need to transition to say dinner, you can also offer turns, like three more turns, then we stop. 3. Allow More Time Between Activities Are you guilty of jam-packing too much into your child’s day? Perhaps you need to space our activities or take a red pen to the calendar. We as parents are often guilty of feeling like our children need to be entertained in some manner during every part of every day. Mom, you aren’t perfect, you don’t need to try to be perfect and your kids don’t need to be perfect either. Kids need downtime, just like we need downtime. They need to chill, pretend play, be silly, and just, well — be kids. So reconsider if a piano lesson and a swimming lesson on the same night is a good idea. Are your expectations sky-high in relation to the age of your child? Eyeball the schedule for the entire week and re-evaluate. Set the stage for success! 4. Make Transitions Fun and Positive You could also call this transition strategy the magic art of distraction. And mastering distraction will be your magic secret weapon. Of all the transition strategies for kids I use, this might be my favorite: I’ll often say, “Let’s hop like a kangaroo to the car”, or, “let’s sing this fun song while we put on our coats and mitts.” More recently we pretended we were butterflies fluttering to the car. (…before the butterfly fluttering, my son was starting his usual fussing over snow pants, but once we turned the transition into a fun game the fussing stopped, instantly). The funner (is that a word?!) the transition the easier and less anxious your kiddo will be. So get creative and think of some amusing games you can play while moving throughout those key transitions in your day. Another great tip to make transitions fun is to point out the positive side to the transition, this can help direct them away from whatever is making them fuss up and get them excited about what’s coming up. - We’ve got to go because we are going to Suzy’s birthday party, there will be a clown there! - Why don’t we play eye spy on the way to school? - Hey, once we get to the car I’m got a big secret to tell you. - Let’s shut the TV off for dinner, and after dinner, we will make ice cream cones. - After we brush your teeth, you can pick out 3 stories. Make transitions even MORE fun with these 10 transition songs from teaching mama. You could also use the first-then transition strategy where you tell your child what they need to do first, (usually the less preferred activity) followed by a then task, which is a motivating activity. First, we get dressed for school, then we can play hide-n-go-seek for five minutes before school. When talking about transition strategies for children, the first-then strategy is used by therapists around the globe. 5. Choose your Timing Carefully This one is often missed by parents, but put on your Nancy Drew cap because it can make a BIG difference. Can you stop the activity and suggest the transition you’d like during a natural break in your child’s activity? -Can you wait until your child’s cartoon is over to ask them to come for lunch? -If your child is playing with dinky cars and doesn’t want to get ready for school, perhaps allow an extra few minutes until they bore of bashing their dinky cars together. 
(I assure you they will tire and move onto another activity at some point.) Be patient and mindful. (Oh, and mindfulness is our main motto, so check out some super tips here) And to reinforce point #1, give warnings before ANY change in activity. I’ll often say you have five more minutes. Then I’ll count down when we are at two minutes, and then again at one minute. I love this book the Color Monster: A Story About Emotions. Teaching kids about emotions can help then keep calm and collected, even when they feel frustrated! 6. Offer Sensory Breaks Kids often get overwhelmed, especially kids with sensory issues, autism, and/or ADHD. If you are trying to transition an overwhelmed child, well, you might as well put a lion in an elephant’s den. There’s going to be a scrap. Here’s the bottom line: If your child is feeling overwhelmed, transitions will be especially problematic and they will be especially prone to meltdowns. No amount of transition strategies are going to work at this point! Don’t wait until your kiddo is in the red volcano zone. When you notice they are getting frustrated, silly, tired, and/or argumentative (all signs they are escalating to the red zone), take a sensory break. Have you considered using calm down cards? Breathing breaks are also wonderful to help calm an overwhelmed child. 7. Be Consistent and Be Calm Try to keep your routine as consistent as possible. Kids thrive on consistency and routine. They actually like structure. (Even if they don’t know it). For transitions that must occur every day like turning off the TV to brush teeth, routines can pay off for parents – big time! Shoot for making the same set of transitions at the same time each day, as much as you possibly can. And most importantly, be calm. If you are escalating to the red zone (think yelling, stomping, arguing) then how are you going to keep things moving peacefully along? World War 3 is going to erupt. Anytime I escalate, the situation with my child becomes 10 times worse, and in the somber aftermath of this civil war, I’m left feeling like a terrible parent. Use our three-step process to stop mom anger in its tracks (and this process takes under 1-minute) When speaking with your child or trying to get their attention, be sure to come down to their level. Bend down and look them straight in the eye. This has a huge impact on getting kids to focus and listen. 8. Offer Choices Giving your child choices can make them more co-operative. Kids like control, I bet you do too. Is it possible for you to give up a little control, and let your child make more decisions? You might be surprised at how their willingness to cooperate changes. Is it really the end of the world if your daughter wears a purple tutu with a fluorescent green top to school? My motto is this: if something isn’t putting my child in danger or harming their development, then who cares! When offering choices, don’t give them the option to completely defy the request, you need to be clever here. Instead of saying, “Do you want to put on your mitts? Say, “Do you want your fuzzy mitts or your bear mitts?” Our little people are smart, but we are smarter. 9. Give Praise and Avoid Threats Always, always, always tell your child when they are doing a good job. Kids will strive to do better if they think they are doing well. Kids actually want your approval, despite what it may seem sometimes. Try to avoid threats! Threats are for bullies. Instead, you could consider using rewards as a motivator. 
Some ideas are stickers, snacks or a point system that leads to a reward. Once your child is used to transitioning then you can work to phase it out. One of our biggest struggles has always been getting to the car to leave for school. We have a big green bucket in the car with treats and little toys. On good days my son picks one item from the big green bucket. It works wonders. Good luck my friends, parenting a high spirited child is not for the faint of heart! But there are ways we can win, and with these transition strategies, we can win and keep it positive! I hope you love these 9 ways to ease transitions in kids. Leave a comment below and tell me what your biggest transition struggles are, and what you do to sail smoothly through. What you Should Do Next… 1. Subscribe to My Newsletter: Signup for my newsletter for tips, resources, and lots of free printables to help you create a happier home or classroom. Plus, when you subscribe, I’ll also send you a copy of our strategy-packed guide, 12 Mini Mindfulness Activities for Kids. 2. Get the Toolkit and Put the FUN back into Parenting! If you want even MORE tips and strategies for raising resilient, mindful, happy kids, check out The Positive Parenting Toolkit (for busy parents or teachers ready for change at 77% off the regular price). Plus, for a limited time, get FREE bonuses worth $25 — completely risk-free and with lifetime access. 3. Discover the Calm Confident Kids Toolbox Your new bestie has arrived. The Calm Confident Kids Toolbox, a favorite among teachers, practitioners, and parents, is here. For a limited time, you get a collection of our best-selling resources worth over $100 for over 78% off.
Technology has brought great advances and conveniences, but it also comes with the cost of privacy. You've seen many examples in the news. The NSA has been caught spying on German chancellor Angela Merkel and her closest advisers for years. WikiLeaks co-founder Julian Assange says the NSA intercepts 98 percent of South American communications. You'd fight for free speech if anyone threatened to take it away. Yet ISPs, technology companies, and the government are all threatening to take away our privacy, and we're standing by and letting it happen. Even if you have nothing incriminating to hide, you still have sensitive information on the internet, and the right to privacy. Here are some of the organizations that are spying on you, and some of the simple steps you can take to protect yourself and your information.
Who's spying on us?
Few organizations have caught as much of the spotlight as the National Security Agency (NSA). But even outside of the States, many governments have their own version of the NSA. The most prominent ones are:
- UK's Government Communications Headquarters (GCHQ)
- Communications Security Establishment Canada (CSEC)
- Australian Signals Directorate (ASD)
- New Zealand's Government Communications Security Bureau (GCSB)
Together with the NSA, they form the Five Eyes alliance. These government organizations regularly collaborate on spy programs with silly code names, but their work is no laughing matter. The government can call upon technology companies to learn about you. Although technology companies wouldn't want to rat out their own customers, they may simply have no choice. Yahoo CEO Marissa Mayer said executives faced jail if they revealed government secrets. Google has even made a petition for greater transparency. So technology companies are forced to work with the government. Yahoo has complied with government requests for information.
Technology companies know quite a bit about you
Both Apple and Google track your phone's movements with location-based services. Google scans your emails in order to serve you more relevant advertisements. Apple stores your iMessages. Dropbox reads your files. As if jail wasn't compelling enough, the government is also rumored to spy on technology companies. "It's really outrageous that the National Security Agency was looking between the Google data centers, if that's true," said Google's Executive Chairman Eric Schmidt to the Wall Street Journal. "The steps that the organization was willing to do without good judgment to pursue its mission and potentially violate people's privacy, it's not OK." Even if you have nothing to hide, you have the right to your privacy. Here's how you can protect your data from prying eyes.
How can you protect yourself from people spying on you?
Before we proceed, it's important to hammer this point home: there is no protection or system that is completely, 100 percent guaranteed, safe from hackers. Given enough time and money, an experienced hacker can hack into any system. (There are people attempting to create a system that can't be hacked for 100 years.) Surveillance organizations and technology companies have both time and money. That means yes, they could hack into your computer if they were specifically targeting you. However, it's unlikely they'd dedicate their resources to zero in on the average citizen. It would cost them too much time and money if they scaled that up across the board.
Imagine if every citizen made it more difficult (and therefore expensive) for these organizations to spy on them. It would become more expensive for these programs to keep an eye on everyone. That would make it more difficult for them to keep a close eye on the majority of people. A simple, but fundamental, step to privacy is to encrypt your data. Whether it’s the government or some hacker spying on you, encryption makes your information way harder to read. Encryption codes the information that’s transferred between you and the website you’re visiting. Any prying eyes (e.g., the government, hackers, etc.) have to put more time and energy into decoding the encrypted information before they can read it. The next time you use your Web browser, have a look at the URL bar. You can tell your communication with a website is encrypted when there’s a green padlock and an “https://” preceding the website address. Although many sites support HTTPS, some of them may not enable it by default (keeping you on an unencrypted http:// connection). Use a plugin like HTTPS Everywhere to ensure you connect via HTTPS as often as possible. Some padlocks also feature a company’s name beside it (like PayPal, Inc.). That means the company has an extended verification certificate, which provides the strongest encryption level available (and requires more rigorous testing and validation). You can add an extra layer of encryption to your data by browsing through a Virtual Private Network (VPN). “The first thing I’d recommend to the average person on the street is whenever you’re out in the public…use a VPN service,” says former “Most Wanted Hacker” Kevin Mitnick in an interview. “It takes your data and puts it in an encrypted envelope so people can’t really intercept it and spy on that.” Also, put your data in the hands of technology companies that encrypt it. Edward Snowden, for example, recommends using SpiderOak instead of Dropbox (or at least protect your Dropbox folders with Truecrypt). You could use DuckDuckGo instead of Google. (If you miss Google’s powerful search algorithm, just use the !g function in DuckDuckGo.) Chat with OTR instead of Skype. Have a look at this privacy pack put together by Reset the Net. Keep your eyes peeled for technology that uses end-to-end encryption. End-to-end encryption ensures that your data only gets decrypted once it’s opened by the recipient, meaning that the technology companies wouldn’t be able to read the data in transit even if they were forced to pass it along to the government. You know it’s probably effective as the FBI and Department of Justice want companies to ease off end-to-end encryption. How do the pros protect their information? It’s tough to find people that protect their privacy well as they don’t tend to advertise themselves online. There are certain experts like journalists and security specialists that work with sensitive information. As such, they’ve set up systems to protect their information as much as possible. You can use their methods to set up a more secure system of your own. The NSA can’t read the information on your computer if you’ve never been connected to the Internet. If you have extremely sensitive information, consider investing in a computer that’s never touched the Internet (known as an “airgap”). Columnist Bruce Schneier writes at The Guardian: Since I started working with the Snowden documents, I bought a new computer that has never been connected to the Internet. 
If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my Internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it’s pretty good. If you plan to use an airgap, you might also want to remove any network chips, bluetooth chips, or even microphones and webcams from your new computer before using it. Along a similar vein, you could also use an operating system that’s bootable from a USB drive, and browse incognito. Tails is an operating system which forgets your activities after you unplug. Journalists working with Edward Snowden relied on it for secure communication. “Privacy and encryption work, but it’s too easy to make a mistake that exposes you,” writes journalist Barton Gellman. “Tails puts the essential tools in one place, with a design that makes it hard to screw them up. I could not have talked to Edward Snowden without this kind of protection. I wish I’d had it years ago.” Tails allows you to use GPG encryption when you are emailing and/or OTR encryption while instant messaging, with little setup required. These types of encryption come recommended by CDT’s senior staff technologist, Joe Hall. GPG and PGP encryption are standards that allow you to encrypt and decrypt files and emails using a public/private keypair. (Here’s an intro to how PGP and cryptography work.) Tails also allows journalists to work on sensitive documents, edit audio and video, and store all their files in an encrypted format. Additionally, Tails routes your web connections through the Tor network by default. The Tin Hat explains Tor pretty simply: Tor offers a great degree of anonymity and privacy by encrypting your Internet connection and sending it through three servers placed around the globe. In case you’re curious to learn more, we’d suggest going deeper into how journalists and security specialists handle sensitive information. For example, learn from this article how Edward Snowden leaked his information to the world. (Here’s another one.) If you have some sensitive information that you want to share with the press, use an encrypted service like SecureDrop. Start with the basics There’s a lot of information in this piece. Don’t drive yourself crazy with paranoia. Just remember that it all starts with making your information a bit more difficult to read through encryption. Use software that has end-to-end encryption built-in. VPNs are a simple solution that quickly ensure your information is at least a bit more challenging to read. If you ever do want to turn your privacy up a notch, encrypt emails with crypto technology and use airgaps and encryption-focused operating systems. Even if you have nothing to hide, you have the right to privacy. It’s your responsibility to protect it while you still can.
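Most of the advice above comes down to one habit: encrypt data before it leaves your hands. As a concrete illustration of the "encrypt it, then move it" workflow Schneier describes, here is a minimal Python sketch using the third-party cryptography package's Fernet recipe for symmetric file encryption. It is only a sketch of the idea, not a substitute for audited tools like GPG, Tails, or a VPN, and the file names are placeholders.

```python
# Minimal "encrypt before you move it" sketch using the `cryptography` package
# (pip install cryptography). Illustrative only -- not a replacement for GPG.
from cryptography.fernet import Fernet

def generate_key(path="secret.key"):
    """Create a random symmetric key and save it; keep this file off untrusted machines."""
    key = Fernet.generate_key()
    with open(path, "wb") as key_file:
        key_file.write(key)
    return key

def encrypt_file(key, src, dst):
    """Encrypt src and write the ciphertext to dst (safe to carry on a USB stick)."""
    with open(src, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(dst, "wb") as f:
        f.write(ciphertext)

def decrypt_file(key, src, dst):
    """Reverse the process on the receiving machine, using the same key."""
    with open(src, "rb") as f:
        plaintext = Fernet(key).decrypt(f.read())
    with open(dst, "wb") as f:
        f.write(plaintext)

if __name__ == "__main__":
    key = generate_key()
    encrypt_file(key, "notes.txt", "notes.txt.enc")          # placeholder file names
    decrypt_file(key, "notes.txt.enc", "notes_restored.txt")
```

The hard part of a scheme like this is getting the key to the other side safely, which is exactly the problem the GPG/PGP public/private keypairs mentioned above are designed to solve.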
Pages: 10 (2,943 words) Topic: Department of Energy Document Type: Research Paper
How the DOE Used the Acquisition Process to Demolish a Contaminated Building
Today, many organizations lack the resources to engage in a formal acquisition process while others rely on acquisition processes that are specially designed for a specific project. In either case, these organizations may fail to achieve optimal outcomes due to these types of approaches to the acquisition process. One organization that has recognized the importance of using a formal, standardized acquisition process is the U.S. Department of Energy, which oversees dozens of major projects each year. The purpose of this paper is to provide a detailed review of a major program that has been managed, via the acquisition process, over the past decade, by the Department of Energy at the Y-12 National Security Complex. A description of the demolition project is followed by a discussion concerning the acquisition process that was used to guide the project. Finally, a summary of the research and important findings about this program are presented in the conclusion.
Review and Discussion
Overview and Background
In this context, the acquisition process can be regarded as comprising three basic elements: (1) identifying requirements; (2) acquisition of requisite supplies and commercial vendor contracts; and (3) obtaining needed funding to achieve a project's goals. These steps were closely followed in the demolition and disposal operations conducted by the U.S. Department of Energy (DOE) in its Alpha 5 project. The stated mission of the DOE is "to ensure America's security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions" (About DOE, 2020, para. 1). With more than 14,000 employees and operations that span the country, the DOE's mission has assumed new importance and relevance in recent years, especially following the terrorist attacks of September 11, 2001. Maintaining the DOE's far-flung network of facilities demands ongoing attention, though, including one of its most important resources, the Y-12 National Security Complex. According to the description provided by the DOE, "The Y-12 National Security Complex is a premier manufacturing facility dedicated to making our nation and the world a safer place and plays a vital role in the Department of Energy's Nuclear Security Enterprise. Y-12 helps ensure a safe and effective U.S. nuclear weapons deterrent" (About Y-12, 2020, para. 3). The multiple responsibilities assigned to the Y-12 complex include the storage and retrieval of nuclear materials, the provision of fuel for the country's naval nuclear fleet, and collaboration with other public and private sector organizations in furtherance of these responsibilities. Since its creation nearly three-quarters of a century ago, the Y-12 complex has become an increasingly important strategic asset for the United States.
Over the past decade, the Y-12 National Security Complex has launched a number of major remediation projects, including the following:
· New On-Site Disposal Facility Planned
· Outfall 200 Mercury Treatment Conceptual Design Project
· Mercury Recovery Project
· Y-12 Surveillance & Maintenance Corrective and Preventative Maintenance
· Old Salvage Yard Scrap Removal
· Building 9735 Demolition
· Alpha 5 Project
· Beta 3 (9204-3) Legacy Material Disposition Project
· Beta 4 Legacy Material Disposition Project
· Biology Complex and Building 9769 Deactivation and Demolition Project (Y-12 National Security Complex cleanup projects, 2020).
Although each of these major projects relied on the acquisition process to achieve its intended outcome, this paper focuses on the Alpha 5 Project, which is discussed below. Targeted at Building 9201-5, the largest building on the Y-12 complex, the Alpha 5 project involved a space that measured a massive 613,642 square feet (nearly as many square feet as the capacious Pentagon). The former location of Alpha 5 (Building 9201-5) is shown in Figure 1 in Appendix A. Completed in May 1944, Building 9201-5 (Alpha 5) operated in a number of different capacities over the years, including as a production facility for the National Nuclear Security Administration Weapons Plant. Prior to its recent demolition, the Alpha 5 building comprised a massive basement and four floors, which contained a wide array of equipment from its past operations. For example, Alpha 5 played an important role in hastening the end of World War II by serving as a uranium enrichment facility during the Manhattan Project. Following this historic contribution to the nation's security, Alpha 5 was renovated to serve other purposes, and its mass spectrometers were converted in support of other strategic missions that also involved the manufacture and/or processing of extremely toxic substances; this work resulted in the spillage of approximately 500,000 pounds of mercury. In addition, other toxic substances such as beryllium were identified throughout 60% of the Alpha 5 building. Although beryllium levels were relatively low in some parts of the building, there were some "hot spot" concentrations as well. According to the DOE, "The primary site related contaminants for the waste in this facility are enriched uranium, depleted uranium, beryllium and mercury" (Birchfield & Albrecht, 2012, p. 3). In other words, Alpha 5 was a major environmental threat for DOE employees and the public alike, and these threats were well recognized by top DOE officials. In this regard, Birchfield and Albrecht (2012) emphasize that, "The Alpha 5 facility was built in 1944 and supported a number of missions that used materials such as uranium, mercury, and beryllium. Since it ceased operations in 2005, this highly contaminated facility has experienced significant degradation" (p. 3). The sheer size of the Alpha 5 facility combined with the wide array of highly toxic chemicals and radioactive materials made remediation and subsequent demolition… …demolition as well as the work that was required to demolish and dispose of the contaminated building materials that were involved. For example, the U.S. Office of Environmental Management's Standard review plan: Acquisition strategy review module (2010) describes the role of the Alpha 5 project team leader as including the responsibilities set forth below.
· In coordination with the Federal Project Director, selects the areas to be reviewed; · Based on the areas selected for review, project complexity and hazards involved, selects the members of the review team; · Verifies the qualifications: technical knowledge; process knowledge; facility specific information; and independence of the Team Members; · Leads the acquisition strategy review pre-visit; · Leads the review team in completing the Review Criteria for the various areas to be reviewed; · Coordinates the development of the data call and forwards to the Federal Project Director, a list of documents, briefings, interviews, and presentations needed to support the review; · Forwards the final review plan to the FPD and Environmental Management (EM) management for approval; · Leads the on-site review; · Ensures the review team members complete and document their portions of the review and characterizes the findings; · Coordinates incorporation of factual accuracy comments by Federal and Contractor personnel on the draft report; and, · Finally, forwards the final review report to the Federal Project Director and Environmental Management for consideration in making the decision to authorize start of construction (Standard review plan, 2010, p. 2). Although the precise acquisition process that was followed by the Alpha 5 project team was also guided by other federal and state regulations throughout its duration, the above-listed activities are highly congruent with the contextual definition of the acquisition process that was set forth in the overview and background section above. Built in 1944, the Alpha 5 Building 9201-5 played a historic role in helping bring an end to hostilities in World War II by its contributions to the Manhattan Project. In addition, the Alpha 5 facility was also instrumental in facilitating research into a wide array of chemical and radioactive materials thereafter, and the results of this research have likewise helped the United States retain its cutting edge lead in the arms race. Irrespective of the differing views about the impact of these activities, the research was consistent in showing that the Alpha 5 building was a critically important asset to the U.S. Department of Defense’s efforts to develop state-of-the-art military resources. In the final analysis, it is reasonable to conclude that the fact that the Alpha 5 project managers succeeded in completing this project on time and on budget was proof positive that the acquisition process followed by the Department of Energy was appropriate and effective… About DOE. (2020). U.S. Department of Energy. Retrieved from https://www.energy.gov/about-us, About Y-12. (2020). U.S. Department of Energy. Retrieved from https://www.y12.doe.gov/ about. Birchfield, J. W. & Albrecht, L. (2012). Successful characterization strategies for the active high risk Y-12 National Security Complex 9201-5 (Alpha-5) Facility, Oak Ridge, TN - 12164. United States. Recovery cleanup project at Y-12. (2010). U.S. Department of Energy. Retrieved from https://www.energy.gov/orem/articles/recovery-cleanup-project-y-12-leaves-alpha-5-empty-feeling. Standard review plan: Acquisition strategy review module. (2010, March). Washington, DC: Office of Environmental Management. Supplement analysis for the site-wide environmental impact statement for the Y-12 National Security Complex (DOE/EIS-0387-SA-02). (2018, May). U.S. Department of Energy. Retrieved from https://www.energy.gov/sites/prod/files/2018/05/f51/EIS-0387-SA02-2018_0.pdf. 
Teamwork successfully brings down the Alpha 5 Annex. (2018). U.S. Department of Energy. Retrieved from https://www.y12.doe.gov/news/blog/teamwork-successfully-brings-down-alpha-5-annex. Y-12 National Security Complex cleanup projects. (2020). U.S. Department of Energy. Retrieved from https://www.energy.gov/orem/downloads/y-12-national-security-complex-cleanup-projects.
Pruning and Canopy Management
Spotted Bear Vineyard's beautifully tidy rows are a result of diligent pruning throughout the year.
Grapevines can be challenging to manage on a large scale. It's a good idea to develop a canopy management plan for your vineyard before the growing season begins. While there are mechanized methods for accomplishing some canopy management goals in larger-scale vineyards, most grape growers in Montana function on a very small scale. For that reason, we will focus primarily on manual, hands-on canopy management techniques. Canopy management practices have three main objectives: 1) maximizing sunlight interception; 2) minimizing shading; and 3) balanced growth ("Care of Established Vineyards (PDF)," Minnesota Grape Growers Association; Dr. Richard Smart, South African Journal of Enology and Viticulture, Vol. 11, 1990). Several interventions made throughout the year will make harvest more efficient. Below are diagrams detailing the different parts of a grapevine and the associated terminology. Note that different growers may use different terminology for the parts of a grapevine (e.g. a "watersprout" is also known as a "lateral").
Above: Diagram of larger structural parts of grapevine, courtesy of University of Minnesota Extension.
Above: Diagram of the smaller parts of a grapevine's shoots, courtesy of University of California Division of Agriculture and Natural Resources (UCANR).
For established vines, dormant pruning is typically done in early spring, before growth starts (in Montana, this is usually sometime in March). The first step is to assess winter damage to the previous season's buds so that you can determine the extent of pruning needed on those canes. This is especially critical if the vines experienced temperatures that could have damaged the buds in the previous fall and winter. To do this, you can follow this protocol or something similar that is suited to your vineyard (a small worked example of the bud-injury arithmetic appears at the end of this page):
- Sample only mature vines, i.e. those with established cordons that will require pruning for crop load management. Sample at least 6-8 vines per cultivar, as different cultivars can have drastically different susceptibility to cold injury.
- Take sample cuttings from the fruiting zone of the dormant vines' shoots/canes. Collect two cane samples from each vine, sampling at least 6-8 vines per cultivar. Each cane sample should represent those that you would leave during dormant pruning (but try to choose canes that you do not plan to leave during pruning). Select pencil-sized canes with at least 8-10 buds located in the fruiting zone of the vine (near the cordon). The terminal sections of the canes (ends of canes that would not be kept after pruning) can be cut off.
- Bring samples indoors and leave for 24-48 hours. This gives the damaged buds a chance to warm up and desiccate, making it easier to assess bud injury.
- After 24-48 hours indoors, assess bud damage on all samples by using a razor blade to cut cross sections of each bud (see figures below). If the tissue is brown, the bud is dead; if green tissue is evident, the bud is probably healthy. For each vine, record the total number of buds sampled and the number of surviving (green) primary buds (the central bud).
- Around 11% bud injury is considered typical, and no change in pruning technique is necessary. Bud injury greater than 15% of sampled buds indicates higher-than-normal cold injury, and 1-2 extra buds should be left on a longer cane. If damage is near 50%, leave twice as many buds as originally planned.
If damage is greater than 50%, pruning should be minimized. The figures below show the specific cuts to make and visuals to expect for assessing bud survival (from "Assessing Grapevine Bud Damage," Joseph A. Fiola, Ph.D., University of Maryland Extension): Assessing Grapevine Bud Damage, Joseph A. Fiola, Ph.D. (University of Maryland Extension, Feb. 2018) Assessing Bud Injury and Adjusting Pruning (list of resources compiled by Cornell University) Bud Thinning and Suckering Help your vines concentrate resources (carbohydrates/sugars and water) to the fruiting zone, which (depending on your training system) is along your trellising wire(s). In early spring, remove any fruiting vegetative buds from anywhere else on the vine, i.e the trunk. Simply rubbing a finger over the bud should cause it to fall off (see images below for reference). Your vines will likely continue to produce more buds throughout the spring, so this process can be repeated during the early stages of the growing season. Suckers will continue to sprout up throughout the growing season, so unless you're trying to replace a cordon, you'll want to diligently remove those suckers as they appear. Suckers emerging at the base of the vine should also be removed. These can originate from the trunk or roots. It’s best to simply pull these off when small, as not shearing closely enough with pruners may leave vegetative buds behind. If done early in the season, removal of suckers and buds should take little effort. Replacing a trunk or cordon To prepare a new trunk, leave a sucker or two, and allow them to grow throughout the summer. Then choose the healthier option, and train it up the trellis alongside the existing trunk. To replace a cordon, leave a strategically-placed bud (ideally one that is in line with your trellis wire), and train it along your trellis throughout the season. Shoot and Cluster Thinning There are a number of different types of shoots that can emerge from different points along the cordon. In general, prune to about 4-6 shoots per foot of cordon, using the following strategies: - Prune so that spurs (the woody nodes out of which spurs grow) are spaced ~6 inches apart on the cordon. This creates better opportunities for gaps in the canopy through which sunlight can penetrate. - Retain an average of two fruiting shoots per spur. When choosing which shoots to keep, consider the quality of the shoot (how many clusters, showing healthy growth, etc.). If you have more than two healthy shoots, keep the shoots that are closest to the base of the spur, so as to prevent “spur creep,” or the lengthening of the spurs along the cordon. - Once your vines have reached fruit set, thin any tertiary clusters on shoots, keeping only two clusters on each shoot. Thin the least hardy-looking cluster, or the farthest cluster from the base of the shoot. This will help increase cluster quality by allowing the vine to focus its resources into fewer clusters. You may be tempted to leave as many clusters as possible for greater yield, but you may have lower yields at harvest. - As the season progresses, the mature canes (shoots) will start producing secondary shoots. Prune the first 3-5 (closest to the cordon) secondary shoots on each cane. This will open up the vine further, allowing better airflow and greater sun exposure for the clusters (helping to hasten the ripening process). 
Unless vegetation is getting out of control later in the season, you can retain secondary shoots growing farther down the cane, as these will provide extra leaf surface area for photosynthesis. An optional but advantageous step you can add to your canopy management plan is leaf pulling. This involves pulling any leaves at or between the terminal cluster and the base of the shoot. This will help to expose clusters to sunlight. Pull leaves after fruit set and while the berries are still small and green. Pulling leaves before fruit set can result in stunted photosynthesis, and pulling leaves too long after veraison can result in the vulnerable young berries becoming sunburned. Training, Tying, and Combing Shoots An important piece of a successful canopy management plan is regular training (i.e. combing) vines. Because they will continue to grow vigorously throughout the season, grapevines can quickly become unruly, even if cleaned up earlier in the season. You might find your vines sagging off of the trellising, growing into neighboring vines, bunching together on top of wires, and attaching themselves to irrigation, bird netting, and anything else in their reach. Vineyard vigor is encouraged by allowing sufficient sunlight to penetrate the canopy, which will facilitate fruit ripening and robust woody tissue development, and minimize over-shading, which can restrict airflow within the canopy and harbor diseases and pests. In addition to the pruning techniques described above, vines must be maintained periodically, yet canopy management can be overdone. Each shoot requires about 15 leaves to ripen the clusters. Once the shoots have reached this threshold, the tips can be trimmed if needed. Vineyard management tools Bamboo or Other Stakes Placing a stake (of bamboo or other material) is key to training vines vertically towards the top wire. Stakes should be secured to trellising and thin enough so that tape or clips can be secured both around the stake and vine. At WARC, we use a combination of bamboo poles and wire clips for stabilization. A tapener is an efficient tool for attaching new cordons to wires. While the tape is relatively weak and ineffective for attaching larger trunks and canes, it can help provide support in conjunction with clips. Clips and Ties Heavy-duty clips and/or ties can be used for securing thicker, heavier trunks and canes to trellising wires. With these tools, keeping a tidy canopy throughout the season is just a matter of persistence and consistency when employing the following techniques. A satisfyingly neat row of vines with "combed" shoots in a private vineyard managed by winemaking Larry Robertson. Training and “Tying Off” Check vines throughout the season, and secure the first and second years of growth to the trellis. A good strategy for protecting against wholescale winter die-back is to establish and train two separate trunks for each vine. Ideally, each trunk should originate below the lowest wire, as close to the ground as possible. If you choose to take this route, you will need to keep an eye on these newly established trunks throughout the season, taking care to secure new growth up the stake and eventually onto the wire using a combination of tape, clips, and whatever other ties you choose. You may also notice older, well-established cordons becoming heavier and sagging off of the wire. 
Add clips and tape as necessary to secure these cordons before the current season’s growth has fully come in, as it can add so much weight that it deforms the cordons or pulls them clear off of the wire. As your vines start putting on vigorous growth from their shoots, they have a tendency to clump together on top of the wire and creep their way into neighboring vines’ canopies. It is important to periodically go through and detangle or “comb” these shoots so that they hang down vertically. For shoots growing more horizontally that are intruding into their neighbors’ canopies, you may want to simply trim them back. Try to be as gentle as possible when combing shoots, so as not to break them off from the spur. Tendrils can become difficult to break or detach by hand, so clipping these with pruners can help the detangling process go more smoothly.
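Finally, to tie the numbers together, here is a minimal Python sketch of the bud-injury arithmetic from the dormant-pruning protocol at the top of this page. The 15% and 50% thresholds come from that protocol; the intermediate cutoff, the sample counts, and the function names are illustrative assumptions only.

```python
# Rough helper for the dormant-season bud-injury assessment described earlier:
# cut cross sections of sampled buds, count green (live) vs. brown (dead) primaries,
# then translate the injury percentage into a pruning adjustment.

def bud_injury_percent(total_buds_sampled, live_primary_buds):
    """Percent of sampled primary buds that were injured (brown)."""
    return 100.0 * (total_buds_sampled - live_primary_buds) / total_buds_sampled

def pruning_adjustment(injury_pct):
    """Map percent injury to the guidance in the protocol above.
    The 40% cutoff for 'near 50%' is an illustrative interpretation, not a fixed rule."""
    if injury_pct <= 15:
        return "Typical winter injury: no change to normal pruning."
    if injury_pct < 40:
        return "Above-normal injury: leave 1-2 extra buds on a longer cane."
    if injury_pct <= 50:
        return "Heavy injury: leave roughly twice as many buds as originally planned."
    return "Severe injury (>50%): minimize pruning this season."

if __name__ == "__main__":
    # Hypothetical tally for one cultivar: 8 vines x 2 canes x 10 buds sampled.
    total, alive = 160, 122
    pct = bud_injury_percent(total, alive)
    print(f"Primary bud injury: {pct:.0f}%")   # -> 24%
    print(pruning_adjustment(pct))             # -> above-normal injury guidance
```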
Far before the terms Native American or Indian were created, the tribes were spread all over the Americas. Before any white man set foot on this land, it was settled by the forefathers of bands we now call Sioux, or Cherokee, or Iroquois. For centuries, the American Indian developed its culture and legacy without interference. And that history is captivating. From Mayan and Incan ruins, from the mounds left in the central and southern regions of what’s today the U.S. we have learned quite a bit. It’s a tale of beautiful artwork and deep spirituality. Archaeologists have unearthed highly elaborate structures and public works. While there was inevitable tribal conflict, that was just a slight blemish in the experience of our ancestors. They were at peace with this beautiful continent and deeply connected to nature. The European Settler Arrives When European leaders dispatched the first ships in this direction, the aim was to explore new resources – but the quality of climate and the bounty of everything from wood to wildlife soon changed their tune. As those leaders learned from their explorers, the drive to colonize spread like wildfire. The English, French and Spanish rushed to slice up the “New World” by sending over inadequately prepared colonists as fast as possible. Initially, they skirmished with the surprised Indians of America’s eastern seaboard. But that soon gave way to trade, because the Europeans who arrived here understood their survival was doubtful without Indian help. Thus followed years of relative peace as the settlers got themselves established on American land. But the drive to push inland followed soon after. Kings and queens from thousands of miles away were restless to locate even more resources, and some colonists came for freedom and adventure. They needed more space. And so began the process of forcing the American Indian out of the way. It took the shape of cash payments, barter, and notoriously, treaties that were almost uniformly ignored after the Indians were pushed away from the land in question. The U.S. government’s policies towards Native Americans in the second half of the nineteenth century were determined by the desire to expand westward into areas occupied by these Native American tribes. By the 1850s almost all Native American tribes, approximately 360,000 in number, were living to the west of the Mississippi River. These American Indians, some from the Northwestern and Southeastern territories, were confined to Indian Territory located in contemporary Oklahoma, while the Kiowa and Comanche Native American tribes shared the land of the Southern Plains. The Sioux, Crows and Blackfeet dominated the Northern Plains. These Native American groups encountered hardship as the continuous flow of European immigrants into northeastern American cities delivered a stream of immigrants into the western lands already occupied by these various groups of Indians. Find Native American Indian Jewelry in North Baltimore, Ohio The early nineteenth century in the United States was marked by its continual expansion to the Mississippi River. However, due to the Gadsden purchase, that lead to U.S. control of the borderlands of southern New Mexico and Arizona in addition to the authority over Oregon country, Texas and California; America’s expansion would not end there. Between 1830 and 1860 the U.S. nearly doubled the amount of land within its control. 
These territorial gains coincided with the arrival of troves of European and Asian immigrants who wanted to join the surge of American settlers heading west. This, combined with the discovery of gold in 1849, presented alluring possibilities for those prepared to make the extended quest westward. Consequently, with the military's protection and the U.S. government's assistance, many settlers set about building their homesteads in the Great Plains and other parts of the Native American tribe-inhabited West.
Native American Tribes
Native American Policy can be defined as the laws, regulations, and procedures developed and adopted in the United States to define the relationship between Native American tribes and the federal government. When the United States initially became an independent nation, it adopted the European policies towards these local peoples, but over two centuries the U.S. designed its very own widely varying policies regarding the changing perspectives and necessities of Native American supervision. In 1824, in order to execute the U.S. government's Native American policies, Congress made a new bureau inside the War Department called the Bureau of Indian Affairs, which worked closely with the U.S. Army to enforce their policies. At times the federal government recognized the Indians as self-governing, separate political communities with varying cultural identities; however, at other times the government attempted to force the Native American tribes to abandon their cultural identity, hand over their land and assimilate into the American traditions. With the steady flow of settlers into Indian-controlled land, Eastern newspapers circulated sensationalized stories of savage native tribes carrying out widespread massacres of hundreds of white travelers. Although some settlers lost their lives to American Indian attacks, this was certainly not the norm; in fact, Native American tribes routinely helped settlers cross over the Plains. Not only did the American Indians sell wild game and other supplies to travelers, but they served as guides and messengers between wagon trains as well. Despite the good natures of the American Indians, settlers still feared the possibility of an attack. To calm these worries, in 1851 the U.S. government organized a conference with several local Indian tribes and established the Treaty of Fort Laramie. Under this treaty, each Native American tribe accepted a bounded territory, allowed the government to construct roadways and forts in this territory and pledged never to attack settlers; in return the federal government agreed to honor the boundaries of each tribe's territory and make payments to the Indians. The Native American tribes responded quietly to the treaty; in fact the Cheyenne, Sioux, Crow, Arapaho, Assiniboine, Mandan, Gros Ventre and Arikara tribes, who signed the treaty, even agreed to end the hostilities amongst their tribes in order to accept the conditions of the treaty. This peaceful accord between the U.S. government and the Native American tribes didn't hold long. After hearing reports of fertile land and great mineral wealth in the West, the government soon broke its pledge established in the Treaty of Fort Laramie by allowing thousands of non-Indians to flood into the region.
With so many newcomers heading west, the federal government established a plan of limiting Native Americans to reservations – modest areas of land within a group’s territory set aside exclusively for Indian use – in order to provide more land for non-Indian settlers. In a series of new treaties the U.S. government commanded Native Americans to surrender their land and migrate to reservations in exchange for protection from attacks by white settlers. In addition, the Indians were promised a yearly payment that would include cash as well as food, animals, household goods, and agricultural equipment. These reservations were established in an effort to pave the way for increased U.S. growth and administration in the West, as well as to keep the Native Americans isolated from the whites in order to lessen the chance for conflict.

History of the Plains Indians

These agreements had many problems. Most significantly, many of the native peoples did not altogether grasp the documents they were signing or the conditions within them; moreover, the treaties did not acknowledge the cultural norms of the Native Americans. In addition, the government institutions responsible for administering these policies were weighed down by poor management and corruption. In fact many treaty conditions were never fulfilled. The U.S. government almost never held up its side of the accords, even when the Native Americans migrated quietly to their reservations. Dishonest bureau agents sometimes sold off the supplies that were intended for the Indians on reservations to non-Indians. Moreover, as settlers demanded more territory in the West, the government repeatedly decreased the size of reservation lands. By this time, many of the Native American peoples were dissatisfied with the treaties and angered by settlers’ persistent demands for land.

Angered by the government’s dishonest and unfair policies, some Native American tribes, including bands of Cheyennes, Arapahos, Comanches, and Sioux, fought back. As they fought to maintain their territories and their tribes’ survival, more than one thousand skirmishes and battles broke out in the West between 1861 and 1891. In an effort to compel Native Americans onto the reservations and to end the violence, the U.S. government responded to these skirmishes with significant military operations. Clearly the U.S. government’s Indian policies were in need of a change.

Native American policy shifted radically after the Civil War. Reformers felt that the policy of pushing Native Americans onto reservations was too severe, while industrialists, who were concerned about land and resources, saw assimilation – the cultural absorption of the American Indians into “white America” – as the only long-term strategy for ensuring Native American survival. In 1871 the federal government passed a pivotal law stating that the United States would no longer treat Native American tribes as independent nations. This law signaled a major change in the government’s relationship with the native peoples – Congress now viewed the Native Americans not as nations outside of its jurisdictional control, but as wards of the government. By making Native Americans wards of the U.S. government, Congress believed that it would be easier to make the policy of assimilation a broadly recognized part of the cultural mainstream of America. Many U.S.
government officials perceived assimilation as the most effective remedy for what they viewed as “the Indian problem,” and the sole permanent method of guaranteeing U.S. interests in the West and the survival of the American Indians. In order to accomplish this, the government urged Native Americans to move out of their traditional dwellings, move into wooden houses, and become farmers. The federal government passed laws that forced Native Americans to abandon their traditional appearance and way of life. Some laws outlawed customary spiritual practices while others ordered Indian men to cut their long hair. Agents on more than two-thirds of American Indian reservations established tribunals to enforce federal policies that often prohibited traditional cultural and religious practices.

To advance the assimilation effort, the government established Indian schools that attempted to quickly and forcefully Americanize Indian youth. According to the founder of the Carlisle Indian School in Pennsylvania, the schools were created to “kill the Indian and save the man.” To achieve this objective, the schools forced pupils to speak only English, dress in proper American fashion, and replace their Indian names with more “American” ones. These new policies brought Native Americans closer to the end of their original tribal identity and the start of their life as citizens under the full control of the U.S. authorities.

Native American Treaties with the United States

In 1887, Congress approved the General Allotment Act, the most important element of the U.S. government’s assimilation program, which was designed to “civilize” American Indians by teaching them to be farmers. To accomplish this, Congress planned to establish private ownership of Indian land by dividing up the reservations, which were collectively held, and providing each family with its own plot of land. In addition, by confining the Native Americans to limited plots of land, the act freed the leftover land for purchase by western developers and settlers. The General Allotment Act, also known as the Dawes Act, required that the Indian lands be surveyed and that every family be awarded an allotment of between 80 and 160 acres, while unmarried adults received between 40 and 80 acres; the rest of the acreage was to be sold. Congress hoped that the Dawes Act would break up Indian tribes and encourage individual enterprise, while trimming the expense of Indian supervision and producing prime property to be sold to white settlers.

The Dawes Act turned out to be disastrous for the American Indians; over the next decades they lived under policies that outlawed their traditional way of life and yet did not provide the crucial resources to support their businesses and families. Dividing the reservations into small parcels of land caused a significant reduction of Indian-owned land. Within three decades, the tribes had lost in excess of two-thirds of the acreage that they had controlled before the Dawes Act was enacted in 1887; the majority of the remaining land was sold to white settlers. Often, Native Americans were cheated out of their allotments or were forced to sell their property in order to pay bills and take care of their families. As a result, the Indians were not “Americanized” and were generally unable to become self-supporting farmers or ranchers, as the makers of the policy had hoped. Further, it produced anger among Indians toward the U.S.
government, as the allotment practice sometimes ruined land that was the spiritual and social center of their activities.

Native American Culture

Between 1850 and 1900, life for Native Americans changed substantially. Through U.S. government regulations, American Indians were forced from their homes as their native lands were parceled out. The Plains, which they had previously roamed alone, were now filled with white settlers.

The Upshot of the Indian Wars

Over these years the Indians had been defrauded of their property, food, and way of life, as the federal government’s Indian policies pushed them onto reservations and tried to “Americanize” them. Many American Indian bands did not survive relocation, assimilation, and military defeat; by 1890 the Native American population had been reduced to under 250,000 people. As a result of generations of discriminatory and ruthless policies instituted by the United States government between 1850 and 1900, life for the American Indians was changed forever.
A brief history of Buddhism

Throughout history, tragic events have sparked an increasing interest in religion. There is a direct correlation between massive injury, illness, and high death tolls and an urgent need to seek out a higher power—in order to comfort the soul and provide a sense of security in an otherwise chaotic world. In a very simple sense, people are on a journey to discover the meaning and purpose of life. Buddhism is an interesting and peaceful religion that offers philosophies on how to attain an understanding of life and the universe.

Buddhism began about 2500 years ago (Fouberg, Murphy, and de Blij 212). Legend has it that when Prince Siddhartha Gautama was born, in the sixth century B.C.E., an old sage foretold that the prince would become either an ascetic or a supreme monarch. In order to ensure that he became the future leader and a great warrior, the King decided to isolate Prince Gautama within the confines of the palace, living a life of ultimate luxury. The Prince married and had a son. At 29, he finally made it out of the palace and discovered aging, illness, and death for the very first time. On his journey, he met a homeless man, who smiled at him with a sense of contentment and peace. Confused, he asked how this man, with so very little, could find happiness in such a miserable world. He was told that the homeless man had met with a holy man, someone who had achieved complete liberation. This trip was shocking and dramatically changed the Prince’s outlook. He renounced his luxurious lifestyle, believing it to be an empty existence. He embarked on a spiritual quest, determined to find a solution to human suffering (Toropov and Buckles 200).

After years of ascetic living and intense study, Gautama came to sit under a Bodhi tree, vowing to stay there and meditate until becoming fully liberated. “On the morning of the seventh day, he opened his eyes and looked out to the morning star. At that moment, he became enlightened” (Toropov and Buckles 201). From this moment, Gautama became known as the Buddha, which means enlightened one (Fouberg, Murphy, and de Blij 213); today he is known as Buddha Tathagata, which means “he who has gone through completely” (Toropov and Buckles 200); in China, and some parts of the Eastern world, he is referred to as Buddha Shakyamuni (Kung 4).

Five of the Buddha’s companions met with him in the Deer Park, in Varanasi, a city in Northeastern India on the Ganges. Here the Buddha delivered his speech on the Four Noble Truths: “There is suffering. There is the origin of suffering. There is the cessation of suffering. There is the path out of suffering” (Sumedho 8). These Truths, along with the Eightfold Path, comprise the basic doctrines of Buddhism. “A Noble Truth is a truth to reflect upon; it is not an absolute; it is not the Absolute” (Sumedho 13).

The First Noble Truth, ‘There is suffering,’ is a basic insight, requiring only recognition. It’s important to remove any personal feelings and attachments to this insight; the suffering is not ‘mine,’ but rather the insight is made as a reflection: “there is this suffering, this dukkha (suffering). It is coming from the reflective position of Buddha seeing the Dhamma (truth)” (Sumedho 9). Human existence is inherently painful, and this dukkha is the common bond that unites all human beings. It is also important to note that a first step in accepting and admitting that ‘there is suffering’ is to let go of the psychological ego.
“Admission in Buddhist meditation is not from a position of: ‘I am suffering’ but rather, ‘There is the presence of suffering’ because we are not trying to identify with the problem but simply acknowledge that there is one” (Sumedho 15). The Buddha said, “What is the Noble Truth of Suffering? Birth is suffering, aging is suffering, sickness is suffering, dissociation from the loved is suffering: in short the five categories affected by clinging are suffering” (Samyutta Nikaya LVI, 11).

Within the Second Noble Truth, ‘There is origin of suffering,’ the Buddha taught that suffering arises from three types of desire, craving, and attachment. The three types of desire include: the desire for sensual pleasure, the desire to become, and desire to get rid of. In a very simple sense, suffering originates from attachment to desire—grasping at desire (Sumedho 27-30). Again, the Truth is a reflection, requiring the acknowledgement of desire, rather than identifying with it. The Buddha said, “What is the Noble Truth of the Origin of Suffering? It is craving which renews being and is accompanied by relish and lust, relishing this and that: In other words, craving for sensual desires, craving for being, craving for non-being. But whereon does this craving arise and flourish? Wherever there is what seems lovable and gratifying, thereon it arises and flourishes” (Samyutta Nikaya LVI, 11).

The Third Noble Truth is ‘There is cessation of suffering.’ “The whole aim of Buddhist teaching is to develop the reflective mind in order to let go of the delusions” (Sumedho 36). It’s about contemplating versus forming opinions, reflections versus belief. This mental state is vital to eliminating suffering. A principle of the Dhamma that the Buddha stressed was: “Each thing arises from a cause. We must know the cause of that thing and the ceasing of the cause of that thing” (Bhikkhu 4). In a very simple sense, once a person has transcended craving and attachment, he or she enters the state of Nirvana, and suffering ceases (Toropov and Buckles 203). The Buddha said, “What is the Noble Truth of the Cessation of Suffering? It is the remainderless fading and cessation of that same craving; the rejecting, relinquishing, leaving, and renouncing of it. But whereon is this craving abandoned and made to cease? Wherever there is what seems lovable and gratifying, thereon it is abandoned and made to cease” (Samyutta Nikaya LVI, 11).

The Fourth Noble Truth, ‘There is the path out of suffering,’ teaches the Eightfold Path. The elements are grouped together in a sequence with three sections:
1. Wisdom – Right Understanding; Right Aspiration. Reflecting on the previous three insights and accepting things for how they are; “When there is Right Understanding, we aspire for truth, beauty, and goodness” (Sumedho 54-55).
2. Morality – Right Speech; Right Action; Right Livelihood. “Taking responsibility for our speech and being careful about what we do with our bodies” (Sumedho 57).
3. Concentration – Right Effort; Right Mindfulness; Right Concentration. These three elements refer to the spirit and the heart, free from selfishness. “When the heart is pure, the mind is peaceful... Reflectiveness of mind or emotional balance is developed as a result of practicing concentration and mindfulness meditation” (Sumedho 48, 60).
The Buddha taught that this is the path one must take to reach Nirvana: the state of final liberation from the cycle of birth and death. “This immortal element is the cessation of the mortal element” (Bhikkhu 36).
It is worth noting that the Four Noble Truths are an original set of founding ideas that have never been used as a justification for acts of violence, war, or military exploits. In addition to the above Truths, the Buddha taught several other principles: walking the Middle Way, striking a balance to create optimal conditions for study and practice, and success in eliminating suffering; self-help, not relying on fortunes and fame, God, or celestial beings, each person making their own effort – self is the refuge of self; avoid evil, do good, purify the mind; and nothing is permanent, “All compounded things are perpetually flowing, forever breaking up. Let all be well-equipped with heedfulness” (Bhikkhu 5). Moreover, the Buddha taught that “to believe straight away is foolishness; to believe after having seen clearly is good sense” (Bhikkhu 19). One must not whole-heartedly accept what information is offered without first fully contemplating, considering, examining, and analyzing the information for oneself. Anyone has the ability to seek out understanding, follow the Eightfold Path, and achieve enlightenment. The Buddha continued teaching until his death at age 80. His final words say it best: “All composite things decay. Diligently work out your salvation” (Toropov and Buckles 206).

The Diffusion of Buddhism

The hearth of Buddhism lies in Northeastern India, in the city on the Ganges River where the Buddha achieved enlightenment and delivered his first lecture on the Dhamma. For the next 45 years, he and his followers traveled through the North Indian River Plains, Eastern India, parts of modern-day Pakistan, Southern Nepal, and areas of Bangladesh, “teaching, ordaining monks, and promoting the solitary, secluded style of spiritual discipline” (Toropov and Buckles 205). After his death, the new religion grew slowly. The Buddha had not named a formal successor, and three months after his death a first council was convened at Rajagrha to uphold the Buddha’s final wishes: not to follow a leader, but to follow the teachings and spread the Buddhist wisdom. One hundred years later, a second council was convened at Vaishali to settle growing disputes about the teachings. In the third century B.C.E., Emperor Asoka, leader of the large and powerful Maurya Empire, converted to Buddhism. He ruled his empire in accordance with the Buddhist teachings and sent missionaries to distant lands to spread the teachings (Fouberg, Murphy, and de Blij 213). A third council was convened by the emperor and led by a monk, Mahakasyapa, to settle growing differences among followers and determine a direction for the faith. As a result, 18 schools of Buddhism emerged and were acknowledged. Of these 18, only the Theravada school exists today (Toropov and Buckles 208). The Theravada school considers the Buddha’s teachings to be the most sacred and emphasizes the solitary life of personal religious discipline. Around 100 B.C.E., a contrasting view of Buddhism, Mahayana, was presented by Nagarjuna, a revered philosopher. One of his most important principles was that Nirvana is a reality that can exist in the present moment (Toropov and Buckles 209-211). Buddhism continued to spread through Southeast Asia, Sri Lanka, and China via trade and sea routes. During the second century B.C.E., sculptures of the Buddha’s life and teachings appeared, and in the first century C.E., Buddhist art took on anthropomorphic representations. Chinese monks brought Buddhism to Korea around 372 C.E.
From Korea, Buddhism spread to Japan, where it became the official state religion in the eighth century C.E. Buddhism also diffused into Tibet, Laos, Cambodia, Taiwan, and Thailand. Buddhism didn’t reach the West until the 18th century. As people traveled to colonies in the East, they encountered these new philosophies and brought back Buddhist texts. European scholars began to study them, and in the 19th century these texts were translated into a few European languages. This allowed more people to study them and write out their own reflections on the topics. By the 20th century, the entire Theravada scriptures and the important Mahayana texts had been translated into English, French, and German. In 1924, The Buddhist Society was established in London, helping to grow interest in Buddhism through meditation sessions, lectures, and the circulation of Buddhist literature (BuddhaNet).

Chinese and Japanese immigrants helped spread Buddhism through North America. At the end of the 19th century, two revered Buddhist spokesmen attended the World Parliament of Religions in Chicago. Dharmapala, from Sri Lanka, and Soyen Shaku, a Zen Master from Japan, gave inspiring lectures that impressed their audience. They helped to establish interest in Theravada and Zen Buddhism in the United States (BuddhaNet). After WWII, interest in Buddhism in the United States grew. Tibetan refugees brought Vajrayana Buddhist traditions to America, and during the 1960s Zen Buddhism traditions became popular. Today, a number of Buddhist centers and societies exist across Australia, New Zealand, Europe, and North and South America.

Behl, Benoy K. "The Great Buddhist Diffusion." Buddhist Channel. The Buddhist Channel, 17 Dec. 2008. Web. 01 Dec. 2012.
Bhikkhu, Buddhadasa. "Buddha Dhamma for University Students." (n.d.): n. pag. Buddhanet. Web. 2 Nov. 2012.
"Buddhism Timeline." Timeline of Buddhism. Religion Facts, 17 Mar. 2004. Web. 01 Dec. 2012. <http://www.religionfacts.com/buddhism/timeline.htm>.
"The Buddhist World: Spread of Buddhism to the West." The Buddhist World: Spread of Buddhism to the West. BuddhaNet, 2008. Web. 01 Dec. 2012.
Fouberg, Erin H., Alexander B. Murphy, and H.J. De Blij. Human Geography: People, Place, Culture. 10th ed. Hoboken: Wiley & Sons, 2012. Print.
Kung, Chin. "The Art of Living Part 1 and 2." (n.d.): n. pag. Buddhanet. Web. 2 Nov. 2012. <http://www.buddhistelibrary.org/en/displayimage.php?pid=69>.
Kung, Chin. "Buddhism as an Education." (n.d.): n. pag. Buddhanet. Web. 2 Nov. 2012. <http://www.buddhistelibrary.org/en/displayimage.php?pid=71>.
Sumedho, Ajahn. "The Four Noble Truths." (n.d.): n. pag. Buddhanet. Web. 2 Nov. 2012. <http://www.buddhistelibrary.org/en/displayimage.php?pid=43>.
Toropov, Brandon, and Luke Buckles. The Complete Idiot's Guide to World Religions. 3rd ed. New York: Beach Brook Productions, 2004. Print.
African Americans and the Vote

The year 2020 marks the centennial of the Nineteenth Amendment and the culmination of the women’s suffrage movement. The year 2020 also marks the sesquicentennial of the Fifteenth Amendment (1870) and the right of black men to the ballot after the Civil War. The theme speaks, therefore, to the ongoing struggle on the part of both black men and black women for the right to vote. This theme has a rich and long history, which begins at the turn of the nineteenth century, i.e., in the era of the Early Republic, with the states’ passage of laws that democratized the vote for white men while disfranchising free black men. Thus, even before the Civil War, black men petitioned their legislatures and the US Congress, seeking to be recognized as voters. Tensions between abolitionists and women’s suffragists first surfaced in the aftermath of the Civil War, while black disfranchisement laws in the late nineteenth and early twentieth centuries undermined the guarantees in the Fourteenth and Fifteenth Amendments for the great majority of southern blacks until the Voting Rights Act of 1965. The important contribution of black suffragists occurred not only within the larger women’s movement, but within the larger black voting rights movement. Through voting-rights campaigns and legal suits from the turn of the twentieth century to the mid-1960s, African Americans made their voices heard as to the importance of the vote. Indeed the fight for black voting rights continues in the courts today. The theme of the vote should also include the rise of black elected and appointed officials at the local and national levels, campaigns for equal rights legislation, as well as the role of blacks in traditional and alternative political parties.

Please use this guide to find resources on the 2020 Black History Month theme, African Americans and the Vote, and to find various events to celebrate Black History Month at TTC. Visit the different TTC Library campuses to view the Black History Month displays and to find resources on the 2020 Black History Month theme.

Thornley Campus Library
Palmer Campus Library

Conflict

This in-depth examination looks at African American women's navigation of the interlocking obstacles of race and gender specifically within the political arena. Conflict: African American Women and the New Dilemma of Race and Gender Politics offers a provocative examination of an increasingly important voting bloc, one that impacted the 2008 election and whose loyalties will have far-reaching implications for future contests. This fascinating study is three-pronged. It explores the conflicts African American women experience in prioritizing race over gender, offers data-backed analysis of the substantial power of this bloc to influence elections, and looks at the ways in which the very existence of that influence impacts the political and social empowerment of this dual-identity population. As background to the present-day story, the book surveys the history of African American females in elective office in the United States, as well as their roles in the Women's Suffrage and Civil Rights movements.
The first work to undertake a study of African American women in this expansive political context, this important volume will help readers assess where African American women have been, where they are now, and what their roles might be in the future.
* Quotes from African American women about their dual-identity conflicts
* A bibliography
Publication Date: 2012-07-20

Stolen Justice by Lawrence Goldstone

A thrilling and incisive examination of the post-Reconstruction era struggle for and suppression of African American voting rights in the United States. Following the Civil War, the Reconstruction era raised a new question for those in power in the US: Should African Americans, so many of them former slaves, be granted the right to vote? In a bitter partisan fight over the legislature and Constitution, the answer eventually became yes, though only after two constitutional amendments, two Reconstruction Acts, two Civil Rights Acts, three Enforcement Acts, the impeachment of a president, and an army of occupation. Yet, even that was not enough to ensure that African American voices would be heard, or their lives protected. White supremacists loudly and intentionally prevented black Americans from voting – and they were willing to kill to do so. In this vivid portrait of the systematic suppression of the African American vote for young adults, critically acclaimed author Lawrence Goldstone traces the injustices of the post-Reconstruction era through the eyes of incredible individuals, both heroic and barbaric, and examines the legal cases that made the Supreme Court a partner of white supremacists in the rise of Jim Crow. Though this is a story of America's past, Goldstone brilliantly draws direct links to today's creeping threats to suffrage in this important and, alas, timely book.
Call Number: JK1924 .G65 2020 Publication Date: 2020-01-07

The Unfinished Agenda of the Selma-Montgomery Voting Rights March

WHY A 56-MILE WALK FOR FREEDOM IN 1965 STILL CHALLENGES AMERICA TODAY. THE VOTING RIGHTS ACT OF 1965 WAS THE CROWNING ACHIEVEMENT OF THE CIVIL RIGHTS MOVEMENT, FOREVER CHANGING POLITICS IN AMERICA. NOW, FOR THE FIRST TIME, VOICES OF THE ERA, ALONG WITH SOME OF TODAY'S MOST INFLUENTIAL WRITERS, SCHOLARS, AND SOCIAL ACTIVISTS, COMMEMORATE THE STRUGGLE AND EXAMINE WHY THE BATTLE MUST STILL BE WON. "One of the difficult lessons we have learned is that you cannot depend on American institutions to function without pressure. Any real change in the status quo depends on continued creative action to sharpen the conscience of the nation." --MARTIN LUTHER KING JR. "As long as half our eligible voters exercise the right that so many in Selma marched and died for, we've got a very long bridge to cross." --BILL CLINTON "I would hope that students today can learn from Selma to acquire a better understanding of how oppressed people with limited resources can free themselves and make the world better." --CLAYBORNE CARSON, STANFORD UNIVERSITY
Call Number: F334 .S4 U54 2005 Publication Date: 2005-02-01

The Voting Rights Act of 1965

Provides a detailed account of the events that led to the Voting Rights Act of 1965. Explores both the racial discrimination and violence that pervaded the South and the civil rights protests that changed American voting rights. Includes a narrative overview, biographical profiles, primary source documents, and other helpful features.
Call Number: JK1924 .H55 2009 Publication Date: 2008-12-10

The Woman's Hour by Elaine Weiss

"Both a page-turning drama and an inspiration for every reader" --Hillary Rodham Clinton. Soon to Be a Major Television Event. The nail-biting climax of one of the greatest political battles in American history: the ratification of the constitutional amendment that granted women the right to vote. "With a skill reminiscent of Robert Caro, [Weiss] turns the potentially dry stuff of legislative give-and-take into a drama of courage and cowardice." --The Wall Street Journal "Weiss is a clear and genial guide with an ear for telling language ... She also shows a superb sense of detail, and it's the deliciousness of her details that suggests certain individuals warrant entire novels of their own... Weiss's thoroughness is one of the book's great strengths. So vividly had she depicted events that by the climactic vote (spoiler alert: the amendment was ratified!), I got goose bumps." --Curtis Sittenfeld, The New York Times Book Review. Nashville, August 1920. Thirty-five states have ratified the Nineteenth Amendment, twelve have rejected or refused to vote, and one last state is needed. It all comes down to Tennessee, the moment of truth for the suffragists, after a seven-decade crusade. The opposing forces include politicians with careers at stake, liquor companies, railroad magnates, and a lot of racists who don't want black women voting. And then there are the "Antis" – women who oppose their own enfranchisement, fearing suffrage will bring about the moral collapse of the nation. They all converge in a boiling hot summer for a vicious face-off replete with dirty tricks, betrayals and bribes, bigotry, Jack Daniel's, and the Bible. Following a handful of remarkable women who led their respective forces into battle, along with appearances by Woodrow Wilson, Warren Harding, Frederick Douglass, and Eleanor Roosevelt, The Woman's Hour is an inspiring story of activists winning their own freedom in one of the last campaigns forged in the shadow of the Civil War, and the beginning of the great twentieth-century battles for civil rights.
Call Number: JK1911.T2 W45 2018 Publication Date: 2018-03-06

Jim Crow Capital by Mary-Elizabeth B. Murphy

Local policy in the nation's capital has always influenced national politics. During Reconstruction, black Washingtonians were first to exercise their new franchise. But when congressmen abolished local governance in the 1870s, they set the precedent for southern disfranchisement. In the aftermath of this process, memories of voting and citizenship rights inspired a new generation of Washingtonians to restore local government in their city and lay the foundation for black equality across the nation. And women were at the forefront of this effort. Here Mary-Elizabeth B. Murphy tells the story of how African American women in D.C. transformed civil rights politics in their freedom struggles between 1920 and 1945. Even though no resident of the nation's capital could vote, black women seized on their conspicuous location to testify in Congress, lobby politicians, and stage protests to secure racial justice, both in Washington and across the nation. Women crafted a broad vision of citizenship rights that put economic justice, physical safety, and legal equality at the forefront of their political campaigns. Black women's civil rights tactics and victories in Washington, D.C., shaped the national postwar black freedom struggle in ways that still resonate today.
Publication Date: 2018-11-19

The Shadow of Selma

The Shadow of Selma evaluates the 1965 civil rights campaign in Selma, Alabama, the historical memory of the campaign's marches, and the continuing relevance of and challenges to the Voting Rights Act. The contributors present Selma not just as a keystone event but, much like Ferguson today, as a transformative place: a supposedly unimportant location that became the focal point of epochal historical events. By shifting the focus from leaders like Martin Luther King Jr. to the thousands of unheralded people who crossed the Edmund Pettus Bridge – and the networks that undergirded and opposed them – this innovative volume considers the campaign's long-term impact and its place in history. The volume recalls the historical currents that surrounded Selma, discussing grassroots activism, the role of President Lyndon B. Johnson during the struggle for the Voting Rights Act, and the political reaction to Selma at home and abroad. Using Ava DuVernay's 2014 Hollywood film as a stepping stone, the editors bring together various essays that address the ways media – from television and newspaper coverage to "race beat" journalism – represented and reconfigured Selma. The contributors underline the power of misrepresentation in shaping popular memory and in fueling a redemptive narrative that glosses over ongoing racial problems. Finally, the volume traces the fifty-year legacy of the Voting Rights Act. It reveals the many subtle and overt methods by which opponents of racial equality attempted to undo the act's provisions, with a particular focus on the 2013 Shelby County v. Holder decision that eliminated sections of the act designed to prevent discrimination. Taken together, the essays urge readers not to be blind to forms of discrimination and injustice that continue to shape inequalities in the United States. They remind us that while today's obstacles to racial equality may look different from a literacy test or a grim-faced Alabama state trooper, they are no less real.
Contributors: Alma Jean Billingslea Brown | Ben Houston | Peter Ling | Mark McLay | Tony Badger | Clive Webb | Aniko Bodroghkozy | Mark Walmsley | George Lewis | Megan Hunt | Devin Fergus | Barbara Harris Combs | Lynn Mie Itagaki
Publication Date: 2018-02-27

Suffrage Reconstructed by Laura E. Free

The Fourteenth Amendment, ratified on July 9, 1868, identified all legitimate voters as "male." In so doing, it added gender-specific language to the U.S. Constitution for the first time. Suffrage Reconstructed considers how and why the amendment's authors made this decision. Vividly detailing congressional floor bickering and activist campaigning, Laura E. Free takes readers into the pre- and postwar fights over precisely who should have the right to vote. Free demonstrates that all men, black and white, were the ultimate victors of these fights, as gender became the single most important marker of voting rights during Reconstruction. Free argues that the Fourteenth Amendment's language was shaped by three key groups: African American activists who used ideas about manhood to claim black men's right to the ballot, postwar congressmen who sought to justify enfranchising southern black men, and women's rights advocates who began to petition Congress for the ballot for the first time as the Amendment was being drafted.
To prevent women's inadvertent enfranchisement, and to incorporate formerly disfranchised black men into the voting polity, the Fourteenth Amendment's congressional authors turned to gender to define the new American voter. Faced with this exclusion, some woman suffragists, most notably Elizabeth Cady Stanton, turned to rhetorical racism in order to mount a campaign against sex as a determinant of one's capacity to vote. Stanton's actions caused a rift with Frederick Douglass and a schism in the fledgling woman suffrage movement. By integrating gender analysis and political history, Suffrage Reconstructed offers a new interpretation of the Civil War-era remaking of American democracy, placing African American activists and women's rights advocates at the heart of nineteenth-century American conversations about public policy, civil rights, and the franchise.
Publication Date: 2015-11-06

This Bright Light of Ours by Maria Gitin

This Bright Light of Ours offers a tightly focused insider's view of the community-based activism that was the heart of the civil rights movement. A celebration of grassroots heroes, this book details through first-person accounts the contributions of ordinary people who formed the nonviolent army that won the fight for voting rights. Combining memoir and oral history, Maria Gitin fills a vital gap in civil rights history by focusing on the neglected Freedom Summer of 1965 when hundreds of college students joined forces with local black leaders to register thousands of new black voters in the rural South. Gitin was an idealistic nineteen-year-old college freshman from a small farming community north of San Francisco who felt called to action when she saw televised images of brutal attacks on peaceful demonstrators during Bloody Sunday, in Selma, Alabama. Atypical among white civil rights volunteers, Gitin came from a rural low-income family. She raised funds to attend an intensive orientation in Atlanta featuring now-legendary civil rights leaders. Her detailed letters include the first narrative account of this orientation and the only in-depth field report from a teenage Summer Community Organization and Political Education (SCOPE) project participant. Gitin details the dangerous life of civil rights activists in Wilcox County, Alabama, where she was assigned. She tells of threats and arrests, but also of forming deep friendships and of falling in love. More than four decades later, Gitin returned to Wilcox County to revisit the people and places that she could never forget and to discover their views of the "outside agitators" who had come to their community. Through conversational interviews with more than fifty Wilcox County residents and former civil rights workers, she has created a channel for the voices of these unheralded heroes who formed the backbone of the civil rights movement.
Publication Date: 2014-02-11

Voting Hopes or Fears? by Keith Reeves

When President Lyndon B. Johnson signed the 1965 Voting Rights Act, he explained that it flowed from "a clear and simple wrong." But a generation later, whites still remain resistant to the election of blacks to public office. That widespread resistance, Keith Reeves illustrates, can be explained in large part by election campaign appeals to whites' racial fears and sentiments. Based on empirical research examining white voters' attitudes towards black candidates and racial framing of campaign news coverage, Voting Hopes or Fears?
explosively documents that racial discrimination against black candidates is contemporary, specific, and identifiable. Reeves concludes by outlining possible remedies such as modified at-large voting systems and by defending the practice of race-conscious legislative districting, now under attack by the Supreme Court. Marshaling startling evidence of voting discrimination against black candidates on account of race, and featuring a Foreword by The Honorable A. Leon Higginbotham Jr., Chief Justice Emeritus of the US Court of Appeals, Voting Hopes or Fears? will be mandatory reading for political and social scientists, scholars of racism and African-American Studies, civil rights litigators, journalists, black lawmakers and office-seekers, and general readers interested in the subject of race and politics in American society.
Publication Date: 1997-10-23
The Consolidation Act of 1854 extended Philadelphia’s territory from the two-square-mile “city proper” founded by William Penn to nearly 130 square miles, making the municipal borders coterminous with Philadelphia County and turning the metropolis into the largest in extent in the nation, a position it held until Chicago leapt ahead in 1889. Consolidation’s supporters believed the measure would enable municipal authorities to deal with the epidemics of riot and disease that ravaged the city in the 1830s and 1840s, while giving them the power and dignity to challenge for metropolitan supremacy. Although the bid to overtake New York as the first city failed, the 1854 act led to some impressive civic achievements. Since its passage, the city’s boundaries have barely changed, and despite charter revisions in 1887 and 1951, contemporary Philadelphia still bears the imprint of the mid-nineteenth-century measure.

Until 1854, Philadelphia’s population concentrated within William Penn’s original city boundaries, between the Delaware and Schuylkill rivers and from what is now South Street to Vine. By 1820, however, inhabitants in the independent boroughs, districts, and townships that made up the rest of the county already outnumbered those in the city proper. Some of these suburbs were places of significance in their own right, with Spring Garden, the Northern Liberties, and Kensington, all north of the city center, ranking as the ninth, eleventh, and twelfth biggest urban settlements in the nation in the 1850 census. These districts, in common with their neighbors, had won from the Commonwealth the right to establish their own local governments, with powers to tax, borrow, and spend, and thus remained independent of Philadelphia City’s control. While they varied in their social and political character, they tended to be poorer and more Democratic than the historic center, which they sometimes referred to as the “Whig Gibraltar.”

The first organized calls for uniting the built-up portions of the county under one municipal authority came in response to two major riots in 1844. The anti-Catholic violence, which broke out in the northern suburb of Kensington and the southern district of Southwark–both neighborhoods in which Irish immigrants and native-born Protestants lived in close proximity–exposed the inadequacy of the prevailing system of law enforcement. With no uniformed officers in the county, and every jurisdiction responsible for its own policing, there was little to prevent violence from escalating. It took state militia armed with cannon to suppress the Southwark disturbance. Soon after the riots, the Public Ledger called for annexing the built-up outlying districts, and in November, citizens gathered at the County Court House (Congress Hall) to make the case for enlarging the city boundaries.

Opposition to a New Charter

The move for a new charter over the winter of 1844-5, however, came to very little. A bill was drawn up for consideration by the Commonwealth–which then, as now, held the power to create, alter, and destroy local government–but influential owners of property and city debt like Horace Binney (1780-1875) organized to oppose the proposal. Critics feared that consolidation would hand the keys of the Whig city to suburban Democrats, and that real estate owners in the prosperous city proper would be taxed to pay the interest on loans taken out by indebted outlying districts, which needed to borrow to maintain their rapid growth.
The opponents of consolidation lobbied for legislation that would maintain the districts’ independence yet still address the issue of civil disorder by requiring that all built-up portions of Philadelphia retain one policeman for every 150 taxable inhabitants. This measure failed to prevent another major riot in 1849, which sparked renewed calls for annexation. While this time the proposal enjoyed more support from the city’s merchants, manufacturers, and professionals, it failed once again in the state capital. Instead of consolidation, Harrisburg legislators established a police force under an elected marshal to deal with disorder across the built-up sections of the metropolis. The Marshal’s Police proved relatively successful in maintaining the peace, and despite endemic fighting among rival companies of volunteer firemen and street gangs, there were no major riots from 1850 to the eventual passage of the Consolidation Act in 1854.

Calls for metropolitan union nevertheless grew louder, despite the relative calm of the early 1850s. By then, municipal reformers hoped to do more than inoculate the city against the violence of the preceding decades. Many saw the district system as unnecessarily costly, as dozens of jurisdictions duplicated services that could have been provided more efficiently by a single government. Others feared that the city proper might become “an appendage to her own colonies,” as growth in industrial districts like Spring Garden and Kensington outpaced the historic center. Some no longer saw those suburbs as a financial burden, but rather as a potential source of tax revenue, because heavy investment in the Pennsylvania Railroad after its chartering in 1846 had left the city proper far more heavily indebted than its neighbors. Real estate owners in central Philadelphia complained that suburban property holders benefited from the trade that resulted from the rail link to Pittsburgh but had contributed little in the way of public funds to the railroad’s construction.

Rivalry With New York

Perhaps most importantly, though, supporters of consolidation believed that only a united Philadelphia would have the power and status to overtake New York in the struggle for metropolitan supremacy, a race the city had languished in for at least three decades as the completion of the Erie Canal (1825) and Chestnut Street’s decline as a financial center after Andrew Jackson’s attack on the Second Bank of the United States enabled Manhattan to pull ahead. As North and South clashed over the question of slavery extension, advocates of annexation for Philadelphia readily adopted the rallying cry “In Union There Is Strength” for their own cause. In the early 1850s both of the dominant political parties, the Whigs and the Democrats, promised to back annexation, but in Harrisburg, proposals for charter revision went nowhere. To break the impasse supporters of the measure–prodded by their erstwhile opponent Binney–decided in 1853 to nominate their own slate of candidates for the Pennsylvania Assembly and Senate. In alliance with advocates of a professional fire department, they put forward a mixture of independents and regular Whig and Democratic party nominees. At the head of the ticket was Eli Kirk Price (1797-1884), a progressive real estate attorney, while the wealthy locomotive builder Matthias W. Baldwin (1795-1866) was among the candidates for the lower house.
Most of the consolidation slate triumphed, and before Price went off to take his seat in the Senate, an Executive Consolidation Committee met in Philadelphia to draft a bill. The Executive Consolidation Committee that convened in the Board of Trade rooms at the Merchants’ Exchange over the winter of 1853-54 represented a cross-section of Philadelphia’s economic elite. Many owned substantial real estate beyond the historic corporate boundaries, and by proposing to annex the entire county rather than just the much smaller built-up environs of the city proper, they went much further than their predecessors. Despite murmurs of protest from rural districts, the charter passed both houses and was signed into law in February. The new metropolis, encompassing industrial suburbs, romantic rural retreats, and vast stretches of farmland, came into being four months later.

Architects of the 1854 charter saw it as a victory over the self-interested politicians of the district system and the triumph of a rational, modern government over an antiquated predecessor. Executive power was vested in a mayor elected at-large for a two-year term, and voters chose the nativist playwright Robert T. Conrad as the first to hold the office. In place of the old boundaries on the county map, meanwhile, twenty-four wards sent representatives to the Common and Select Councils. Ward representation preserved an element of localism in the councils–something party politicians quickly learned to exploit–but the financial muscle and territorial reach of the enlarged city enabled urban planning on a far greater scale than previously had been possible.

Preserving Open Spaces

The Consolidation Act resulted in other important changes for newly expanded Philadelphia. Among them, the legislation gave municipal authorities the duty to preserve open spaces, and before and after the Civil War steps were taken towards creating Fairmount Park, which lay entirely beyond the boundaries of the old city proper. Standardized street names and numbers (1857), a professionalized fire department (1871), and a new city hall at Broad and Market Streets (1871-1901) demonstrated civic authorities’ readiness to raise the city’s metropolitan status, as did the suburban expansion fueled by horse-drawn streetcar lines and other infrastructure improvements that opened up cheap land in the consolidated city for builders. When Philadelphians in the second half of the nineteenth century contrasted their city of row homes with the tenements of New York, they credited the city’s expansion with eliminating the need for “vertical slums.”

Perhaps most importantly, though, consolidation gave the municipal government the power to maintain the peace. While violence did occasionally break out–in 1871, for instance, the African American civil rights campaigner Octavius Catto was shot dead on a turbulent election day–the mayor, with his control of a large, uniformed police force, always had the resources at his disposal to prevent the kind of conflagrations that threatened to engulf the city in 1844. Under Republican stewardship, Philadelphia avoided the draft riots that occurred in New York in 1863 and the worst of the conflict between railroads and workers in the Great Strike of 1877. Citizens credited the Consolidation Act for the relative peace in a city once notorious for disorder.
Some of these developments, however, owed more to legislation in Harrisburg than they did to actions by the city government, and by the late 1860s, the habit of state officials overriding the municipal authorities in matters pertaining to the metropolis caused frequent complaints. So too did the tendency of councilmen to claim executive power for themselves, thus weakening the powers of the mayor’s office, which consolidators had sought to strengthen. As party bosses–usually Democratic in the immigrant enclaves of South Philadelphia, but Republican in the growing suburbs–established ward strongholds, centralized city- and state-wide Republican machines distributed jobs and contracts to supporters. After the Civil War a generation of affluent reformers began to see the 1854 act more as a giant source of patronage than a measure designed to bring peace, prosperity, and economical government. They hoped another new charter, eventually passed in 1887, would improve matters, but under Republican leadership, Philadelphians remained, in Lincoln Steffens‘ memorable phrase, “the most corrupt and the most contented.” This was consolidation’s unanticipated legacy, but the act’s limitations should not mask its real achievements in laying the foundations of modern Philadelphia.

Andrew Heath is a Lecturer in American History at the University of Sheffield, U.K. He is currently writing a book on the Consolidation of 1854. (Author information current at time of publication.)
Jan 29, 2015

Fueled For a Championship

On its way to an NCAA championship last fall, the University of Notre Dame women’s soccer team worked with the athletic department’s dietitian on eating right throughout the season.

By Erika Whitman

Erika Whitman, RD, CSSD, is the Sports Dietitian at the University of Notre Dame, where she oversees the nutrition needs of more than 700 student-athletes in 26 varsity sports.

Arguably the toughest part of a nutritionist’s job is getting the athletes we work with to implement the advice we provide them about performance nutrition for consistent results. It’s easy to give athletes the right information about fueling for their sport–the hard part is getting them to follow through on it. It takes constant education, effort, and sometimes creativity. Here at the University of Notre Dame, nutrition is considered a critical component of performance, and the support I receive from the rest of the athletic department in this area certainly makes my job easier. Our sport coaches, strength and conditioning staff members, and administrators all emphasize the importance of nutrition right alongside our athletes’ training regimens on and off the field. But I’ve found that if the athletes themselves don’t truly understand why a consistent, healthy diet is so important, they’re much less likely to effectively process the information I give them.

In recent years, I’ve had success in helping our athletes remember key nutrition concepts–and act on them–with an acronym they won’t easily forget: The Fighting I-R-I-S-H.

I-Intake adequate fuel
R-Recovery nutrition for daily training
I-Include top performance foods from all food groups
S-Schedule a fueling plan to optimize training
H-Hydrate to keep the body running efficiently

A great example of athletes who took this acronym to heart last year is our national champion women’s soccer team. I’m confident that the squad increased its competitive edge by following through on these five key concepts throughout the season. Here’s a look at how I customized the I-R-I-S-H acronym for the team.

INTAKE ADEQUATE FUEL

Though it’s a simple and universally accepted concept that athletes must fuel their bodies with food in order for their muscles to operate efficiently, many athletes don’t meet this basic goal. In a sport like soccer, energy stores are critically important as the duration and intensity of the game place great demand on the energy stored in the body. Glycogen (stored carbohydrates in the liver and muscles) is the main fuel that powers the body during long, strenuous activity like a soccer game, so it’s important for soccer athletes to keep their tanks full. Some telling research in 2009 found that soccer players who had low levels of glycogen stores covered 25 percent less distance and spent a greater amount of time walking than players who had adequate glycogen levels. Soccer athletes must understand that they will not achieve optimal performance levels if they don’t properly fuel for practices and competition.

It’s important to note that even within the same sport, energy needs vary from athlete to athlete. Factors such as duration of play, position, level of activity off the field, and body composition all play a role in determining a player’s fuel intake requirements. Individual diet plans should be based on each athlete’s specific needs and performance goals, not just the sport they play. Still, it’s best to start with some average numbers.
The typical elite-level female soccer player needs between 19 and 21 calories per pound of body weight per day during the season. So for a 140-pound player, this means consuming between 2,660 and 2,940 calories per day. Using this range as a starting point, I then factor in other variables that affect each player’s energy requirements and give them specific daily calorie goals. For an athlete who needs to put on healthy lean mass, I would help her figure out ways to add between 500 and 700 calories to her diet per day. For an athlete who is working toward weight loss, I would consult with her on ways to cut about 500 calories per day.

RECOVERY NUTRITION

When athletes hear “recovery nutrition,” they usually think of what they’re supposed to eat and drink after a game or practice. As a sports dietitian, however, I know that when extensive daily physical demands are placed on the body, all foods consumed throughout the day can be classified as “recovery” foods. Recovery nutrition for athletes who are practicing or working out almost every day of the week should actually begin well before activity.

The best pre-activity foods for soccer players are rich in carbohydrates to provide a quick energy source and protein to protect against muscle breakdown and delay energy availability. These foods should also contain minimal fat since fat takes longer to digest. The type and amount of food consumed depend on how much time the athlete has before activity. Fortunately, pregame meals are one time that I can make sure the athletes have plenty of proper foods to choose from. I also explain to them that these are the types of choices they should be making daily to ensure they are fully fueled for activity.

Our soccer team’s pregame meals are typically scheduled three to four hours prior to game time. I encourage players to also have a snack like a granola bar, half of a bagel, small sports bar, piece of fruit, or yogurt one to two hours before the first whistle blows. The NCAA championship game was held in the afternoon last fall, and the squad’s pregame meal included pancakes, eggs, various breads and bagels, assorted cereals, fresh fruits, milk, juice, water, and condiments like peanut butter and jelly. If the game were held later in the evening, the meal would have consisted of grilled chicken, steamed vegetables, pasta, rice, salads, rolls, and fresh fruits.

The next step to maximizing performance is maintaining fuel availability during activity. Because of their fast-acting properties, carbohydrates are the most efficient fuel source during exercise. The general recommendation is 30 to 60 grams of carbohydrates per hour of exercise if the training bout lasts longer than an hour at moderate to vigorous intensity. Soccer obviously fits this criterion since an NCAA game is at least 90 minutes long. Examples of carbohydrate-rich supplements good for use during activity include various sports drinks and energy chews, gummies, or sport beans. It is important that players consume sources their stomachs can tolerate during exercise. Supplemental sports bars and beverages can be a great option during a practice or game, but I suggest athletes experiment with them at practice before using them during competition.

Although nutrition may be the last thing on an athlete’s mind after a game or practice, I spend a lot of time stressing the importance of refueling within the “30-minute window” immediately following activity.
During this time, the body is rebuilding at its fastest rate, using carbohydrates to replenish glycogen stores and repairing muscles via protein resynthesis. To improve this recovery, soccer athletes should consume 30 to 60 grams of carbohydrates and 10 to 20 grams of protein within the 30-minute window. Examples of foods that include these ratios are chocolate milk, a sandwich that includes a lean protein like turkey, a yogurt parfait, cheese and crackers, pretzels, trail mix with dried fruit and nuts, or a carbohydrate- and protein-containing shake or bar. A well-balanced meal within two hours will further enhance the recovery process. Regular meals and snacks should be consumed every two to four hours until the next pregame or pre-practice meal when the cycle begins all over again.

INCLUDE TOP FOODS

Unfortunately, college athletes are prone to the same temptations as their non-athlete friends. It's easy for first-year players who come from homes that don't regularly stock potato chips, sugary cereals, soda, and frozen pizza to be tempted by these previously forbidden foods. We focus on the 80-20 rule, which states that 80 percent of an athlete's diet should consist of top performance foods, with only 20 percent of their diet made up of those less nutritious foods some athletes feel they just can't live without. It's my job to make sure our athletes understand why some of those favorite foods are not good choices and which ones are. (When necessary, we may shift the percentages to rename this the 90-10 rule.)

Healthy food doesn't have to mean boring food. Variety is important both in appealing to the athlete's palate and in making sure our players meet nutrient needs necessary for the body to convert stored energy into available energy. A great way to illustrate what this looks like is to create a healthy plate with real food or using food models. Create examples of meals and snacks so they can see the differences in color and nutrients on healthy, balanced plates of food versus plates that lack variety and color, and therefore important nutrients.

One tip I give our players is to make sure each meal has at least three different food groups represented on the plate. And from meal to meal, the foods that fit into each food group should also change. For example, if a player only eats broccoli and never spinach to represent the vegetable group, she will get a lot of vitamin C and calcium but miss out on a great source of iron and vitamin A. (For examples of top foods in each food group, see "Top Performance Foods" below.)

SCHEDULE FOR OPTIMIZATION

College athletes' daily routines are often hectic. There are classes, meetings, practices, lifting sessions, and hopefully some socializing time, too. But none are an excuse for nutrition to be put on the backburner. That's why sticking with a good eating schedule involves planning. I sit down with our players early in the season to outline their schedules and identify convenient and optimal meal and snack times for each day of the week. I emphasize the importance of avoiding long periods between scheduled meals or snacks, and show them what it means to balance calorie intake throughout the day.

Dining halls are good options in many cases, but they're not open very late at night after a long practice. And during the day when classes are going on, students may not have enough time for a sit-down meal.
Some easy snack ideas that I suggest to athletes for these crunched times include granola bars, trail mix, sports bars, a piece of fruit, a cup of yogurt, or a peanut butter and jelly sandwich they made in the morning before they left for class.

HYDRATE FOR EFFICIENCY

Last, but certainly not least, hydration is not only important for proper fueling, but also to guard against dehydration and heat illness during the hottest months of the year. Similar to scheduling when athletes eat during the day, I've found that it's important to outline a hydration plan.

In addition to maintaining fluid intake throughout the day, soccer players need to begin seriously hydrating for competition two to three hours before the start of play. This means at least 14 to 22 ounces of water or a sports drink, then another six to 12 ounces every 10 to 20 minutes as tolerated up until game time. During competition, six to 12 ounces every 15 to 20 minutes is suggested. And at least 16 to 24 ounces per pound of fluid lost is recommended after practices and games. This is going to sound like a whole lot of liquid to the athletes, but I tell them performance has been shown to visibly decrease when fluid losses reach two percent of an athlete's body weight (or three pounds of fluid weight loss in a 150-pound athlete).

Game time is one of the toughest stretches for soccer players to stay on top of their hydration. With few clock stoppages and subbing opportunities, the athletes are on the go constantly. I advise our players to use every opportunity that arises–after a goal, a penalty, or maybe when the ball is kicked far out of bounds–to grab a water bottle from the sideline.

I also make sure our players know how to look to their own bodies for information on their hydration status. For example, the color and smell of their urine reveals a lot. Clear and odor-free is the goal, and the darker the color and the stronger the odor, the more dehydrated they are. The players are advised to watch their urine carefully throughout the day for these signs.

As a sports dietitian dealing with the subject every day, it can be easy to forget that most athletes spend little time thinking about what they eat and drink. However, with a little prompting and reminders of how it can improve their performance, they will make better choices. I've found here at Notre Dame that sometimes all it takes is one little word.

Sidebar: TOP PERFORMANCE FOODS

Below are some of the best foods athletes can choose from each of the food groups, along with why they are great choices.

Fruits (carbohydrates): Fresh, frozen, or canned berries, melons, bananas, apples, and grapes; dried fruits like raisins and cranberries; 100-percent fruit juices provide potassium, magnesium, fiber, vitamins A and C, and other vitamins, minerals, and antioxidants. Whole fruits are also filling, low fat, and low calorie.

Vegetables (carbohydrates, protein): Fresh, steamed, frozen, grilled, or canned spinach, carrots, green beans, tomatoes, and corn provide potassium, fiber, vitamins A and C, and other vitamins, minerals, and antioxidants. Vegetables are also filling, low fat, and low calorie.

Grains (carbohydrates, protein, fat): 100-percent whole wheat or whole grain pastas, breads, bagels, and cereals; brown rice; oatmeal; quinoa provide more B vitamins, iron, zinc, potassium, and fiber than refined and processed counterparts.
Meats & beans (protein, carbohydrates, fat): Baked or grilled lean meats like boneless, skinless chicken breast, turkey breast, salmon, and tuna; various beans like black beans; legumes like chickpeas or prepared hummus; nuts and seeds; egg whites provide zinc, iron, B-12 and other B vitamins, selenium, and a variety of amino acids. Lean meats are low in fat and calories.

Dairy (carbohydrates, protein, fat): Skim and low-fat milk; cheeses like cottage cheese, mozzarella, and various soft cheeses; yogurt and Greek yogurt; pudding are low fat and rich in calcium, magnesium, potassium, phosphorus, and vitamins A and D.

Other (fat): Baked goods with real fruit and nuts like Fig Newtons, banana bread, and trail mix bars; oils like olive oil, fish oils, and flax; non-creamy salad dressings provide anti-inflammatory omega-3 fatty acids.
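The figures in this article reduce to simple arithmetic, and a quick calculation can turn them into individual targets. The short Python sketch below is only an illustration of the numbers quoted above (19 to 21 calories per pound per day in season, 30 to 60 grams of carbohydrate plus 10 to 20 grams of protein in the 30-minute window, and the two-percent fluid-loss threshold). The function names are invented for the example, and any real plan should still come from a sports dietitian.

# Rough fueling calculator based on the in-season guidelines quoted in this article.
# Treat the output as a starting point only; needs vary by position, training load,
# and body composition.

def daily_calorie_range(weight_lb: float) -> tuple[float, float]:
    """In-season energy needs: roughly 19-21 calories per pound of body weight per day."""
    return 19 * weight_lb, 21 * weight_lb

def recovery_window_targets() -> dict:
    """Carbohydrate and protein to consume within ~30 minutes after activity (grams)."""
    return {"carbs_g": (30, 60), "protein_g": (10, 20)}

def dehydration_limit_lb(weight_lb: float) -> float:
    """Fluid-weight loss at which performance has been shown to drop (~2% of body weight)."""
    return 0.02 * weight_lb

if __name__ == "__main__":
    weight = 140  # the 140-pound example used in the article
    low, high = daily_calorie_range(weight)
    print(f"Daily calories: {low:.0f}-{high:.0f}")                      # 2660-2940
    print(f"Recovery window targets: {recovery_window_targets()}")
    print(f"Performance drops after ~{dehydration_limit_lb(weight):.1f} lb of sweat loss")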
Text: The Epic of Gilgamesh
Author: (ancient Mesopotamians)

In The Epic of Gilgamesh, Gilgamesh, who is the king of Uruk, is also 2/3 god and 1/3 man. He is a cruel, arrogant king. He treated his subjects terribly, and they prayed for the gods to help them. The gods decide that either Gilgamesh or his best friend, Enkidu, has to die. They end up making Enkidu very ill, and right before he dies, he has a dream of the underworld that he tells Gilgamesh about. He dies, and Gilgamesh can't stop grieving for his friend and thinking about his own eventual death. He sets off on a journey, determined to find Utnapishtim, the Mesopotamian version of Noah from Noah and the Flood. After the flood, the gods granted Utnapishtim eternal life, and Gilgamesh hopes that Utnapishtim can tell him how he might avoid death too. A ferryman named Urshanabi takes Gilgamesh on the boat journey across the sea and through the Waters of Death to Utnapishtim. Utnapishtim gives him a test where he has to stay awake for a week if he wants to live forever. Gilgamesh fails the test. Utnapishtim's wife feels bad for Gilgamesh and has her husband tell Gilgamesh about a plant that restores a person's youth. A snake (serpent) steals the plant. Gilgamesh heads back home, defeated, and realizes that he can't live forever, but mankind in general can.

Text: Oedipus Rex
Author: Sophocles

A terrible curse and plague are destroying Thebes, where Oedipus is king. Oedipus finds out the curse will go away if the murderer of the previous king, Laius, is found and prosecuted. In his journey, Oedipus is told by Tiresias, a blind prophet, that Oedipus himself killed Laius. He's upset and bothered, so his wife, Queen Jocasta, tells him not to believe prophets, that they're not always right. She tells him how a prophet said that the son she had with Laius would grow up to kill Laius and sleep with his mother. This makes Oedipus feel worse since he was told as a kid by a drunk old man that he was adopted and that he'd one day kill his biological father and sleep with his biological mother. He also once killed a man at a crossroads, and the man he killed sounds a lot like Laius. All of this comes together, and Oedipus realizes that he killed his biological father, Laius, and married—and had children with—his mother, Jocasta. Jocasta hangs herself, and Oedipus gouges his eyes out and is exiled from Thebes.

Text: A Doll's House
Author: Henrik Ibsen

Nora and Torvald Helmer are a married couple. Years ago, Torvald was very sick and a vacation to the warm south of Italy was the only way to save his life. Nora secretly took out a loan for 4,800 crowns years ago from a man named Krogstad. As a woman, she was not legally allowed to take out a loan on her own, so she forged her dying father's signature, accidentally dating it three days after his death. Years later, Krogstad, who now works for Nora's husband, is blackmailing Nora. If she doesn't save him from being fired, Krogstad will tell Torvald about the loan. Nora goes to great lengths to try to hide her secret. She only tells her friend, Mrs. Linde, about the loan. When Krogstad is fired, he mails a letter to Torvald, telling him about everything. Mrs. Linde tries to convince Krogstad not to blackmail Nora, and she succeeds, but it is too late. (Mrs. Linde and Krogstad also rekindle an old romance.) At Nora's house, Torvald reads the letter and is furious. He is ready to isolate and disown Nora, except in public, but then he receives a second letter from Krogstad saying that everything was just a terrible misunderstanding.
He forgives Nora, but Nora ends up leaving her husband and children.

Text: Othello
Author: William Shakespeare

Othello (the Moor) is a dark-skinned Venetian army general who is in love with his wife, Desdemona. Desdemona's dad is pissed because he doesn't like Othello. Othello promotes Cassio in the army because he has extensive training in strategy. Iago, Othello's right-hand man, is pissed because he wanted to be promoted. He's been in the army longer than Cassio. Iago gets together with his friend, Roderigo, who is in love with Desdemona, and launches this whole big scheme and convinces Othello that Desdemona is cheating on him with Cassio. (She's not.) In the end, Iago kills Roderigo. Othello kills Desdemona. Iago then kills his wife, Emilia, who was Desdemona's friend and blew his cover. Othello realizes he was wrong and attempts to kill Iago before killing himself.

Text: The Things They Carried
Author: Tim O'Brien
Characters: Lieut. Jimmy Cross; Tim O'Brien (narrator)

In this collection of vignettes, Tim O'Brien recollects his experiences in the Vietnam War through his memories of his experiences, his friends, and their stories.

Text: In the Time of the Butterflies
Author: Julia Alvarez
Characters: Mama & Papa Mirabal; Minerva (m. Manolo); Patria (m. Pedrito); Dedé (m. Jaimito); María Teresa/Mate (m. Leandro)

In the Time of the Butterflies is the fictional story of four real persons, the Mirabal sisters of the Dominican Republic. In 1960, three of the sisters, members of the underground movement opposing the regime of the dictator Rafael Trujillo, were ambushed on a lonely mountain road and assassinated. Alvarez's novel, made up of three sections and an epilogue, intersperses chapters for each sister. All except Dedé's are first-person narrations; Dedé does narrate the epilogue, however.

Section 1 of the novel ("1928 to 1946") opens in 1994 with a woman interviewing Dedé about her martyred sisters. The section then describes how youthful Minerva, María Teresa, and Patria awoke to political awareness. Minerva learns of the dictator's brutality from her schoolmate Sinita, whose family lost all of its men to Trujillo. Minerva educates young María Teresa (Mate). Patria begins to question her faith in God and Trujillo when, as a young wife, she is plunged into a religious crisis after a stillbirth. Minerva is the first to act on her political convictions. Won over to Sinita's hatred of Trujillo, she performs in a play covertly celebrating pre-Trujillo freedom. Near its end, Sinita, playing Liberty, suddenly walks up to Trujillo with her toy bow and aims an imaginary arrow at him. She is quickly subdued, and the tense moment passes, but Minerva has come to Trujillo's notice.

Section 2, "1948 to 1959," covers the years of the Mirabals' resistance activity. Minerva meets activist Virgilio (Lío) Morales and continues in his path when he is forced to flee the country. One day, she discovers her father's mistress and four illegitimate daughters living in poverty. She also finds letters from Lío that her father has kept from her. Shortly thereafter, Trujillo summons her to attend a dance; when he tries to hold her vulgarly close, she slaps him. Her family quickly whisks her away, but she leaves behind her purse, containing Lío's letters. Her father is soon detained for interrogation, and the experience breaks his health. Over the next months, Mate joins Minerva in the underground; both marry fellow revolutionaries and have daughters.
Eventually, Patria's son Nelson yearns to join too, and Patria is herself converted when she witnesses a massacre of young rebels by Trujillo forces.

Section 3 relates events leading up to the death of the three sisters, now known nationwide as "Las Mariposas" or "The Butterflies." Trujillo attacks the underground, and Minerva, Mate, the three husbands, and Nelson are arrested. Mate and Minerva keep up the spirit of resistance in their crowded cell, and a solidarity grows between the political and nonpolitical prisoners there. Mate is eventually subjected to electric shock torture. Meanwhile, though, the political tide has begun to turn. The Organization of American States comes to investigate prison conditions, and Mate manages to slip their representative a statement by her cellmates. Soon afterward, Trujillo releases the Butterflies. When Minerva tries to track down information on the state of the underground, she learns that they have become national symbols of resistance. In fact, Trujillo claims his biggest problems are the church and the Mirabal sisters. Before long, Minerva's and Mate's husbands are moved to a remote prison. On November 25, 1960, the two wives and Patria set out with Rufino, their driver, to visit the men, despite Dedé's warning that it is dangerous for them to travel together. They make it to the prison safely, but midway home the narrative breaks off abruptly.

In the epilogue, Dedé recalls that for weeks afterward, people brought her information about her sisters' last hours. They were strangled and clubbed, then returned to the Jeep and pushed off the cliffside. Dedé, enmeshed in grief, barely noted events of the next few years: Trujillo's assassination, the murderers' trial, the country's first free elections in thirty-one years, a coup followed by civil war, and finally peace. The Mirabal sisters, meanwhile, become legends, and Dedé the conservator of their memory.

Alvarez's postscript explains that her father was a member of the same resistance movement as the Mirabals and fled the Dominican Republic shortly before their deaths. Alvarez grew up hearing about the sisters and decided to write their story. When she began researching their lives, however, she uncovered a wealth of legends and anecdotes about them, but few verifiable facts. She thus turned to fiction to discover who they were. She began this project to answer the question, "What gave them that special courage?" She ends by noting that the anniversary of their deaths, November 25, is now, appropriately, the International Day Against Violence Toward Women.

Your Creative Writing Choices (select ONE):

• Write a letter from Dedé to one of the other characters, of your choosing, after the book ends. This must be at least 1 full page.
• Write another scene from In the Time of the Butterflies. This scene could take place at any point in the book. This must be at least 1 full page.
• Write a poem that expresses the story and beliefs of the Mirabal sisters. Use this to show why their story matters. This must be at least 1 full page.

This is worth a test grade, and due on Wednesday, June 17th.

Ms. Barbour is an 11th grade English and Poetry teacher at Franklin High School.
In Part One of Grand Canyon Futures, Intercontinental Cry's Garet Bleir looks at some of the key issues he uncovered during his investigation into the Canyon uranium mine owned by the Toronto-based mining company Energy Fuels.

Just a few months ago, millions of gallons of water fell into the depths of the Canyon Mine. Suddenly, this once-clean water contained dangerously high levels of uranium and arsenic, according to the United States Geological Survey (USGS). Most of the contaminated water was hauled offsite in trucks or evaporated into an already water-starved climate.

Shortly before the incident, Energy Fuels (EFR) was drilling its mine shaft when it breached the Coconino Perched Aquifer below. Fred Tillman, who oversees the Canyon Mine as a head hydrologist at the USGS, told IC that the company tried to separate the contaminated water from the clean drinking water. However, this effort came to an abrupt halt when its hauling bucket dropped into the shaft and crashed through the separation system. At first, the water pouring in from the Coconino Aquifer contained just minor amounts of trace elements including uranium; but that soon changed.

"When Energy Fuels started letting that water fall to the bottom of the shaft after the bucket broke, the uranium levels rose," Tillman told us. "Originally it would have been very useful to ranchers and cattle, but they can no longer distribute it after the uranium levels rose."

Directly below the Coconino Perched Aquifer lies a much larger groundwater source known as the Redwall Muav Aquifer. The Redwall Muav feeds into several seeps and springs that support the Grand Canyon's delicate ecosystem. It is also the Havasupai Tribe's main source of water. No one knows if the Redwall Muav has been contaminated as a result of this recent string of events. When IC visited the Canyon mine last month, the USGS was still in the process of testing the water's safety.

Whether or not there is any uranium in the Redwall Muav, the Havasupai now face a daunting future. At any moment, and likely without warning, their water may become too toxic to drink.

"Arizona calls itself the Grand Canyon State. But what are they doing about the Grand Canyon? They are allowing this to happen," Havasupai Chairman Rex Tilousi told IC. "Water is going to be just as valuable as gold in the future."

When the Havasupai first heard that the Canyon Mine was being developed in the late 1970s on the site of Red Butte, it wasn't the company who told them. Someone working in the nearby city of Tusayan noticed the development and asked the tribe if they knew the land they held sacred was being actively desecrated.

"Without letting us know, Energy Fuels had already scraped the ground, the sage, and underneath the dust they found ancient grinding stones, baskets, the pottery work our people traded with other tribes, and even bones," Tilousi said. "They had scraped everything away getting that place ready for mining."

The Havasupai voiced their concerns, but their words fell on deaf ears. The Arizona Department of Environmental Quality (ADEQ) issued the company's water permits and the United States Forest Service (USFS) gave the mining operation a green light. However, these government agencies did so without knowing the full impact of the mine. They also failed to take precautionary measures to ensure that the aquifer would not be adversely affected.
According to a joint lawsuit filed by the Havasupai, the Grand Canyon Trust, the Center for Biological Diversity and the Sierra Club, the USFS made two critical mistakes when it approved the Canyon mine. According to the plaintiffs, the Forest Service did not require Energy Fuels to update its “outdated” Environmental Review or carry out formal consultations with the Havasupai before issuing the permit. Beyond the mine’s fences Soon after we arrived in Arizona, Haul No!, an indigenous-led action and education group, held an eye-opening two-week-long tour that took us far beyond the Canyon mine’s fences. We followed Haul No! as the group traveled 300 miles from the city of Bluff, Utah, to the sacred site of Red Butte in time for the Havasupai’s four-day prayer gathering–an event that hasn’t been held in over ten years. IC had the opportunity to speak to many people from affected communities along the way. We documented protests, police confrontations, educational demonstrations, community discussions, community council meetings, and more. Klee Benally is a volunteer with Haul No! and member of the Navajo Nation. He told us, “Two of my uncles worked at Canyon Mine back in the 80s. They helped lay concrete and build the headframe for the uranium mining shaft. Today I feel a great sense of responsibility to rectify their transgressions.” During the Havasupai’s prayer gathering, members from the Havasupai Tribe, Navajo Nation, and Haul No! took their message to the gates of the Canyon Mine. “Energy Fuels has announced that they could start transporting uranium ore this summer,” said Sarana Riggs, the Native America Volunteer Coordinator at the Grand Canyon Trust. “We could be facing 12 trucks per day carrying 30 tons each of high-level radioactive ore [that is] not required to be thoroughly labelled and covered only with tarps.” EFR is planning to send that ore to the White Mesa Mill in Blanding, Utah. The White Mesa Mill is the only fully-licensed conventional uranium mill operating in the United States and it has a checkered past. The mill is built on the sacred ancestral lands of the Ute Mountain Ute Tribe, in a region that hosts hundreds of significant cultural sites including burial sites, kivas, and artifacts. During the mill’s construction, a number of these cultural sites were demolished without the Ute Mountain Ute’s consent. Energy Fuels has subjected the tribe to environmentally-racist practices ever since. In more recent times, the mill, which employs many people in the struggling economy of Blanding, has been embroiled in another controversy. According to the Grand Canyon Trust, the mill is actively emitting toxic levels of the radioactive gas radon. These radon emissions exceed Clean Air Act standards and endanger the health of all mill workers and nearby communities. According to various conservation groups and involved scientists, there are also severe oversights in the mill’s ongoing operations. “The state of Utah and Energy Fuels are not providing credible assurances or protective measures to instill confidence in the White Mesa tribe or surrounding communities that the members of these communities and their environments will be safe from the documented impacts of the White Mesa Mill,” said the Ute Mountain Ute’s Environmental Programs Director. “More needs to be done to ensure that the environment will actually be protected.” Despite these glaring concerns, however, we found a great deal of support for the mill during our time in the cities of Blanding and Bluff. 
The mill workers we spoke with went so far as to praise the safety of all operations at the mill. At a recent meeting in Blanding, the State of Utah took comments regarding the mill's re-licensing. Over a hundred people from both sides of the debate were in attendance. If the mill is not granted a re-license, it would be effectively shut down.

Sean McKay is a worker at the mill, a lifelong resident of San Juan County, and a member of the Navajo reservation. During the meeting, he said he understands there have been uranium mining and milling mistakes in the past, including within the Navajo reservation, but he maintained that the mill has been compliant.

"There aren't a lot of jobs that take care of you like this one does and the mill helps the local people and county," McKay said in his public comment. "All of the workers down there are healthy and it supplies a lot of jobs for Navajos on the reservation."

Many others on the Navajo Reservation, such as Sarana Riggs, do not share McKay's sentiment.

"I grew up in a family where my great grandfather was a mill worker at Rare Metals Mill," said Riggs. "He was a soldier who fought in Normandy, lived, and made it back home to provide for his family. But he faced these unknown dangers of radioactive contamination throughout his life."

Riggs' great grandfather unknowingly suffered from stomach cancer and passed away about 10 years ago, said Riggs. No one was aware that this was what he was suffering from.

"Somebody went to check on him to see if he needed food and to see if he was okay," Riggs said. "They found him on the kitchen floor bleeding from all areas of his body."

The Rare Metals Mill has since been shut down, and houses around the mill were demolished due to an increase in birth defects among families in the area.

Others in the Blanding community who survive on wages from the mill aren't so easily persuaded by previous tragedies.

"If Riggs' family has cancer that is so prevalent, then it probably has to do with a gene in her family and nothing to do with uranium," responded Wendy Black, a resident of Blanding and the wife of one of the mill workers. "Someone should probably tell her that she should be tested, because I got my cancer from my mother and she got it from her mother who got it from her mother."

Black says that her husband has worked there for years in the tailings ponds in Monticello and that none of his family has cancer.

"Cancer is not a bad thing. It's completely different than it used to be," Black said, concluding her statement.

Many of the mill's supporters offered a different concluding remark. "I am not afraid of uranium," they repeatedly announced. But the statistical evidence provided by the Indian Health Service tells another story. The cancer death rate on the Navajo Reservation doubled from the 1970s to the 1990s, while the U.S. rate as a whole declined slightly, according to a report from the Indian Health Service.

In Part Two of Grand Canyon Futures, Garet Bleir takes a closer look at the Canyon Mine's impacts on water. He also interviews Mark Chalmers, the President and COO of Energy Fuels.

Garet Bleir is an investigative journalist working for Toward Freedom documenting human rights and environmental abuses surrounding uranium mining in the Grand Canyon region. To follow along with interviews and photos highlighting indigenous voices and to receive updates on his series for Toward Freedom and Intercontinental Cry, follow him on Instagram or Facebook.
~~May 19, 2014~~

Lorraine Hansberry, author of "A Raisin in the Sun"—she was born today in 1930! When it opened in 1959, the play was the first written by an African-American woman to make it to Broadway.

Lorraine Vivian Hansberry (May 19, 1930 – January 12, 1965) was an American playwright and writer. Hansberry inspired Nina Simone's song "To Be Young, Gifted and Black". She was the first black woman to write a play performed on Broadway. Her best known work, the play A Raisin in the Sun, highlights the lives of Black Americans living under racial segregation in Chicago. Hansberry's family had struggled against segregation, challenging a restrictive covenant and eventually provoking the Supreme Court case Hansberry v. Lee. The title of the play was taken from the poem "Harlem" by Langston Hughes: "What happens to a dream deferred? Does it dry up like a raisin in the sun?"

After she moved to New York City, Hansberry worked at the Pan-Africanist newspaper Freedom, where she dealt with intellectuals such as Paul Robeson and W. E. B. Du Bois. Much of her work during this time concerned the African struggle for liberation and its impact on the world. Hansberry has been identified as a lesbian, and sexual freedom is an important topic in several of her works. She died of cancer at the age of 34.

Lorraine Hansberry was the youngest of four children born to Carl Augustus Hansberry, a successful real-estate broker, and Nannie Louise (née Perry), a school teacher. In 1938, her father bought a house in the Washington Park Subdivision of the South Side of Chicago, violating a restrictive covenant and incurring the wrath of their white neighbors. The latter's legal efforts to force the Hansberry family out culminated in the U.S. Supreme Court's decision in Hansberry v. Lee. The restrictive covenant was ruled contestable, though not inherently invalid. Carl Hansberry was also a supporter of the Urban League and NAACP in Chicago. Both Hansberrys were active in the Chicago Republican Party. Carl died in 1946, when Lorraine was fifteen years old; "American racism helped kill him," she later said.

The Hansberrys were routinely visited by prominent Black intellectuals, including W.E.B. Du Bois and Paul Robeson. Carl Hansberry's brother, William Leo Hansberry, founded the African Civilization section of the history department at Howard University. Lorraine was taught: "Above all, there were two things which were never to be betrayed: the family and the race."

Lorraine Hansberry has many notable relatives, including director and playwright Shauneille Perry, whose eldest child is named after her. Her grand-niece is actress Taye Hansberry. Her cousin is the flautist, percussionist, and composer Aldridge Hansberry. Hansberry became the godmother to Nina Simone's daughter Lisa — now Simone.

~~Marriage and sexuality~~

On June 20, 1953, she married Robert Nemiroff, a Jewish publisher, songwriter and political activist. Hansberry and Nemiroff moved to Greenwich Village, the setting of The Sign in Sidney Brustein's Window. The success of the song "Cindy, Oh Cindy", co-authored by Nemiroff, enabled Hansberry to start writing full-time. It is widely believed that Hansberry was a closeted lesbian, a theory supported by her secret writings in letters and personal notebooks.
She was an activist for gay rights and wrote about feminism and homophobia, joining the Daughters of Bilitis and contributing two letters to their magazine, The Ladder, in 1957 under her initials “LHN.” She separated from her husband at this time, but they continued to work together. A Raisin in the Sun was written at this time and completed in 1957. On her religious views, Hansberry was an atheist. According to historian Fanon Che Wilkins, “Hansberry believed that gaining civil rights in the United States and obtaining independence in colonial Africa were two sides of the same coin that presented similar challenges for Africans on both sides of the Atlantic.” In response to the independence of Ghana, led by Kwame Nkrumah, Hansberry wrote: “The promise of the future of Ghana is that of all the colored peoples of the world; it is the promise of freedom.” Regarding tactics, Hansberry said Blacks “must concern themselves with every single means of struggle: legal, illegal, passive, active, violent and non-violent. They must harass, debate, petition, give money to court struggles, sit-in, lie-down, strike, boycott, sing hymns, pray on steps — and shoot from their windows when the racists come cruising through their communities.” In a Town Hall debate on June 15, 1964, Hansberry criticized white liberals who couldn’t accept civil disobedience, expressing a need “to encourage the white liberal to stop being a liberal and become an American radical.” At the same time, she said, “some of the first people who have died so far in this struggle have been white men.” Hansberry was a critic of existentialism, which she considered too distant from the world’s economic and geopolitical realities. Along these lines, she wrote a critical review of Richard Wright’s The Outsider and went on to style her final play Les Blancs as a foil to Jean Genet’s absurdist Les Nègres. However, Hansberry admired Simone de Beauvoir’s The Second Sex. In 1959, Hansberry commented that women who are “twice oppressed” may become “twice militant”. She held out some hope for male allies of women, writing in an unpublished essay: “If by some miracle women should not ever utter a single protest against their condition there would still exist among men those who could not endure in peace until her liberation had been achieved.” Hansberry was appalled by the nuclear bombing of Hiroshima and Nagasaki which took place while she was in high school, and expressed desire for a future in which: “Nobody fights. We get rid of all the little bombs — and the big bombs.” She did believe in the right of people to defend themselves with force against their oppressors. The Federal Bureau of Investigation began surveillance of Hansberry when she prepared the Montevideo peace conference. The Washington, D.C. office searched her passport files “in an effort to obtain all available background material on the subject, any derogatory information contained therein, and a photograph and complete description,” while officers in Milwaukee and Chicago examined her life history. Later, an FBI reviewer of Raisin in the Sun highlighted its Pan-Africanist themes as dangerous. After a battle with pancreatic cancer she died on January 12, 1965, aged 34. James Baldwin believed “it is not at all farfetched to suspect that what she saw contributed to the strain which killed her, for the effort to which Lorraine was dedicated is more than enough to kill a man.” Hansberry’s funeral was held in Harlem on January 15, 1965. 
Paul Robeson and SNCC organizer James Forman gave eulogies. The presiding minister, Eugene Callender, recited messages from Baldwin and the Reverend Martin Luther King, Jr. which read: "Her creative ability and her profound grasp of the deep social issues confronting the world today will remain an inspiration to generations yet unborn." She is buried at Asbury United Methodist Church Cemetery in Croton-on-Hudson, New York.

Raisin, a musical based on A Raisin in the Sun, opened in New York in 1973, winning the Tony Award for Best Musical, with the book by Nemiroff, music by Judd Woldin, and lyrics by Robert Brittan. A Raisin in the Sun was revived on Broadway in 2004 and received a Tony Award nomination for Best Revival of a Play. The cast included Sean Combs ("P Diddy") as Walter Lee Younger Jr., Phylicia Rashad (Tony Award-winner for Best Actress) and Audra McDonald (Tony Award-winner for Best Featured Actress). It was produced for television in 2008 with the same cast, garnering two NAACP Image Awards.

In 2002, scholar Molefi Kete Asante listed Hansberry as one of his 100 Greatest African Americans. The Lorraine Hansberry Theatre of San Francisco, which specializes in original stagings and revivals of African-American theatre, is named in her honor.

Singer and pianist Nina Simone, who was a close friend of Hansberry, used the title of her unfinished play to write a civil rights-themed song "To Be Young, Gifted and Black" together with Weldon Irvine. The single reached the top 10 of the R&B charts. A studio recording by Simone was released as a single, and the first live recording, on October 26, 1969, was captured on Black Gold (1970).

Lincoln University's first-year female dormitory is named Lorraine Hansberry Hall. There is a school in the Bronx called Lorraine Hansberry Academy, and an elementary school in St. Albans, Queens, New York, named after Hansberry as well. On the eightieth anniversary of Hansberry's birth, Adjoa Andoh presented a BBC Radio 4 programme entitled "Young, Gifted and Black" in tribute to her life. In 2013, Lorraine Hansberry was posthumously inducted into the American Theatre Hall of Fame.

~~Works~~

A Raisin in the Sun (1959)
A Raisin in the Sun, screenplay (1961)
"On Summer" (essay) (1960)
The Drinking Gourd (1960)
What Use Are Flowers? (written c. 1962)
The Arrival of Mr. Todog – parody of Waiting for Godot
The Movement: Documentary of a Struggle for Equality (1964)
The Sign in Sidney Brustein's Window (1965)
To Be Young, Gifted and Black: Lorraine Hansberry in Her Own Words (1969)
Les Blancs: The Collected Last Plays / by Lorraine Hansberry. Edited by Robert Nemiroff (1994)
Toussaint. This fragment from a work in progress, unfinished at the time of Hansberry's untimely death, deals with a Haitian plantation owner and his wife whose lives are soon to change drastically as a result of the revolution of Toussaint L'Ouverture. (From the Samuel French, Inc. catalogue of plays.)

~~Lorraine Hansberry: Mini – DOCUMENTARY~~

~~Uploaded on Jan 10, 2011~~

Lorraine Hansberry (May 19, 1930 — January 12, 1965) was an African American playwright and author of political speeches, letters, and essays. Her best known work, A Raisin in the Sun, was inspired by her family's legal battle against racially segregated housing laws in the Washington Park Subdivision of the South Side of Chicago during her childhood.
Hansberry attended the University of Wisconsin — Madison, but found college uninspiring and left in 1950 to pursue her career as a writer in New York City, where she attended The New School. She worked on the staff of the black newspaper Freedom under the auspices of Paul Robeson, and worked with W. E. B. DuBois, whose office was in the same building. A Raisin in the Sun was written at this time, and was a huge success. It was the first play written by an African-American woman to be produced on Broadway. At 29, she became the youngest American playwright and only the fifth woman to receive the New York Drama Critics Circle Award for Best Play. While many of her other writings were published in her lifetime – essays, articles, and the text for the SNCC book The Movement – the only other play given a contemporary production was The Sign in Sidney Brustein's Window.

We ALL are ONE!!

~~To Be Young, Gifted And Black – Nina Simone – Live – 1986~~

~~Uploaded on Mar 20, 2010~~

The video "To Be Young, Gifted And Black" by Nina Simone was recorded at a live concert on 21 December 1986 in Zürich by ISIS VOICE.

All Rights Reserved © Produced by ISIS VOICE, Bern – Switzerland

A listen to the magic of "ISIS VOICE" production © Dr. Nina Simone & ISIS VOICE, Suzanne E. Baumann

Will never be forgotten!
Student collaboration has become increasingly important over the past few years. Teachers are encouraged to promote collaboration in the classroom to achieve better learning outcomes. The consensus is that fostering student collaboration through group activities leads to more engaging and efficient learning. However, collaborative learning is not just about dividing students into groups and assigning tasks. Nor does it limit itself to open discussion sessions.

Over the last two years, the pandemic has shown us a new face of education, in which teachers and students learned to incorporate edtech into their daily lives. This major change is here to stay, playing a significant role in how students now learn and interact with one another and with their teachers. Teachers have various tools to promote student collaboration both in the classroom and in a virtual environment. This is essential since many institutions continue implementing edtech even now, when in-person classes are again possible.

Read on to learn why student collaboration should be at the heart of the teaching experience and which edtech tools you can rely on to foster it.

Why student collaboration in the classroom matters

Learning together in the classroom helps improve student engagement. The benefits of student collaboration are backed by social learning theory, which supports the idea that an interactive learning environment yields better student outcomes. It improves students' desire to learn, their involvement in what is happening in the classroom, and their ability to acquire and demonstrate new competencies.

There are many advantages that social learning offers, as it helps students:

- Build and improve relationships within the educational context;
- Acquire new knowledge in a practical and interactive way;
- Improve their communication and analytical skills through meaningful discussions;
- Improve their problem-solving skills by working together to complete time-bound tasks;
- Boost their confidence, as well as their sense of responsibility;
- Acquire future-ready skills that will help them in their careers.

Read more: Classroom collaboration: Learning together

What does student collaboration look like in today's classroom?

There are many ways to promote student collaboration, both in the classroom and in the digital sphere. The goal is to create a free and open learning space where students can improve their social and emotional learning (SEL) skills.

Increased need for student autonomy

Today's students have different expectations compared to previous generations. The pandemic has only heightened their need for autonomy, self-paced learning, and personalized educational experiences. At the same time, students came to appreciate the importance and meaning of collaborative learning. Away from their classmates and teachers, they saw the value of the classroom community from a different perspective and began to appreciate it more.

Virtual learning communities are on the rise

Thanks to video conferencing and other communication tools, students and teachers could still engage in meaningful collaborative activities in the virtual classroom. According to a survey of 1,500 U.S.
college students, most of them considered that the virtual classroom managed to provide a good sense of community. Moreover, almost 25% of students felt that online classes offered them more of a community feeling than in-person classes.

Less need for face-to-face meetings

In-person teaching and learning are still valuable. However, some meetings, such as those needed for team projects, can happen online through video conferencing tools, forums, groups, and other collaboration tools. Students who are part of the same research or debate group can work together even remotely and quickly switch from individual study to group or one-on-one discussions.

Digital communication is a future-ready skill

Students can easily share ideas and resources without meeting face to face. This practical approach is much the same in a remote or hybrid workplace. It's efficient, interactive, and engaging. Therefore, it helps students prepare for a competitive work environment where employees must demonstrate good digital communication skills.

These findings are important because we may never revert to the old pre-pandemic classroom. Technology has shifted how students and teachers engage in educational activities. Also, educational institutions worldwide have tapped into the huge potential of edtech and are ready to continue exploring it.

How LMS groups enable student collaboration for better learning outcomes

At the start of the pandemic, some teachers dreaded online courses, especially because the changes happened overnight. However, many realized that distance learning has its benefits, including the possibility of fostering global collaboration. There are different tools that can help teachers achieve this, and one of the most comprehensive solutions is the learning management system (LMS). One of the main LMS features that foster collaboration is groups.

What are LMS groups?

LMS groups are among the best ways to promote student collaboration in digital spaces. Groups are channels that allow students and teachers to communicate online. Since you can do this from any mobile device, you and your students will be able to communicate through the group at any time, from anywhere.

These are built-in features an LMS can provide. You just have to set them up according to your needs and preferences. Groups offer plenty of configuration options. You can easily add members to a group within seconds and get at-a-glance information about each group's active members and how many students have joined.

What are the different types of LMS groups?

LMS groups are versatile. Teachers can create as many groups as they want for different purposes, such as:

- Study and research
- Clubs
- School or class groups
- Community groups

Groups can be as generic or as restrictive as you want them to be. The most common uses that enable student collaboration are these:

School groups are only available to members of a certain school. If your school is part of a district, this is an essential group to have, since there is a lot of information that only needs to be shared at the school level. It's also a good idea to combine school groups with club, study and research, and even hobby groups, since these are areas that involve school-level participation.
You can create a private group that only a few approved members will have access to. This means the admin or teacher controls member access. A school administrator can create a large group and add all of the institution's students, or just the students in the same year. Of course, such groups can also be used only for teachers, administrators (if there are a few), parents, teaching assistants, etc. Since high parental involvement leads to better student outcomes, it's important to engage them as well.

Teachers can create groups for the entire class, which are restricted to members of a certain class, such as Year 1 Geography. This type of group can be easily accessed by the students enrolled in a class because it usually shows up as a tab within that LMS class. Moreover, each class can have several groups for different purposes (general discussion, projects, hobby clubs, etc.).

Team assignment groups

Teachers can also assess an entire student group through team assignments. This feature is great when you want to organize students into teams and grade their combined efforts. The team assignment is just like any other assignment in the sense that it has instructions, analytics, rubrics, competencies, scoring rules, and more. The aim of the virtual group is to allow students to communicate and collaborate more easily while working on the project from home. It's also easier for them to save and submit a project directly on the platform. They can also add attachments or use the HTML editor to create the project.

After you assign tasks, students work on them in their corresponding groups, where they can communicate and share ideas, responsibilities, and updates. The group's members will be able to use chats, forums, wikis, and resource areas to work on their projects. Students can even create and maintain a team blog in the group. Plus, they can even have a team game in which students gain points and badges by completing the project.

As a teacher, you can supervise all of the groups' activities and monitor their progress and communication. After the assignment is submitted, you can grade each group's members according to their:

- Effort and involvement;
- Participation in the activities;
- Accuracy in completing their assigned tasks;
- Ability to meet deadlines.

The group members get the same grade for their combined efforts.

How to use LMS groups for student collaboration

Groups are built-in LMS features, but usually, platform administrators have to enable them first. Here are some steps to help you set up these groups:

1. Choose the LMS group type

When adding a new group, first select its type. Many LMSs offer a ready-made list of the available group types, some of which we've listed above. Regardless of their type, the groups will have the same functionality. For example, school or district groups with a hobby or club purpose can simply be added from the Groups tab. For class groups, first go to the class and then add a new group. Then, you'll automatically see your newly created group in a corresponding tab or section of the platform. There, you will be able to view all the groups you're a member of and those where you're an administrator.
2. Allow students to create groups

Students themselves can also create groups, but they'll need an administrator's approval, since this is usually a school policy feature. This is completely optional; however, if you want to enable collaboration, it's a good idea to give students the freedom to create their own groups. Of course, administrators and teachers can access these groups as well, so students won't be unsupervised.

3. Add group members

Groups can be visible to everyone or private. If you want to add members to the group, first make sure that all students are enrolled in the platform. Then you only have to go to the members' area, type a name or use a filter, and then include that person in the group.

- You can either add members manually or send them an invitation they'll receive via email. Once they accept the invitation, they join the group.
- Another way to access a new group is through an access code, so all students can join the group using the code.
- Add students automatically. This is a very popular way to add members to a group. For instance, whenever a student joins a class, teachers can set a rule for them to also be added to the class group.

4. Customize groups and choose relevant features

There are also several design options you can use to personalize your groups, such as:

- Changing the pictures;
- Adding different descriptions;
- Changing the group's name or color scheme.

However, these are just surface-level customization options. Groups can have different features such as a calendar, news, a resource area, forums, chat, a blog, and so on. This means students can add study resources and even create a blog to post updates about their projects, share relevant announcements, and generally have everything they need for successful collaboration.

- News feeds: allow everyone in the group to stay up to date with the latest news, events, and announcements about the topic of interest. Group members can even "like" or comment on the items they find in the news feed;
- Resources: students add relevant resources, such as articles and videos related to the group subject;
- Forums: forums are excellent ways for students to collaborate with one another, as they are mostly Q&A based, and teachers can step in and add their own answers;
- Chat: enable chat rooms so students can quickly get in touch with one another and discuss administrative issues, such as project deadlines;
- Group games: the most exciting way to customize a group is to add a group game, which works basically like a class game, but students gain points and badges by participating in the group.

Of course, teachers can decide which features are relevant or not for a particular group, but usually, it's good to give students many ways to get in touch with one another.

5. Track student progress using groups

Groups also come with smart features that let you track student progress. Competencies are one of them. You can add these to a group based on the group's type and purpose.
For example, study groups can have competencies related to the new concepts and skills students should acquire. This works best for team assignments, which we touched upon in the "What are the different types of LMS groups?" section, because team assignments are not just a regular group; they are project-based, so all students are actually acquiring skills while completing this type of assignment.

6. Automatically add students to groups for completing tasks

Automation is great since teachers don't have to manually add students to groups. You can see an example under "Add group members" above, where one of the enrollment rules is to add students to a certain group whenever they enroll in a class. However, teachers can also decide to add students to a certain group after they pass a specific module or complete an assignment. (A purely hypothetical sketch of what such a rule could look like appears at the end of this post.)

Read more: How teachers can make the most of LMS automation

LMS groups improve student collaboration

LMS groups are just one of the LMS features that foster student collaboration and community activities. As a teacher, they allow you to communicate with students at any time and to share resources and updates more easily. However, those who benefit the most from this feature are the students themselves. LMS groups allow them to communicate with classmates and teachers the same way they do with friends: through a user-friendly platform. Students can also access resources, brainstorm ideas, get organized, and even become more motivated to work together through group assignments.

Ioana Solea is a passionate blogger and online foreign language teacher. She loves all things edtech and believes that it is the future of teaching. She writes about various topics, from online teaching tools to pedagogical strategies.
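The automation described in step 6 boils down to simple condition-action rules: when something happens (a student joins a class, completes an assignment, or passes a module), the platform adds that student to a group. The Python sketch below is purely hypothetical; it does not reflect any particular LMS or real API, and every class, function, and group name in it is invented. It only illustrates roughly what such a rule engine does.

# Hypothetical sketch of LMS-style automatic enrollment rules (no real platform or API implied).
from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    completed: set = field(default_factory=set)   # assignments/modules finished
    classes: set = field(default_factory=set)     # classes the student is enrolled in
    groups: set = field(default_factory=set)      # groups the student belongs to

@dataclass
class EnrollmentRule:
    """When `trigger` appears in the student's record, add them to `group`."""
    trigger: str   # e.g. "enrolled:Year 1 Geography" or "completed:Research module"
    group: str

def apply_rules(student: Student, rules: list) -> None:
    events = {f"enrolled:{c}" for c in student.classes} | {
        f"completed:{a}" for a in student.completed
    }
    for rule in rules:
        if rule.trigger in events:
            student.groups.add(rule.group)   # sets ignore duplicates, so rules are idempotent

# Example: joining a class adds the student to the class group; finishing a module
# adds them to a follow-up group.
rules = [
    EnrollmentRule("enrolled:Year 1 Geography", "Year 1 Geography group"),
    EnrollmentRule("completed:Research module", "Advanced research group"),
]
s = Student("Alex", classes={"Year 1 Geography"}, completed={"Research module"})
apply_rules(s, rules)
print(sorted(s.groups))  # ['Advanced research group', 'Year 1 Geography group']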
<urn:uuid:93626c41-3f2a-430b-99cb-07eba43a19a2>
CC-MAIN-2022-33
https://wwwinsurance.top/how-lms-teams-allow-scholar-collaboration-for-higher-studying-outcomes/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571996.63/warc/CC-MAIN-20220814052950-20220814082950-00605.warc.gz
en
0.954942
3,249
2.640625
3
COPYRIGHT PROTECTION IN INDUSTRIAL DESIGNS

Section 2(c) of the Designs Act, 2000 provides that "copyright" means the exclusive right to apply a design to any article in any class in which the design is registered. This means that if a design is registered in respect of several articles, the proprietor of the design has the right to apply the design to all those articles. The term "copyright" as used under the Copyright Act, 1957 differs from the meaning given to it under the Designs Act, 2000. In the present Act, it means protection against the use of a design registered in the name of the owner by any third person without the owner's permission or consent.

COPYRIGHT ON REGISTRATION OF DESIGN AND DURATION OF REGISTRATION: SECTION 11

On registration of a design, the registered proprietor of the design shall, subject to the provisions of this Act, have copyright in the design for ten years from the date of registration. If, before the expiration of the said ten years, an application for the extension of the period of copyright is made to the Controller in the prescribed manner, the Controller shall, on payment of the prescribed fee, extend the period of copyright for a second period of five years from the expiration of the original period of ten years. Thus, the maximum period for which a design can be registered is 15 years, and on expiry of those 15 years the design falls into the public domain.

In the case of Parle Products Private Limited v. Surya Food & Agro Limited (judgment dated 9 September 2008), the Madras High Court held that the purpose of the Designs Act is to protect novel designs devised to be applied to particular articles to be manufactured and marketed commercially. The main concern is with what the finished article is to look like, not with what it does, and the monopoly provided to the proprietor is effected by according not, as in the case of ordinary copyright, a right to prevent direct reproduction of the image registered as the design, but the right, over a much more limited period, to prevent the manufacture and sale of articles of a design not substantially different from the registered design; the stress is therefore upon the visual image conveyed by the manufactured article. It cannot be denied that the 'copyright' in an industrial design is governed by the Designs Act, 2000. If a design is registered under that Act, it is not eligible for protection under the Copyright Act. However, in the case of a design which is capable of being registered under the Designs Act but not so registered, copyright will subsist under the Copyright Act, but it will cease to exist as soon as any article to which the design has been applied has been reproduced more than 50 times by an industrial process by the owner of the copyright or, with his licence, by any other person. The Designs Act, unlike the Copyright Act, gives monopoly protection in the strict sense of the word rather than mere protection against copying, and though there is an area of overlap between the Copyright Act and the Designs Act, the two Acts do not give coterminous protection as regards the subject matter, in the considered opinion of this Court.
LAPSED DESIGNS AND THEIR RESTORATION: SECTIONS 12-14

Section 12 provides that where a design has ceased to have effect by reason of failure to pay the fee for the extension of copyright, the proprietor of such design or his legal representative, or, where the design was held by two or more persons jointly, then, with the leave of the Controller, one or more of them without joining the others, may, within one year from the date on which the design ceased to have effect, make an application for the restoration of the design in the prescribed manner on payment of such fee as may be prescribed. An application under this section shall contain a statement, verified in the prescribed manner, fully setting out the circumstances which led to the failure to pay the prescribed fee, and the Controller may require from the applicant such further evidence as he may think necessary.

Section 13 provides that if, after hearing the applicant (in cases where the applicant so desires or the Controller thinks fit), the Controller is satisfied that the failure to pay the fee for extension of the period of copyright was unintentional and that there has been no undue delay in the making of the application, the Controller shall, upon payment of any unpaid fee for extension of the period of copyright together with the prescribed additional fee, restore the registration of the design. The Controller may, if he thinks fit, as a condition of restoring the design, require that an entry be made in the register of any document or matter which, under the provisions of this Act, has to be entered in the register but which has not been so entered.

Section 14 provides for the rights of the registered proprietor in respect of lapsed designs that have been restored. It states that where the registration of a design is restored, the rights of the registered proprietor shall be subject to such provisions as may be prescribed and to such other provisions as the Controller thinks fit to impose for the protection or compensation of persons who may have begun to avail themselves of, or have taken definite steps by contract or otherwise to avail themselves of, the benefit of applying the design between the date when the registration of the design ceased to have effect and the date of restoration of the registration. No suit or other proceeding shall be commenced in respect of piracy of a registered design, or infringement of the copyright in such design, committed between the date on which the registration of the design ceased to have effect and the date of its restoration.

CANCELLATION OF DESIGNS: SECTION 19

Section 19 provides certain grounds on which any person interested may, at any time after the registration of a design, present a petition to the Controller for the cancellation of the registration, which are as follows:

- Design has been previously registered in India.
- Design has been published in India or in any other country prior to the date of registration.
- Design is not a new or original design.
- Design is not registrable under this Act.
- It is not a design as defined under clause (d) of section 2.

Design has been previously registered in India

In the case of Gopal Glass Works Ltd. v. Assistant Controller of Patents and Designs (2006 (33) PTC 434 Cal), the court held that under the law presently in force in India, specifications, drawings and/or demonstrations in connection with registration of a design do not per se constitute publications which prohibit future registration of that design.
Had publication of design specifications by a registering authority, particularly a registering authority in a foreign country, in connection with registration of a design, in itself amounted to prior publication defeating all future applications in India for registration of that design, prior registration in India would not separately have been made a ground for cancellation of a registered design. Moreover, it is significant that Parliament consciously made publication in a country other than India a ground of cancellation, in addition to publication in India, but expressly restricted the embargo of prior registration to registration in India. Registration in a country other than India has not been made a ground for the cancellation of a registered design.

Design has been published in India or in any other country prior to the date of registration

A design is considered to be published when it is no longer a secret and has been disclosed to the public. Publication can be of two kinds, i.e. publication by a prior user or publication by a prior document. In the case of The Wimco Limited v. Meena Match Industries (ILR 1984 Delhi 121), it was held that the word "published" used in the Act has not been defined in the Act. Publication within the meaning of the Act means the opposite of being kept secret. A design is published if it is no longer a secret; there is publication if the design has been disclosed to the public or the public has been put in possession of the design.

In the case of Rotela Auto Components Pvt. Ltd. v. Jaspal Singh and Ors., 2002 (24) PTC 449 (Del.), the court held that, as far as the present Act is concerned, the legislature in its wisdom, by incorporating sub-section (3) of Section 2 of the Act, has made every ground on which registration of a design may be cancelled available as a ground of defence. The grounds on which cancellation of registration can be sought are enumerated in Section 19 of the Act. It may be noticed that a design is a conception, suggestion or idea of a shape, and not an article. If it has already been anticipated, it is not new or original. If it has been pre-published, it cannot claim protection, as publication before registration defeats the proprietor's rights to protection under the Act.

In the case of Niki Tasha P. Ltd. v. Faridabad Gas Gadgets P. Ltd. (AIR 1985 Del 136), it was held that under the Designs Act, mere registration of a design abroad would not be a ground for cancellation of the design in India unless it is shown that the prior design was published abroad prior to the date of registration. It thus follows that prior registration of a design abroad is not a bar. Publication is essentially a question of fact to be decided on the evidence led in each case. The existence of a design in the publication record/office of a Registrar of designs abroad may or may not, depending on the facts of each case, amount to prior publication.

In the case of ITC Limited v. The Controller of Patents and Designs (judgment delivered on 6 March 2017), the court held that to constitute prior disclosure by publication that destroys the novelty of a registered design, the publication of the design as applied to the same article would have to be in tangible form. Prior publication of a trade catalogue, brochure, book, journal, magazine or newspaper containing photographs or explicit picture illustrations that clearly depict the application of the design on the same article with the same visual effect would be sufficient.
When the novelty of an article is tested against a prior published document, the main factor to be adjudged is the visual effect and appeal of the picture illustration.

Design is not a new or original design

In the case of Prayag Chand Agarwal v. M/s. Mayur Plastics Industries (72 (1998) DLT 1), the court held that, in view of the fact that the broad pattern of the two soles appeared to be the same and the entire sole of both shoes had the same patterns, cuts, rigid roofs and line patterns, and in view of the fact that the law is well settled that when serious disputed questions on grounds such as prior publication, lack of originality or trade variation are raised in a particular case, no injunction should be granted. Taking into consideration that the impugned design was registered in favour of the plaintiff in the year 1995, that the same had been shown to be in use from 1988 onwards, i.e. prior to the registration of the plaintiff's design, and that the plaintiff had prima facie failed to establish that he was the originator of the design, it was difficult to injunct the defendant.

In the case of Glaxo Smithkline Consumer Ltd v. Anchor Health and Beautycare, 2004 (29) PTC 72 (Del.), it was held that the design registered in favour of the plaintiff was a virtual reproduction of the plaintiff's own earlier design, which had already passed into the public domain.

In the case of Reckitt Benckiser India Ltd v. Wyeth Ltd., 2013 (54) PTC 90 (Del.), the Delhi High Court held that newness or originality of the pattern or design, when replicated or applied or fully understood by the eye for being applied to an article, is a sine qua non for availing the benefit of the rights under the Act. This underlying principle is accentuated as well as protected when Sections 4(b) and 19 stipulate and provide for rejection/cancellation of registration of a design which has been published in any country prior in point of time. Though publication in India and publication abroad are listed as separate grounds under Sections 4 and 19 of the Act, they are in a way interlinked and intertwined with the question of whether the design is new or original, as understood and defined in Section 2(d) of the Act.

OVERLAPPING OF REMEDIES UNDER THE TRADE MARKS AND DESIGNS ACTS

A shape can be used either as a trademark or as a design under the Designs Act, 2000. Thus, there prima facie exists a conflict, or overlap, in the remedies of the two Acts. Both Acts make the remedy of passing off available, which renders the provisions of the Designs Act and the Trade Marks Act inconsistent. The duration of trademark protection under the Trade Marks Act, 1999 is unlimited, provided the renewal fee is paid every 10 years, whereas the maximum period of protection under the Designs Act, 2000 is 15 years. Thus, the main question is whether a shape that has been registered as a design under the Designs Act, 2000, but is being used as a trademark, would thereby lose its protection under the Designs Act. A proprietor cannot avail the remedies under both Acts; he has to choose between these two inconsistent remedies, so that the necessary balance can be maintained between them and rights can be adjudicated appropriately. In the case of Micolube India Limited v. Rakesh Kumar Trading, 2013 (55) PTC 1 (Del.),
the court held that the remedy of passing off would continue to be available along with an action for infringement of a registered design, and can be joined with it in order to prevent consumer confusion which may be caused by the use of a trade mark, get-up, trade dress or in any other manner, excepting the shape of the goods which is or was the subject matter of the registration of the design. The remedy of passing off, insofar as the shape of the article is concerned, shall also be available, even during the currency of the design monopoly or after its expiry, to the extent that the claimed feature of shape is not covered within the novelty claim under the design monopoly rights and the said claim to protection satisfies all the necessary ingredients of a trade mark.

In the case of Dabur India Limited v. K.R. Industries, (2008) 10 SCC 595, the court held that the fundamental edifice of a suit for infringement under the Designs Act would be the claim of monopoly based on its registration, which is premised on the uniqueness, newness and originality of the design. The action for passing off, by contrast, is founded on the use of the mark in trade for the sale of goods and/or for offering services; the generation of reputation and goodwill as a consequence of the same; the association of the mark with the goods sold or services offered by the plaintiff; and the misrepresentation sought to be created by the defendant by use of the plaintiff's mark, or a mark which is deceptively similar, so as to portray that the goods sold or the services offered by him originate from or have their source in the plaintiff. It is trite to say that different causes of action cannot be combined in one suit.

In the case of Mohan Lal and Ors. v. Sona Paint & Hardwares and Ors. (AIR 2013 Delhi 143), the court considered whether a passing off action, as available under the Trade Marks Act, can be joined with an action under the Designs Act, 2000, given that the two remedies are mutually inconsistent. While answering in the affirmative on the availability of the passing off remedy, the court held that a composite suit for infringement of a registered design and a passing off action would not lie. The Court could, however, try the suits together if the two suits are filed in close proximity and/or it is of the view that there are aspects which are common to the two suits. The discretion of the court in this matter would necessarily be paramount.
<urn:uuid:a38c03c6-415b-4ed1-bc9e-57a532791ab3>
CC-MAIN-2022-33
http://lawnetra.com/copyright-in-registered-designs-and-cancellation-of-registered-designs-under-the-designs-act-2000-overlapping-of-remedies-under-trademark-and-designs-act/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00405.warc.gz
en
0.958972
3,299
2.59375
3
Too much exposure to the sun can cause an inflammatory reaction in the skin. Among the types of UV rays, UVB is the one most responsible for sunburn. The skin becomes red and very hot to the touch. Sunburn often results from lying in the sun for hours in the hope of getting a tan. Whether you tan or burn depends largely on melanin, the pigment that gives skin its color and protects it, and how much melanin your body produces depends on genetics. That's why some people tan while others burn. In people with less melanin, unprotected sun exposure can leave skin cells red, and besides turning red, the skin can also become very painful and swollen. Sunburn can range from mild to extreme. After a sunburn, the body may try to get rid of the damaged cells by peeling them off the skin; trying to peel the skin manually instead of letting it peel naturally can cause more pain. Sunburn is not caused only by the sun: it can also be caused by artificial sources of ultraviolet light, such as sunlamps. Repeated sunburn can lead to more serious skin problems such as dark spots, rough spots, wrinkled skin, skin cancer, and melanoma. Sunburn can take a few days to fade completely, but sunburn remedies can be used to speed the process.

Things to know about sunburn

There are certain things about sunburn that you need to know. The intensity of the sun and of UV rays varies with many factors, such as the time of day, geographic location, and season. The higher the UV index, the faster the skin burns. Be especially careful when the sun is strong; however, even when the sun is low, UV rays still pose a risk. Protecting yourself from the sun every day of the year helps you avoid sunburn.

Light pink is bad

The typical color of sunburn is red. However, even light pink skin is considered a sign of sunburn. Every mild sign of sunburn is important to take care of with sunburn remedies, because repeated burns can cause more serious skin damage such as cancer or premature aging.

Sunburn is directly connected to a person's complexion, and some skin types are more prone to sunburn than others. People with a fair complexion have the greatest risk of developing sunburn. That does not mean that people with wheatish or dusky complexions are spared; with enough sun exposure they too can burn. Sunburn increases the risk of cancer for everyone. People with dark skin may not notice the redness and may ignore it, which is dangerous because the damage can still lead to cancer. The sun can cause damage at the cellular level. Sunlight varies through the day and across the seasons, and even in cloudy weather, when the sun may not seem to be shining, UV rays can still harm your skin, since about 80 percent of UV rays can penetrate cloud cover. Using sun protection can prevent sunburn.

Symptoms of Sunburn

Sunburn can damage the body in many ways, and people often ignore the early symptoms, allowing the pain and swelling to increase and even leading to major skin damage, including cancer. Dark-skinned people might not notice the skin turning red, which can allow serious damage. Detecting symptoms as early as possible makes it easier to apply sunburn remedies that relieve pain and swelling. A person with sunburn can have several symptoms at once, so it is good to be aware of the following:
- Fluid-filled blisters around the area of sunburn damage; these blisters can crack open, causing pain and swelling;
- Swelling throughout the area damaged by sunburn;
- Skin that feels warm or too hot to touch;
- Skin that turns red or pink;
- Pain, itching, and tenderness in the area;
- In extreme sunburn, nausea, fatigue, fever, and headache.

Along with these, other parts of the body that are exposed to the sun can also burn, such as the lips, scalp, and ear lobes, and even the eyes if they are not covered. Sunburn of the eye can be very painful, as the eye is very sensitive to UV rays. Covered parts of the body can also be sunburned if the fabric has gaps that let UV rays through. Sunburn remedies can be used to relieve the pain in affected areas. The effects of being sunburnt can appear either immediately or after a few days. The top layer of your skin will start peeling off as part of the healing process. The new skin that appears after peeling tends to be very sensitive and may look different from your original complexion. How quickly the affected skin heals and repairs itself depends on the severity of the sunburn.

Risks of a Sunburn

The risk of developing sunburn depends on many factors. People with different skin types have different risks of burning; fair-skinned people have an increased chance of getting sunburn. The intensity of the sun also plays a major part: the time of day, season, weather, and geographical location all influence how quickly skin burns. Even on a cloudy day the risk remains, because UV rays can penetrate cloud cover about 80% of the time. The risk of a sunburn developing into a more serious condition increases if you don't know how to treat it.

Here are some factors that can increase the risk of developing sunburn:

- People who play outdoor sports or work outside have an increased chance of developing sunburn. Playing or working outside means long stretches in the sun, which can cause a burn, and people who must go back out in the sun after recovering from one sunburn can burn repeatedly.
- Skin damage builds up from the first sunburn onward, and each severe sunburn adds to it. The damage can lead to skin cancer, which can be deadly.
- Melanoma is a very deadly disease. The chance of melanoma can increase with even one blistering sunburn in childhood, and the risk grows with accumulated sunburn damage.
- People with fair skin or a genetic predisposition have an increased risk of developing sunburn. UV rays can alter a tumor-suppressing gene, leaving damaged cells less chance to repair themselves before cancer develops.
- People who have had more than five sunburns have roughly double the risk of developing cancer. Using sun protection can prevent repeated sunburn.
- Photosensitizing medications are also known to contribute to sunburn. Taking these medicines can make you more likely to burn.
- Drinking alcohol while out in the sun at high altitude increases the chance of developing sunburn.

Complications of Sunburn

Sunburn damages your skin, and repeated sunburn can cause major problems. Two of the major risks from sunburn are cancer and photoaging. Repeated sunburn damages the skin severely, which may cause it to age faster, making you look older than your actual age.
The abnormal skin aging caused by sunburn damage is called photoaging. Photoaging can cause deep wrinkles, rough and dry skin, and freckles on the shoulders and face. It can also weaken the tissue of the skin, reducing its strength and elasticity.

Precancerous skin lesions

Areas damaged by the sun tend to develop white, tan, brown, and pink scaly, rough patches known as precancerous skin lesions. Among the parts of the body damaged by sunburn, the face, neck, hands, and head are the most common spots for this condition to appear. These patches are also called actinic keratoses or solar keratoses, and they can evolve into cancer.

One of the most serious diseases that can be caused by sunburn is skin cancer, which can be deadly and life-threatening. UV damage harms the DNA of skin cells, and sunburns in childhood increase the chance of developing melanoma.

Home Remedies to Soothe Sunburn

Sunburn doesn't last for many days, but the pain and swelling it brings in that short period can be very uncomfortable. If you want to soothe the pain or cool down hot skin, there are many home remedies you can use. Some of the common ones are:

Honey. Honey is found in almost every household and is mainly used as food, but it can also be used on sunburn. It has properties that help reduce infection and pain, and it also helps speed up healing. Some researchers suggest that honey can work as well as antibiotic creams. Do not use honey on the burns of babies younger than 12 months, as it can cause infant botulism.

Milk. Milk is well known for its beneficial properties. Apply cool milk to the burned area with a piece of cotton; it can reduce the pain, heat, and discomfort.

Oatmeal. Oatmeal can work as an anti-inflammatory agent. Colloidal oatmeal is available in most drugstores and can be mixed with water to relieve inflammation, but it can also be made at home with a blender.

Aloe vera. When it comes to anything skin-related, the first thing that comes to mind is aloe vera; skin products from face washes to whitening creams use it. The inner gel of the aloe plant can ease pain and discomfort and speed up healing. You can apply the gel directly from the leaf, or buy gel from a drugstore.

Witch hazel. Witch hazel can be used as an anti-inflammatory. Apply the leaf and bark extract, or witch hazel water, directly to the skin to get relief from pain and swelling. You can also apply a witch hazel cream for 20 minutes, three to four times a day, for the best results.

Moisturizing the skin. A gentle moisturizing lotion can cool the burn and reduce swelling and pain, but some ointments can make the burn worse by trapping heat. Repeated application can calm the burn and help the skin peel in its own time.

Cornstarch and baking soda. If you want relief from redness and want to speed up healing, you can use baking soda or cornstarch. Do not apply either directly to the skin; mix them with water before using.

Cool it down. Sunburned skin feels warm or hot to the touch. To cool it down, you can jump into a cool pool for just a few seconds. Keep cooling the burned area with ice packs or cool showers, but not for too long, as the pain can be prolonged.
Also, while bathing, avoid harsh, chemical-laden soaps, which can increase the irritation. Home remedies for sunburn can also be used to cool the burn down. Sunburn comes from staying in the sun too long, and it can lead to serious diseases such as cancer. It is very important to treat the burn to reduce the pain whenever any of the symptoms mentioned above appear.
<urn:uuid:af05f588-2b0f-4b6a-b075-a47376cc6850>
CC-MAIN-2022-33
https://www.easyworknet.com/health/sunburn-remedies/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573029.81/warc/CC-MAIN-20220817153027-20220817183027-00005.warc.gz
en
0.958159
2,481
3.203125
3
Contingency planning as a necessity

In today's world, more and more reliance is placed upon the project manager and his or her team to complete a project successfully. Effective planning and execution of the plan are essential in supporting project success. Within the early stages of the project, the team participates in activities that explore risk factors which may negatively impact the project. Of major importance to the project is identifying the risks and determining how the team will address them. This is done via "risk identification, risk quantification, risk response development and risk response control" (PMI, 1996, p. 111), the four components of "project risk management" (Duncan, 1996, p. 111). "Contingency planning involves defining action steps to be taken if an identified risk event should occur" (PMI, 1996, p. 120), and this paper views contingency planning as a necessity in today's project world.

One of the first tasks the project manager and the project team participate in is the identification of the risks that may impact the project. The initial step is to identify events that pose a threat or risk to the project's success. Basically, there are two types of risks that need to be identified and evaluated: internal and external risks. Internal risks are those "things that the project team can control or influence" (PMI, 1996, p. 111), while external risks are those "things that are beyond the control of the project team" (PMI, 1996, p. 111). Of the two types, the internal risks are often easier to identify because external risks are not as obvious.

One of the projects that this author has been involved with is the processing of a large volume of data (in excess of 250,000 transactions) within a given time frame through a data processing architecture. The architecture contained many application systems and spanned multiple processing platforms. The project team identified an internal risk: there might not be enough disk space to accommodate the volume of updates scheduled for processing. If this risk occurred, processing would stop until a solution to the problem was determined. The processing time lost would impact the project, since the process might then need more time than the time frame allowed, and the project would be completed later than scheduled and advertised.

An external risk cited during the risk identification process was the possibility of a winter snow/ice storm during the weekend of the scheduled processing. Weekends are chosen for volume processing since client usage of the data systems is negligible. The project used in this example is business critical, and the time frame is mandated by government regulation. Harsh financial penalties, in excess of $1,000,000, may be imposed if processing does not occur within the mandated time frame. In addition, processing occurs during the winter, when snow/ice storms are possible. To support the processing, application support personnel are required to monitor application processing progress for the duration of the process from an "on site" central work location control room. The project manager recognizes that certain application processes execute longer than others and that fatigue is a factor for those team members monitoring long-running processes. To address the fatigue factor, processing is monitored in shifts, and each shift is no longer than 12 hours.
The external risk is that a snow/ice storm may prevent team members from traveling from home to work to perform shift changes because of hazardous driving conditions. The "on duty" personnel would not be relieved, and poor or wrong decisions might result due to fatigue.

The execution of the risk assessment process is a team exercise with the project manager present as a guide/moderator. The submission of risks from the team needs to take place in an open forum, and the team must be able to freely discuss the merits of each risk being identified. In addition, the team needs to agree on whether an identified risk is really a risk to the project or not. An illustration of this occurred when a team member identified an additional external risk: the data center might "crash," becoming inoperable during processing and preventing the project from completing. The team members bypassed steps in the process and began to discuss how they would handle this possibility. The discussion centered on how they would acquire access to an alternate data center; eventually, one team member pointed out that this is a "risk" all team members face in their daily work outside this particular project. The team then agreed that this was not a risk to the project and determined that it should not be included in the list of risks to be evaluated. This illustration speaks not only to the strength of the team and the freedom they need to explore possibilities, but also to the team taking responsibility for determining which risks are reasonable.

Once the team has identified the risks, the next step is to quantify them. The purpose of risk quantification is to determine which risks would be most detrimental to the project should they occur; the step after quantification entails addressing these risks. There are many procedures that enable a team to perform risk quantification. Regardless of the process, a basic concept involves both the project manager and the team as they determine the probability of the risk occurring. Some procedures attach a numerical probability (percentage) to each risk. Another procedure may rate each risk using a scale of high, medium, or low as a form of evaluation. In some cases, subject matter experts may assist the team by helping to assess the likelihood of the risk occurring. Another dimension that works in conjunction with risk probability is risk severity. Risk severity refers to the impact that the risk would have on the project if it were to occur. The severity/impact can be quantified by the words high, medium, or low. (The method of using risk severity is documented in Paul S. Royer's article of March 2000, entitled "Risk Management: The undiscovered dimension of project management.") In this case, a process that quantifies risk for both probability and severity/impact was exercised. The team evaluates each risk for risk probability as well as risk severity/impact. At this point, the team must be cautious while quantifying in order not to misevaluate a risk probability when reviewing severity/impact, and vice versa. All risks will be evaluated for risk response. However, those risks that earn "high-high, high-medium, high-low, medium-low, and low-low severity/impact-probability combinations will be evaluated for mitigation and contingency strategies" (Royer, 2000). Those risks that earn "medium-high, medium-medium, and low-medium severity/impact-probability combinations will be evaluated for mitigation strategy" (Royer, 2000).
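Taken together with the low-high combination described in the next paragraph, these severity/impact-probability ratings amount to a simple lookup rule. The following sketch is purely illustrative; it appears in neither this paper nor Royer's article, and simply encodes in Python the combinations listed here:

```python
# Illustrative only: the combinations below come from the paper's summary of
# Royer (2000); the function itself is a hypothetical sketch, not part of either source.

RESPONSE_BY_COMBO = {
    # (severity/impact, probability) -> planned response
    ("high", "high"):     "mitigation and contingency",
    ("high", "medium"):   "mitigation and contingency",
    ("high", "low"):      "mitigation and contingency",
    ("medium", "low"):    "mitigation and contingency",
    ("low", "low"):       "mitigation and contingency",
    ("medium", "high"):   "mitigation",
    ("medium", "medium"): "mitigation",
    ("low", "medium"):    "mitigation",
    ("low", "high"):      "project assumption",
}

def plan_response(severity: str, probability: str) -> str:
    """Return the planned response strategy for a rated risk."""
    return RESPONSE_BY_COMBO[(severity.lower(), probability.lower())]

# Example: both risks in this case study were rated high severity and high probability.
assert plan_response("high", "high") == "mitigation and contingency"
```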
Finally, those risks that earn a low-high severity/impact-probability combination will be "treated as project assumption" (Royer, 2000). During the project, the internal and external risks described above were both rated as having a high severity/impact and a high probability factor. Using Royer's process, both risks warranted mitigation and contingency strategies.

Risk Response Development

There are many different strategies within response development. Of the strategies available, Royer applies mitigation and contingency. While both strategies are planned, each addresses risk at a different time in the project. Mitigation addresses risk before it manifests and attempts to reduce its impact before it occurs. Contingency addresses the risk at the time the event occurs and attempts to reduce its negative effects.

The team mitigated the internal risk of not having enough disk space to accommodate the number of updates in the following way: the team planned to receive estimates of the number of update transactions that would arrive for processing. They further planned to use the estimates to calculate how much additional disk space was needed to accommodate the processing. Knowing how much space was needed, members of the team planned to meet with the system Database Administrators (DBAs) and request that they validate that their databases would successfully process the number of updates. In addition, the team members were to request that the DBAs review all of the databases and take the necessary steps to ensure that they were in an operational state supporting efficient updating. The purpose was to minimize processing time in order to complete processing on schedule. Along with planning the mitigation strategy, the team also developed a contingency plan: the team requested that the DBAs engage the data center management team and ask them to reserve extra disk space and disk drives, to be used in support of the process if necessary.

The team reviewed the external risk and developed a contingency plan. In this case, no mitigation strategy could be planned because the weather was clearly out of the team's control! The plan developed called for several actions to be taken. The project manager would review the weather reports starting 10 days prior to the beginning of process execution. If an ice/snow storm were predicted for the weekend, the team was prepared to spend those days and nights at the work-location site. In accordance with the plan, the team members were to bring sleeping bags to work so that they would be able to sleep during their "off shift" hours. An investigation would be necessary to identify available offices where the team members could sleep during that weekend. The project manager was responsible for coordinating the team's meals and snacks by working with the location's cafeteria manager. The work location has showers that are used during the week by employees who exercise during their lunch breaks, and further arrangements were to be made with the location's landlord for team access to those showers during the weekend. None of these plans would be put into effect unless the ice/snow storm was predicted. It was also planned that initial contact with the cafeteria supervisor and the location's landlord would take place prior to the monitoring of the weather report.
The purpose was to inform the landlord and cafeteria manager of the upcoming processing and the actions that would need to be taken if the storm came to fruition. The contingency plan to stay at work over the weekend, in the event the storm occurred, may be viewed as a creative solution to the problem. While it was a team decision, acceptance of and commitment to the plan by all members was necessary for a plan of this nature to be successfully executed. Another important factor is that this project occurred at a time when telecommuting was not a viable option. If the same problem presented itself today, more than likely one of the contingency plans would be to work from home and report project status via e-mail. Although risk assessment is executed in the initial stages of the project, further risk assessments can and should be done as the project progresses to ensure that all risks are addressed.

Risk Response Control

Risk response control "involves executing the risk management plan in order to respond to risk events over the course of the project." The mitigation plan developed to address the possibility of not having enough disk space for all of the update transactions was implemented successfully, and as a result there was no need to employ the contingency plan as proposed. The estimates obtained were accurate, and the steps completed by the DBAs to ensure efficient updating supported timely processing completion. The contingency plan to address the event of an ice/snow storm did not have to be executed since there was no storm. Fortunately, the weather during the weekend of processing was clear, and shift changes occurred without incident. The project completed on time and within the mandated government window.

One might conclude that since neither contingency plan presented was implemented, there may be no need to develop them; however, this is not true. A contingency plan is executed when the risk presents itself. The purpose of the plan is to lessen the damage of the risk when it occurs. Without the plan in place, the full impact of the risk could greatly affect the project. The contingency plan is the last line of defense against the risk. For a project manager, it is better to have the contingency ready for implementation than to have to develop one as the risk is taking its toll. The contingency is another instrument in the arsenal of tools that a project manager carries to support project success. Due to the timing of when a contingency needs to be implemented, contingency planning is a necessity in today's project management world.

References

PMI Standards Committee. (1996). A guide to the project management body of knowledge. Upper Darby, PA: Project Management Institute.

Risk Management. (1993). Arlington, VA: Educational Services Institute.

Royer, Paul S. (2000). Risk management: The undiscovered dimension of project management. Project Management Journal (March), 6–13.

Proceedings of the Project Management Institute Annual Seminars & Symposium, September 7–16, 2000, Houston, Texas, USA
<urn:uuid:21d44455-1e44-4565-9601-ca3148f8e05b>
CC-MAIN-2022-33
https://www.pmi.org/learning/library/contingency-planning-necessity-risk-assessment-8898
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571246.56/warc/CC-MAIN-20220811073058-20220811103058-00005.warc.gz
en
0.963253
2,668
2.53125
3
Totally terrific trees. Trees have three main parts: leaves, bark, and roots. Each has a purpose that helps the tree survive. Today we are going to look at the bark.

The bark of a tree is super important. Bark is the "skin" of a tree: it protects the inner part of the tree from severe weather, harsh sun, and bumps from animals, helps retain water, and even protects against forest fires. Bark is also a home for many animals. Not only do insects like bark beetles live there, but bats like to roost under the bark of shagbark hickories.

Grab that nature journal you created last week and take it outside. You will need your journal and a crayon. Find a tree. Place a blank sheet of your journal up against the bark. Take your crayon and rub away. Don't press too hard or you will tear your paper. Do this with many different trees. There are trees with smooth, scaly, bumpy, or craggy bark. Try to identify the tree and label your bark rubbing. Extra challenge: take your journal outside and try to draw different types of bark.

This tree is found throughout Pennsylvania. The bark is smooth, the leaves turn bright yellow in the fall, and the nuts feed lots of animals. Answer: This is a beech tree from Crow's Nest Preserve. The bark scars easily and you may see drawings or initials in the bark (please don't do that – they remain because the tree is unable to heal itself, and carving opens the tree up to infection).

Bird nests can be found in many different shapes and sizes. They could be an intricate hanging basket like an oriole's, or a scrape in the ground like a killdeer's. We are going to look at nests in trees. A bird nest is a place for a bird to lay eggs and take care of them until the chicks grow wing feathers large enough for flight. When a baby bird grows those feathers, we say it has fledged. Once the babies fledge, the nest is no longer needed. Some bird nests are holes in trees; some are made with grasses, some with spider webs, and some even with trash. Spring is the time for birds to start making nests. Here are five backyard birds every child under five should know: https://www.everythingbirds.com/articles/birds-every-kid-should-learn-before-five/

Bald Eagles construct the largest nest of all birds in North America. It can weigh more than 2 tons (4,000 pounds!), measure 8 feet wide, and reach 10-15 feet deep.

Go on a nature walk around your backyard or in your neighborhood. Don't forget your journal! You can write down all the nests you see. Some may be leftovers from last year, some may be new. See if any bird is flying in and out of the area of the nest. Collect things you think a bird might use for a nest. You might even find something around your house. Once you collect everything, pretend to be a bird! Use your thumb and pointer finger as a beak and try to weave the items you collected into a bird nest. If you want to get super dirty, use mud. (Robins use mud to help the nest keep its shape.)

Older kids and adults, you can be scientists too. Join Nest Watch. For more than a decade, people like you have helped scientists by collecting valuable data on the successes and failures of nesting birds.

If you visit our nature preserves you might wonder what is inside the wooden boxes you see. Today's nature ID challenge has the answer! Hint: this type of nest belongs to a bird that was declining in numbers because of habitat loss. They typically make their nests inside hollow logs.
You can't miss these birds: they are bright blue on top and rusty on the bottom. Decades ago, Natural Lands started monitoring bluebird boxes; currently we monitor more than 420 of them!

Today we'll learn about the flowers and seeds that come from trees. Spring is upon us and trees are in bloom. You may have noticed on your walks bright yellow forsythia, big showy magnolias, dainty redbuds, beautiful pink cherries, or puffballs of pear flowers. Soon we will get the show of lilacs, dogwoods, crapemyrtle, and all the beautiful blooms of spring. Most trees go through a series of eight growth stages that extend from their dormant state over the winter to the time when they grow fruit. As winter ends, buds develop on the trees. Once these buds burst, clusters of green start to open, eventually becoming flower blossoms. Flowers get pollinated by wind, insects, bats, and in many other ways. Then the flowers produce a fruit that contains a seed.

Do you know what this tree is? Here's a hint: In 1912, Mayor Yukio Ozaki of Tokyo gifted 3,000 of these trees to the nation's capital. Answer: cherry blossom. This cherry tree is growing at Stoneleigh: a natural garden. Cherry blossom season in the U.S. runs from mid-March to mid-April. Many people say spring arrives when the cherry trees start blooming.

You can use nature to make art. Andy Goldsworthy is a world-renowned artist who uses found objects in nature to create art. The art is temporary: as the wind blows and the rain falls, his beautiful creations disperse. Get inspired by his artwork and create your own. Don't forget to take your journal and draw out what you would like to create. Take a picture of it when done and post it to our Facebook page.

What does a leaf do? Go outside and thank a leaf. A leaf takes in energy from the sun and carbon dioxide from the air and turns them into sugars (food for the plant) and oxygen. Leaves basically take in the air we breathe out and make air for us to breathe in. Leaves can be either deciduous (falling off during the winter) or evergreen (staying on the tree during the winter). They can be broad and flat, or look like needles. Use these guides to learn more about leaves and identify trees:

Leaf prints with crayon: With your adult's permission, collect some leaves from your yard (or a preserve) and create a leaf print. It can be as simple as laying the leaf on a table, bumpy side up. Place a white sheet of paper over the leaf and rub the paper with a crayon.

Leaf prints with paint: Spread out newspaper and place a leaf down with the bumpy side up. Paint the leaf, then carefully pick it up and place it on a clean spot on the newspaper. Place a clean piece of paper over the painted leaf and gently rub with your finger to make a painted print.

Take a walk around your neighborhood, or just look out your window, and try to identify any trees you see. In your journal you can make some notes. Use your notes and drawings to do research online to identify the trees you saw.

Can you guess what this critter is? Here's a fun fact: this bug curls a leaf around itself for protection, using silk. When the leaf is crushed it has a nice smell. Answer: This spicebush caterpillar lives at Hildacy Preserve. The back end of the spicebush swallowtail caterpillar has two large eyespots that make it look like a snake to scare away predators. The spicebush swallowtail and the eastern tiger swallowtail both lay their eggs on the spicebush.
More than 20 species of birds, including both gamebirds and songbirds such as the Ring-necked Pheasant, Bobwhite, and Ruffed Grouse, have been known to feed on spicebush.

You can be a dendrologist. The study of trees is called dendrology; in Greek, 'dendron' means tree and 'ology' means the study of. Adopt a tree in your yard, neighborhood, or park and observe it all year long. Here are things you can look for each season:

Become a citizen scientist. Invasive diseases and pests threaten the health of America's forests. Scientists are working to understand what allows some individual trees to survive, but they need to find healthy, resilient trees in the forest to study. That's where concerned foresters, landowners, and citizens (you!) can help. Tag trees you find in your community, on your property, or out in the wild using TreeSnap!

Lenape native peoples used this tree as a fever reducer. Answer: This dogwood grows at Binky Lee Preserve. Dogwood flowers are not "true" flowers: the white petals are actually leaves that surround miniature yellowish-green flower heads. The fruit of the dogwood is known as a stone fruit. It is oval, berry-like, brightly colored, and filled with one or two seeds. The fruit and seeds of the dogwood are an important source of food for wildlife.

Here are two fun games you can play with trees.

40/40 save all: Using a tree as home base, one person is 'in' and counts to 40 while the other players run and hide. As soon as the person who is 'in' spots someone hiding, they run back to the tree and shout out the name of the hider and where they are hiding, and that person is now 'out'. The only way a hider can save themselves is to get to the tree first and shout '40-40 save all' – then everyone is safe and the original person remains 'in'. If all the hiders are found, then the first person who was 'out' is the next person to be 'in' at the tree.

This game should be played in an area where there are plenty of trees – during a nature walk would be perfect. Players get into pairs and one person is blindfolded. They are then led to a specific tree by their partner. There, the blindfolded 'tree-hugger' feels the tree and has to remember its shape, texture, and smell. Then they are taken back to the starting point, their blindfold is removed, and they have to try to identify their tree. Note: make sure there are no fuzzy vines up the tree; that is poison ivy.
<urn:uuid:6b516000-f214-494f-8c6c-c90af4c57060>
CC-MAIN-2022-33
https://natlands.org/tremendous-trees/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00205.warc.gz
en
0.947575
2,315
3.6875
4
A new study used DNA to determine the genetic diversity of the English bulldog breed. Selection for desirable physical characteristics has resulted in artificial genetic bottlenecks, which narrow allele diversity. This can be seen in large areas of the genome that are nearly identical within a breed but distinct from other breeds, and it is particularly the case for genes that regulate the body's immune system.

Tierheim English Bulldog

The Bulldog is a breed of dog belonging to the mastiff family. The name is often used interchangeably with English Bulldog or British Bulldog. The breed is medium-sized but massive, with a wrinkled, pushed-in nose. The Bulldog is not just attractive; it is also a breed with unique characteristics, including a strong, robust build, a powerful physique, and an unwavering following in England. The English Bulldog is a large-set, low-slung bulldog with broad shoulders and a strongly set body. This build was crucial for survival and reproduction, as it allowed the Bulldog to stay close to the ground. The Bulldog's large head is another characteristic; it is about the same size as the dog's shoulders, and a large head can accommodate strong muscles and a large jaw. The tail of this breed should be short; however, it should always extend downwards from its base.

Another characteristic of the English Bulldog breed is a tendency to chew on things. This is normal behavior and shouldn't be considered a problem, but you must also teach your Bulldog not to chew on your personal belongings; this is essential to their health and well-being. Another characteristic of the English Bulldog is food-guarding. This behavior can be alleviated by teaching the dog to accept food from other people.

Tierheim English Bulldog

The English Bulldog is compact and muscular, with a strong, athletic appearance. The male English Bulldog weighs around 28 kilograms and stands 30 to 36 centimeters tall. The head of the English Bulldog has an athletic, muscular look, and its muzzle is slightly upturned. It has strong legs and stands squarely. Females of this breed are smaller than males and do not have the same muscular build. The English Bulldog's coat is short and smooth.

The English Bulldog's distinctive appeal makes it a favored family pet. They make great characters in stories and aren't demanding. Although their appearance may not reflect their nature, they are welcoming and easy to connect with. Although they are affectionate and friendly, the English Bulldog needs exercise to stay healthy and happy; a walk in cool weather is ideal. The English Bulldog is a strong-willed, loving dog with a stubborn streak. Although he is prone to resenting orders and can dig in his heels at the slightest provocation, the English Bulldog responds well to consistent, gentle, and loving training. In addition to its temperament and strength, the English Bulldog is a great family pet and a great addition to any household. Be patient and take time to train your English Bulldog.

The conservation status of the English Bulldog is a hotly debated topic. Some breeders believe that the breed is genetically stable, but others disagree. Some believe that breeding for genetic markers will lead to healthier bulldogs, while others are concerned that removing less desirable candidates from the gene pool could leave the breed even more inbred.
Nevertheless, this argument is based on outdated research, and there is no consensus among scientists on the subject.

The English bulldog is susceptible to many diseases. Bulldog hips are prone to dysplasia, an abnormal development of the hip socket. Eye issues include "cherry eye," a protrusion of the third eyelid. Bulldogs also suffer from heat-related problems: skin folds that are sensitive to heat can develop infections. In addition, these dogs are extremely susceptible to drowning.

Tierheim English Bulldog

The English bulldog's conservation status is Least Concern, meaning the breed is not in danger of extinction. The breed is widely bred across the world, and there are no threats to its survival. However, as with all other breeds, it is important to take proper care of these furry companions. A healthy English bulldog is a happy one! The breed has a charming and playful personality, and its unique facial characteristics are sure to draw attention when you bring one home.

Common health problems

The English Bulldog can suffer from numerous health issues. Although the majority of these issues can be treated, a few may be more severe and require the attention of a veterinarian. Interdigital cysts can occur between a Bulldog's toes; while they may be painful, these cysts can be treated by a veterinarian. The English Bulldog also has respiratory issues, protrusions of the eyelids, allergic reactions, and other health problems. These are the most frequent health issues of this breed, but you may well have to address others as well. Upper respiratory tract disorders are common in the breed, affecting 10.5% of dogs. The most common respiratory problem is brachycephalic obstructive airway syndrome, affecting 3.5% of dogs. Affected Bulldogs may experience chronic difficulty breathing, eating, or sleeping. The condition can cause a dog to stop breathing for short periods, which affects its health and quality of life.

The most frequent health issues of the English bulldog extend from conception to adulthood. The breed has a high rate of puppy mortality, and the majority of Bulldog puppies are born via C-section. Chondrodysplasia accounts for the breed's unusual build, which makes it more prone to joint and bone issues. Excess weight is a risk factor for degenerative spinal disease. The breed's heavy shoulders and build make it difficult to give birth, and puppies may not be well presented when they are born.

Tierheim English Bulldog

You've come to the right place if you have ever wanted an English Bulldog. Bulldogs are a British breed and can be referred to as British Bulldogs or English Bulldogs. They are medium-sized, heavy dogs with a wrinkled, pushed-in nose and a big body. You might even have seen them in movies! You're sure to find the one that is right for you, regardless of the name.

There are many different colors of English Bulldogs, and not all of them are recognized by kennel clubs. The American Kennel Club recognizes four major color varieties of the breed. You're unlikely to be able to register your English Bulldog or take him to a show if he isn't a standard-colored English Bulldog. You might prefer a piebald English Bulldog to an entirely black or solid-colored dog; however, this kind of coloring isn't considered standard. It is important to understand the history of the Bulldog breed before trying to identify one.
The breed was originally developed as a working dog, and its strength and massive bite were essential to the tasks it was given. It was used for bull-baiting, in which the dog would grip the bull by the nose and hold on. Today the English Bulldog is simply a popular companion. Keep in mind that these dogs are not closely related to pit bull breeds.

Tierheim English Bulldog

The proper English Bulldog size can be difficult to judge. The dog is considered a medium-size breed, and a well-built Bulldog has short, wide-set front legs and longer, more narrowly set hind legs. The dog should move with the breed's characteristic rolling gait, the body swaying slightly from side to side at the shoulder and loin. The BCA's Education Committee publishes guidelines you can consult to determine whether your Bulldog is the right size.

The English Bulldog is a medium-sized dog with a muscular physique. An adult typically weighs about 50 pounds, with males slightly heavier than females, and heavier Bulldogs are more likely to have health issues. To choose the correct size, check the breeder's website and speak with your veterinarian.

If you want a Blue English Bulldog, check the bloodlines of both parents. The dilute color can be carried without being expressed, so the parents do not themselves have to be blue to produce blue puppies. However, mixing two blue English Bulldogs can result in double-merle pups, and puppies born from such a litter are at greater risk of developing genetic disorders. A typical litter contains three to four Blue English Bulldog puppies.

Apart from the usual health issues, your English Bulldog may also be susceptible to skin diseases such as pyoderma, an infection that causes itchiness and redness and can leave your pet more prone to other skin conditions and allergies. Treating the dog with a topical cream containing benzoyl peroxide can help keep these symptoms at bay.

Tierheim English Bulldog

Excess weight in Olde English Bulldogges can cause serious health problems. Overweight dogs are at risk of back pain, digestive disorders and joint issues. Limit treats to training sessions, be careful not to overfeed, and make sure your dog always has plenty of clean, fresh water. Remember, too, that dogs need daily exercise, so keep these basic maintenance requirements in mind.

Another health issue in this breed is brachycephalic syndrome, which affects dogs with elongated soft palates. The condition narrows and obstructs the airway, which can cause breathing problems and coughing; in severe cases a Bulldog may even collapse. Most cases can be managed at home or with medication, but more serious cases require veterinary care.

Tierheim English Bulldog
<urn:uuid:e549dc7f-e2e0-4839-9746-803773a5be07>
CC-MAIN-2022-33
https://thebulldogranch.com/tierheim-english-bulldog/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571284.54/warc/CC-MAIN-20220811103305-20220811133305-00205.warc.gz
en
0.959012
2,154
2.90625
3
Shortly after President Donald Trump entered the White House, he pledged a large investment in America's infrastructure during a nationally televised address to Congress. "To launch our national rebuilding, I will be asking the Congress to approve legislation that produces a $1 trillion investment in the infrastructure of the United States—financed through both public and private capital—creating millions of new jobs," the President said during his February speech.

The American Society of Civil Engineers gave American infrastructure a D+ in its 2017 report card. "Deteriorating infrastructure is impeding our ability to compete in the thriving global economy, and improvements are necessary to ensure our country is built for the future," the report said. "While we have made some progress, reversing the trajectory after decades of underinvestment in our infrastructure requires transformative action from Congress, states, infrastructure owners, and the American people." The report also said current road conditions cost the country $160 billion in time and money every year, while a report from the Federal Highway Administration puts the country's transit system repair backlog at $90 billion.

Democratic lawmakers have long supported federal dollars to help improve America's roads, bridges, dams, airports and tunnels. Trump's plan therefore offers a rare opportunity for bipartisan support in an increasingly fractured political environment. While a final proposal has yet to be agreed upon, Trump has called for a public-private partnership, but that model seems more viable in urban areas where companies can recoup money by levying tolls. In rural areas, where fewer people live, projects may not attract enough investment because of revenue concerns.

In early May, President Trump signaled that he is open to raising the federal gas tax to help pay for infrastructure improvements. "It's something I would certainly consider," the President told Bloomberg News in an interview. His statement underscores an issue that has plagued politicians for years. The last time Congress increased the nationwide gas tax was in 1993, when lawmakers approved an 18.4 cent per gallon tax on gasoline and a 24.4 cent per gallon tax on diesel fuel. However, that tax has not been adjusted for inflation, and the Highway Trust Fund has not kept pace. Since 2008, the federal government has injected $143 billion into the fund, while the tax has generated roughly $34 billion a year. The federal government usually spends about $50 billion per year on transportation projects, leaving a $16 billion annual shortfall. The Congressional Budget Office has said the fund will become insolvent by 2021 without additional funding. According to the Joint Committee on Taxation, increasing the federal tax to 35 cents per gallon would generate an additional $473.6 billion over a period of 10 years.

The Chamber of Commerce, the AAA auto club and the American Trucking Association support a new gasoline tax. After the 2014 midterm elections, the groups sent a joint letter to the 114th Congress supporting a bill to increase the gas tax.
"While no one wants to pay more, we urge you to support an increase to the federal fuels user fee, provided the funds are used to ease congestion and improve safety, because it is the most cost efficient and straightforward way to provide a steady revenue stream to the Highway Trust Fund."

However, critics believe a gas tax hike will hurt working families by levying fees on middle-class commuters, many of whom supported Trump. Low gas prices may provide cover for a tax increase, but a sharp spike in prices could change consumer sentiment. In an interview with CNBC, Chevron CEO John Watson pushed back against a tax on gas. "I think a good first step would be to evaluate where existing taxes are going," he said. "In other words, we have road taxes today. How are they being used? Are they being put to good use in rebuilding our infrastructure?"

In light of the debate, several states have increased their own gas taxes to shore up roads, bridges, tunnels and other state-operated transportation systems. Since 2015, sixteen states and the District of Columbia have enacted legislation to increase gas taxes that support infrastructure programs. According to the American Road and Transportation Builders Association, voters approved 269 of the 361 transportation funding measures that appeared on township, city, county or state ballots in 2016. Many of these initiatives were approved in both Democratic- and Republican-dominated regions of the country. Furthermore, Louisiana, Minnesota, Oklahoma and Oregon have transportation funding measures pending in their respective legislatures.

States Enacting Gas Taxes in 2017

In California, the State Senate approved a 10 cent per gallon tax hike in April as part of a transportation bill estimated to raise $5.2 billion a year to repair state roads and highways. The legislation, which was backed by Governor Jerry Brown, increases the per gallon tax rate from 18 cents to 30 cents. The law also mandates a $100 annual fee for electric cars, as well as annual fees ranging from $25 for cars valued at or under $5,000 to $175 for a car worth $60,000 or more. About $34 billion of the first $52 billion would go to repairing roads, bridges, highways and culverts, with most of the money split 50-50 between state and local projects.

Michigan drivers saw a 7.3 cent tax increase in fuel prices at the beginning of the year, raising a 19 cent per gallon tax to 26.3 cents per gallon, while the diesel tax rose 11.3 cents, from 15 to 26.3 cents per gallon. Lawmakers also approved a 20 percent increase in vehicle registration fees, while gas-electric hybrid and electric vehicles carry added fees of $47 and $135, respectively. The hike is the state's first gas tax increase in 20 years and aims to fund crumbling bridges and roads with an additional $2.3 billion over the next four years. The plan allocates 61 percent of the funding to counties, cities and villages, while the rest goes to state projects.

In Indiana, Governor Eric Holcomb signed a $1.2 billion highway improvement plan in April that increases the Hoosier gas tax from 18 cents to 28 cents per gallon in July. Registration and licensing fees will also increase by $15, and there is a $50 fee on hybrids and a $150 fee on electric cars. In addition, Holcomb intends to draft a plan that adds tolls for certain interstate projects by the end of 2018.

The Montana house assembly approved a bill this year that will levy a 6 cent per gallon gas tax increase phased in over six years.
More than four cents of the increase takes effect on July 1, while the remainder is implemented in 0.5 cent increments between 2019 and 2022. The Montana tax is expected to generate $28 million in 2018, and more in future years, to help repair state roads and bridges as well as build new ones.

On May 10 the South Carolina House and Senate overrode a veto from Governor Henry McMaster to approve an infrastructure bill that increases the state's gas tax. The legislation enacts a 12 cent per gallon increase phased in over six years, with a two cent increase occurring in July. The tax will eventually reach 28.75 cents per gallon while generating $600 million for infrastructure projects throughout the state.

In Tennessee, the House and Senate approved a bill sponsored by Governor Bill Haslam that would generate $350 million for the state's highway fund and boost road revenues for cities and counties. The gas tax will rise by 6 cents per gallon and the diesel tax by 10 cents on July 1. The bill also includes several fee increases, including a $5 car registration increase and a $100 fee on electric car users.

States Enacting Gas Taxes in 2016

In October 2016, Governor Chris Christie signed a bill that increased New Jersey's gasoline tax by 23 cents per gallon. The bill marked the first tax hike of Christie's tenure, and the first tax increase on gas in the state since 1988. The law takes the second-lowest gas tax rate in the country from 14.5 cents per gallon to 37.5 cents, the seventh highest, and raises the diesel tax by 15.9 cents per gallon, to more than 27 cents per gallon. The bill will generate $1.23 billion annually to help finance an eight-year, $16 billion transportation program. The legislation came after the state's Transportation Trust Fund, which helps pay for Garden State roads, bridges and railways, ran out of money for new projects over the summer. After Christie signed the bill in October, voters approved a November referendum amending the state's constitution to dedicate the tax revenue to transportation projects; the amendment prevents lawmakers from reallocating the money to other purposes.

States Enacting Gas Taxes in 2015

The Georgia Legislature enacted the Transportation Funding Act of 2015, raising the excise tax on gasoline by 7.5 cents per gallon and folding in the four percent state sales tax on fuel, for a rate of 26 cents per gallon. These rates are then adjusted each year to the Consumer Price Index. The money accrued from the tax will be allocated to future state transportation projects. Additionally, the state will collect a $5 per night hotel fee, as well as fees for heavy trucks and a $200 registration fee for electric cars. The law also eliminates a $5,000 tax credit for anyone who purchases an electric car. House Bill 170 also allows counties and municipalities to levy a 1 percent use tax on all motor fuels. The bill aims to collect $900 million a year to help fund transportation projects throughout the Peach State.

Idaho's gas tax increased 7 cents after state lawmakers approved a funding bill in April 2015 to raise money for road repairs. The house bill increased the gas tax from 25 to 32 cents, to raise more than $95 million a year. The accrued revenue is split between local governments and state highway departments (60/40). Idaho also stipulates a $145 registration fee for an electric car and a $75 fee for hybrid vehicles. However, a house bill introduced in 2017 would eliminate those fees if approved by the Governor.
Governor Terry Branstad signed a bill in February 2015 that increased Iowa's gas tax from 20 cents to 30 cents per gallon. The bipartisan initiative provides $215 million in annual funds for city, county and state roads. The increase received support from the Iowa Farm Bureau, the Chamber of Commerce, the trucking industry, the Iowa State Association of Counties and the Iowa County Engineers Association. It was the state's first fuel tax increase since 1989.

Kentucky lawmakers had approved legislation pegging the gas tax to the average wholesale price of gas over a three-month span. However, a $1.46 drop in gas prices in 2015 created a significant shortfall in the Commonwealth's transportation budget: the lower gas taxes helped create a $125 million gap for transportation projects in local municipalities and townships, as well as state highways. To ensure a steady stream of revenue, lawmakers approved a 26-cent minimum for the gas tax rate.

Nebraska legislators overrode a veto from the Governor to increase the state's gas tax by 6 cents per gallon, creating roughly $75 million a year in additional funding for transportation projects. The law raises the gas rate by 1.5 cents every year for four years, through 2019. The state's Transportation Innovation Act is estimated to raise $400 million. Nebraska's gas tax has three components, and this legislation affects the fixed tax, which is set by state law at 12.3 cents per gallon in 2017. Meanwhile, the wholesale tax is pegged to the wholesale price of fuel, and the variable tax adjusts every six months to meet the funding demands of previously approved state road projects. The gas tax currently sits at 27.3 cents per gallon.

In North Carolina, Governor Pat McCrory signed a bill reducing the fuel tax from 37.5 cents per gallon to 34 cents per gallon by 2016. In January 2017, the gas tax began using a formula that accounts for population, energy prices and the consumer price index to adjust the rate in the years ahead. The first of those adjustments increased the rate to 34.3 cents per gallon.

Governor Dennis Daugaard signed legislation that increased South Dakota's gas tax from 22 cents per gallon to 28 cents. The legislation also increases the excise tax on vehicle registration from three to four percent and raises license plate fees for noncommercial vehicles by 20 percent. The fuel tax hike will generate an estimated $40.5 million annually, while the excise tax increase will produce an additional $27 million to $30 million. Most of the revenue is allocated to state roads and local bridges, and the new law allows municipalities to levy their own taxes to repair roads in their jurisdictions.

In another state, lawmakers approved legislation to increase the gas tax by 5 cents per gallon from 24.5 cents. The legislation levies a 12 percent wholesale tax on fuel and pegs future increases to a formula that considers fuel prices and inflation. In March 2017, the state house voted to pass fuel tax increases of 0.6 cents per gallon beginning in 2019 and 1.2 cents a gallon in 2020, essentially reworking the formula established two years prior.

In Washington, the Governor signed a 16-year transportation revenue package in August 2015 that increases the state's gas tax to 44.5 cents per gallon.
The legislation is part of the state's $16 billion transportation package aimed at improving highways and roads, as well as non-highway projects such as walkways, bike paths and transit systems. The two-part tax hike later added another 4.9 cents a gallon, putting the total tax at 49.4 cents.
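The revenue projections cited throughout these measures follow from simple arithmetic: annual revenue is roughly the per-gallon increase multiplied by the gallons of taxed fuel sold each year. A minimal sketch in Python, using an invented consumption figure purely for illustration (no gallon totals appear in the article):

# Rough revenue estimate for a per-gallon fuel tax increase (illustrative assumptions only)
def annual_revenue_dollars(increase_cents_per_gallon, gallons_taxed_per_year):
    # Convert cents to dollars, then scale by the gallons taxed in a year
    return increase_cents_per_gallon / 100 * gallons_taxed_per_year

# Example: a 10-cent increase applied to a hypothetical 2 billion taxed gallons a year
print(annual_revenue_dollars(10, 2_000_000_000))  # 200000000.0, i.e. about $200 million a year

Actual state estimates also account for exemptions, fuel-demand changes and collection costs, so published figures differ from this back-of-the-envelope calculation.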
<urn:uuid:9fd9c8a3-744f-46cb-a41e-5d7ce405c29c>
CC-MAIN-2022-33
https://statecapitallobbyist.com/energy/legislative-insight-federal-state-gas-taxes/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00205.warc.gz
en
0.95249
2,866
2.53125
3
Wereldstage is active on Curaçao and helps you find paid work, internships, volunteer work, and the content of a 'gap programme'. For all ages, it organizes programmes of a few weeks or longer in which you discover which future suits you. Local initiatives are supported by placing volunteers and by contributing financially to the many good causes on Curaçao.

How to use financial statements for business accounting? - Chapter 1

Accounting can be defined as an information system that measures, processes, and communicates financial information about an identifiable economic entity. An economic entity is a unit that exists independently. Accounting can be seen as a link between business activities and decision makers. A business is an economic entity that aims to sell goods and services to customers at prices that provide an adequate return to its owners.

All businesses have two major goals: profitability and liquidity. Profitability is the need to earn enough income to attract and hold investment capital. Liquidity means having enough cash available to pay debts when they fall due. All businesses pursue these goals by engaging in the following activities:

- Operating activities, such as selling goods and services to customers and buying or producing goods.
- Investing activities, such as spending the company's capital on the resources needed to operate, for example land, buildings and equipment.
- Financing activities, which involve obtaining funds to start a business and keep it operating, for example capital from owners and loans from creditors such as banks.

Using financial statements to determine whether a business is well managed and achieving its goals is called financial analysis. The effectiveness of such an analysis depends on the use of relevant performance measures.

Management accounting provides internal decision makers with information about financing, investing and operating activities in order to achieve profitability and liquidity. Financial accounting generates reports, called financial statements, and communicates them to external decision makers. Bookkeeping, the process of recording financial transactions and keeping financial records, is only a small part of accounting. Ethics is the code of conduct that applies to everyday life; ethical financial reporting is important because fraudulent financial reports can have serious consequences.

Accounting data is used by management, by users with a direct financial interest and by users with an indirect financial interest. Management consists of the people responsible for ensuring that a company meets its profitability and liquidity goals. Users with a direct financial interest include investors, such as stockholders, and creditors. Users with an indirect financial interest include tax authorities, regulatory agencies and other groups such as labor unions and consumer groups.

To make an accounting measurement, the accountant must answer four basic questions: What is measured? When should the measurement be made? What value should be placed on what is measured? How should what is measured be classified?

Business transactions are economic events that affect the financial position of a business. Transactions are recorded in terms of money; this concept is called the money measure.
The money measure depends on the country in which the business is located; in international transactions, exchange rates are used to translate one currency into another. For accounting purposes, a business is treated as a separate entity, distinct not only from its creditors and customers but also from its owners. It should have a completely separate set of records, and its financial records and reports should refer only to its own financial affairs.

There are three basic forms of business:

- A sole proprietorship is a business owned by one person, who is liable for all obligations of the business.
- A partnership has two or more owners. The partners share the profits and losses of the business according to a prearranged formula. A partnership must be dissolved when the ownership changes, for example when a partner dies or leaves.
- A corporation is a business unit chartered by the state and legally separate from its owners (the stockholders). The stockholders do not have direct control over the operations of the corporation. Because of this limited involvement, their risk of loss is limited to the amount they paid for their shares, so stockholders are often willing to invest in riskier activities.

To form a corporation, an application must be filed with the proper state official. It contains the articles of incorporation, which form a contract between the state and the incorporators. The authority to manage the corporation is delegated by the stockholders to the board of directors and then to the management. A unit of ownership in a corporation is called a share of stock, and the articles of incorporation state how many shares the corporation is authorized to issue. The most widespread form of stock is common stock. The board of directors is elected by the stockholders and appoints managers to carry out the day-to-day work. Only the board has the authority to declare dividends, which are distributions of resources, usually in the form of cash, to the stockholders. Corporate governance is the oversight of a corporation's management and ethics by its board of directors. To strengthen corporate governance, the board appoints an audit committee made up of independent directors with financial expertise; the audit committee is also responsible for reviewing the work of the company's independent auditors.

The income statement, also called the statement of operations, is the most important financial report, since it shows whether a company achieved its goal of profitability. Its basic elements are revenues, expenses and net income. When revenues exceed expenses, the difference is called net income; when expenses exceed revenues, the difference is called net loss. The retained earnings of a business are the earnings from its income-producing activities minus the amounts that have been paid out to stockholders.

Financial position refers to the economic resources that belong to a company and the claims against those resources at a particular time. Another term for claims is equities. As every corporation has two types of equities, creditors' equities and stockholders' equity, the following equation holds: Economic Resources = Creditors' Equities + Stockholders' Equity. In accounting terminology the economic resources are called assets and creditors' equities are called liabilities.
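A small numerical sketch may help make this identity concrete; the figures below are invented for illustration and do not come from the text:

# Hypothetical balance sheet amounts (illustrative only)
assets = {"cash": 40_000, "inventory": 25_000, "equipment": 60_000}
liabilities = {"accounts_payable": 15_000, "bank_loan": 30_000}

total_assets = sum(assets.values())            # 125,000 in economic resources
total_liabilities = sum(liabilities.values())  # 45,000 in creditors' equities

# Stockholders' equity is the owners' residual claim on the assets
stockholders_equity = total_assets - total_liabilities  # 80,000

# Economic Resources = Creditors' Equities + Stockholders' Equity must always hold
assert total_assets == total_liabilities + stockholders_equity

Whatever the individual amounts, the two sides of the equation stay equal, because stockholders' equity is defined as the residual claim.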
Substituting these terms gives the accounting equation: Assets = Liabilities + Stockholders' Equity. Examples of assets are monetary items such as cash and non-monetary items such as inventory and land. Liabilities are present obligations of a business to pay cash, transfer assets, or provide services to other entities in the future; among them are debts, amounts owed to suppliers, and borrowed money. The owners' equity of a corporation is called stockholders' equity. It has two parts: contributed capital, the amount that stockholders invest in the business, and retained earnings, the equity generated from the income-producing activities of the business and kept for use in the business. Revenues and expenses are the increases and decreases in stockholders' equity that result from operating a business. Retained earnings therefore increase as a result of revenues and decrease as a result of expenses and the payment of dividends.

The four major financial statements are:

- The income statement, which focuses on the company's profitability. It summarizes the revenues earned and expenses incurred by a business over a period of time, yielding the net income.
- The statement of retained earnings, which shows the changes in retained earnings over a period of time: the beginning balance plus net income, less the dividends paid to stockholders.
- The balance sheet, which shows the financial position of a business on a certain date, usually the end of a month or year. It presents the business as the holder of resources, or assets, that are equal to the claims against those assets: assets on the left side, liabilities and stockholders' equity on the right. Both sides of the balance sheet must be equal.
- The statement of cash flows, which is directed toward the company's liquidity goal. The ending cash it reports should equal the amount of cash on the balance sheet.

To ensure that financial statements are understandable to their users, generally accepted accounting principles (GAAP) have been developed. These principles provide guidelines for financial accounting.

- How to use financial statements for business accounting? - Chapter 1
- How to analyze business transactions? - Chapter 2
- How to measure business income? - Chapter 3
- How do Financial Reporting and Analysis work? - Chapter 4
- What are the Operating Cycle and Merchandising Operations? - Chapter 5
- What do inventories consist of? - Chapter 6
- What do cash and receivables include? - Chapter 7
- What are the time value of money and current liabilities? - Chapter 8
- What are the long term assets? - Chapter 9
- What are long term liabilities? - Chapter 10
- What is contributed capital? - Chapter 11
- What is the statement of cash flows? - Chapter 12
- What are financial performance measurements? - Chapter 13
- What do investments consist of? - Chapter 14
- Bulletsummary per chapter with the 11th edition of Financial Accounting by Needles & Powers
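Returning to the four financial statements described above, a minimal hypothetical sketch shows how they tie together; all amounts are invented for illustration:

# Income statement: revenues minus expenses gives net income
revenues, expenses = 100_000, 70_000
net_income = revenues - expenses  # 30,000

# Statement of retained earnings: beginning balance plus net income, less dividends
beginning_retained_earnings, dividends = 50_000, 10_000
ending_retained_earnings = beginning_retained_earnings + net_income - dividends  # 70,000

# Balance sheet: ending retained earnings is part of stockholders' equity
contributed_capital = 80_000
stockholders_equity = contributed_capital + ending_retained_earnings  # 150,000

# Statement of cash flows: the ending cash it reports must equal cash on the balance sheet
beginning_cash, net_change_in_cash = 20_000, 5_000
ending_cash = beginning_cash + net_change_in_cash  # 25,000, also shown as cash on the balance sheet

The point of the sketch is the articulation between the statements: net income flows into retained earnings, retained earnings sit inside stockholders' equity on the balance sheet, and the cash figure on the balance sheet is explained by the statement of cash flows.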
<urn:uuid:dafa27ef-4946-4767-a0c8-c3515f60351d>
CC-MAIN-2022-33
https://www.joho.org/en/how-use-financial-statements-business-accounting-chapter-1
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00004.warc.gz
en
0.856569
2,547
3.109375
3
Ken Haapala | Science and Environmental Policy Project (SEPP)

The New Plan: "Clean Power Plan" (CPP). On August 3, the Obama Administration announced its plan to control the production of electricity in the US in the name of protecting the planet from human-caused climate change, even though climate change has been occurring since long before humanity existed. The administration's plan is embodied in a 1,560-page regulation released by the EPA titled the Final Rule, "Clean Power Plan" (CPP), to be published in the Federal Register sometime in the future. It is not until the rule is published in the Federal Register that activities such as litigation against it can begin without the courts considering the litigation premature. The most important rules apply to power plants operating today rather than to those to be built or those that have to be modified or rebuilt.

The Final Rule contains major changes to the draft CPP, including giving the states two more years to comply with the rules. Overall, the plan mandates that the states, together, reduce their emissions of carbon dioxide (CO2) by 32 percent below 2005 levels by 2030, a more stringent mandate than the 30% in the earlier version. However, the mandates to individual states changed in what appears to be clear political bias, with states controlled by Democrats seeing their mandates reduced while those controlled by Republicans saw theirs increased. The CPP promotes the development of solar and wind, far more expensive and unreliable forms of electricity generation than coal, which the plan seeks to curtail. The plan also appears to favor wind and solar over natural gas for electricity generation, although government agencies previously bragged that under Obama carbon dioxide emissions were falling thanks to the use of natural-gas-fired power plants. The New Plan raises the percentage of power to be generated by solar and wind from 22% to 28%. The natural gas power industry seems somewhat taken aback, but should realize that opponents of fossil fuels will try to regulate the use of all such fuels. Also missing from the Final Rule, though present in the prior draft, are credits for energy efficiency. The EPA had assumed that consumers would actually save on their energy bills by reducing electricity consumption, by more than any increase in energy costs. Statements that consumers will save money appear to be invalid. See links under The Administration's New Plan and The Administration's New Plan – Independent Analysis.

The Allies: Overall, leaders of environmental groups seemed pleased with the new plan. For example, writing in the Wall Street Journal, Fred Krupp, the president of the Environmental Defense Fund, called it a clean energy breakthrough that gives the US an advantage in the race to produce "clean" energy rather than "unsafe pollution for the climate." He, and others, uncritically repeat the Administration's highly questionable health assertions about carbon dioxide and climate change. The support of the environmental industry for the plan is not surprising. On August 4 the Majority Staff of the U.S. Senate Committee on Environment and Public Works released a report on how the EPA worked hand-in-glove with the Natural Resources Defense Council, and other environmental groups, on the plan to control carbon dioxide emissions. See links under The Administration's New Plan and The Political Games Continue.
The Benefit-Cost: Using the EPA's MAGICC* policy-analysis model, Patrick Michaels and Chip Knappenberger of Cato estimated the global temperature reduction under the New Plan as compared with "business as usual," giving an approximation of the New Plan's impact on climate. [All this is highly speculative.] They estimated that the New Plan will result in a temperature reduction of 0.019°C (0.034ºF) by the year 2100. The New Plan is hardly the great breakthrough its environmental proponents assert.

EPA veteran Alan Carlin estimates that, based on analysis of costs experienced in Western Europe, the New Plan may increase electricity prices to consumers in the US by up to four times. Such an increase is drastically different from pronouncements by supporters of the New Plan about consumer savings. Paul Homewood presents an excellent summary of electricity costs in Western Europe resulting from increased wind and solar (non-hydro renewables). Writing in WUWT, Ed Hoskins uses 2014 data from EurObservER to estimate the nameplate megawatts of renewable installations per million people, by country; Germany and Denmark have, by far, the greatest. Homewood compares these with 2014 electricity prices from Eurostat, the official EU statistical entity. The resulting graph of EU Electricity Prices & Renewable Energy is revealing: as installed renewable capacity per capita increases, the electricity price increases. Prices range from a low of about 12 cents/kWh in Hungary (lowest renewables per capita) to a high of about 30 cents/kWh in Denmark and Germany (greatest renewables). There is no reason why the EPA, the Department of Energy, or the US Climate Change Research Program could not perform such analysis. But if they have, it is hidden from the public. Reliance on (non-hydro) renewable energy is very costly, and that is what the New Plan entails. See links under The Administration's New Plan – Independent Analysis and Alternative, Green ("Clean") Solar and Wind.

An Analogy? Many veterans of the Vietnam era have asked what went wrong. Why did President Johnson commit massive resources, including extensive ground troops, without a clear strategic plan and an understanding of the enemy? Part of the answer can be found in the Pentagon Papers. Ordered in secret by Secretary of Defense Robert McNamara, who was soon to leave the administration, this collection of documents reveals the views of many members of the administration and the Pentagon. Strangely, rather than using it for political advantage, Richard Nixon tried to suppress it. The Pentagon Papers reveal a lack of critical thinking coupled with ignorance and arrogance. These characteristics can also be found in this Administration's war on climate change. Ignorance can be seen in the change of terminology from global warming to climate change, which has been occurring for hundreds of millions of years, long before humanity existed. Arrogance can be seen in the belief that humans are the primary cause, especially in the climate models, which ignore a multitude of natural influences. The Neglected Sun discusses six types of solar cycles influencing the earth's climate, which are largely dismissed by the UN Intergovernmental Panel on Climate Change (IPCC). CO2 Science presents the poor correlation between carbon dioxide and temperature changes, yet CO2 is the primary area of concern expressed by the administration.
The British Antarctic Survey presents a 10ºC (18ºF) jump in temperatures within 40 years, shown in the Greenland ice cores about 38,000 years ago; yet modest late-20th-century warming is called unprecedented. There is no measure of victory against such an undefined, nebulous enemy. Unless those who manipulated historical data by lowering earlier values, giving a warming trend where there was none, reverse course; then victory can be declared. See links under Commentary: Is the Sun Rising?, Review of Recent Scientific Articles by CO2 Science, Measurement Issues, and Changing Cryosphere – Land / Sea Ice.

April Fools Award: Presented on August 2, at the 33rd Annual Meeting of the Doctors for Disaster Preparedness. Each year SEPP conducts its annual vote for the recipient of the coveted trophy, The Jackson. Readers of The Week That Was are asked to nominate and vote for the person they think is most deserving under these criteria:

* The nominee has advanced, or proposes to advance, significant expansion of governmental power, regulation, or control over the public or significant sections of the general economy.
* The nominee does so by declaring such measures are necessary to protect public health, welfare, or the environment.
* The nominee declares that physical science supports such measures.
* The physical science supporting the measures is flimsy at best, and possibly non-existent.

There were 16 nominations representing 5 countries, ranging from a state in Australia to Vermont. The votes have been tabulated. The vote was very close, but the victor emerged based on the strength of his nomination (below).

"I would like to nominate Energy Secretary Ernest Moniz. In brief, when the Secretary of Energy is more interested in developing energy policy that supports CO2 emission targets than producing reliable energy, we have a problem. With Kerry, Obama or Lisa Jackson [previous recipients] you can sum it up to ignorance – they are not educated in science and they surround themselves with supposed experts, whom they choose to trust. With Moniz, you cannot – he has a renowned academic pedigree. Yet in spite of his obvious intelligence and education, he believes that despite the fact that computer simulations cannot predict the drag on a golf ball based on first principles, they can solve the vastly more complex problem of the earth's climate, which includes inter-related thermodynamic, heat transfer and chemistry in a multi-phase domain set in a non-inertial reference frame, which is over 10^5 times the size of the golf ball.

"He spoke at a graduation at [my university], where I was the Chair of the Department of Mechanical Engineering. It was a painful experience for me. Rather than giving the students useful words of advice, he spent his entire speech expounding on the dangers of climate change.

"Based on his willful ignorance and in a position of great importance, I can think of no better candidate for this prestigious award."

DOD: The US Department of Defense issued another National Security bulletin on climate change. "DoD recognizes the reality of climate change and the significant risk it poses to U.S. interests globally. The National Security Strategy, issued in February 2015, is clear that climate change is an urgent and growing threat to our national security, contributing to increased natural disasters, refugee flows, and conflicts over basic resources such as food and water.
These impacts are already occurring, and the scope, scale, and intensity of these impacts are projected to increase over time." See comments on ignorance and arrogance, above, and links under Expanding the Orthodoxy.

Number of the Week: $40 to $50 per barrel. The CEO of Whiting Petroleum, which is operating in North Dakota, said: "We are tooling Whiting to run and grow at $40 to $50 oil." See Article # 3.

Please note that articles not linked easily or summarized here are reproduced in the Articles Section of the full TWTW that can be found on the web site under the date of the TWTW.

1. Peer Review Is Not What It's Cracked Up To Be
By S. Fred Singer, American Thinker, Aug 5, 2015
SUMMARY: Much is made of the peer-review of scientific papers; it is frequently held up as the gold standard that assures the quality of scientific publishing. People often ask whether some work has undergone peer-review and are then ready to accept it – confident this makes it kosher. I wish this were really true.

2. Climate-Change Putsch: States should refuse to comply with Obama's lawless power rule.
Editorial, WSJ, Aug 3, 2015
SUMMARY: The editorial begins: "Rarely do American Presidents display the raw willfulness that President Obama did Monday in rolling out his plan to reorganize the economy in the name of climate change. Without a vote in Congress or even much public debate, Mr. Obama is using his last 18 months to dictate U.S. energy choices for the next 20 or 30 years. This abuse of power is regulation without representation." It continues with: "States have regulated their power systems since the early days of electrification, but the EPA is now usurping this role to nationalize power generation and consumption. To meet the EPA's targets, states must pass new laws or regulations to shift their energy mix from fossil fuels, subsidize alternative energy, improve efficiency, impose a cap-and-trade program, or all of the above." "The rule is the first step in a crescendo of climate-change politics that Mr. Obama is planning for his final days. In September he will commune with Pope Francis on the subject, and then jet to Paris in hopes that his new rule shows enough U.S. progress that the climate treaty conference in December will reach some grand accord." "When the EPA rule does arrive before the Justices, maybe they'll rethink their doctrine of "Chevron deference," in which the judiciary hands the bureaucracy broad leeway to interpret ambiguous laws. An agency using a 38-year-old provision as pretext for the cap-and-tax plan that a Democratic Congress rejected in 2010 and couldn't get 50 Senate votes now is the all-time nadir of administrative 'interpretation.'" "This plan is essentially a tax on the livelihood of every American, which makes it all the more extraordinary that it is essentially one man's order. Mr. Obama's argument is that climate change is too important to abide by relics like the rule of law or self-government. It is an important test of the American political system to prove that he is wrong."

3. Despite Glut of Oil, Energy Firms Struggle to Turn Off the Tap: Companies keep finding ways to drill wells faster in an effort to deal with declining crude prices
By Erin Ailworth, WSJ, Aug 6, 2015
SUMMARY: The headline is a bit misleading. Some firms find that even at lower prices they can develop oil production profitably. "Amid a refrain about keeping growth in check, executives at Anadarko, a Texas-based oil and gas producer, told analysts last week that the company has doubled its rig efficiency.
Anadarko can now drill 70 wells with one rig in Colorado's Wattenberg field, compared with 35 wells per rig a year ago." Profits and revenues are down, but the firm is still operating and drilling. The CEO of Whiting Petroleum Corp stated, "We are tooling Whiting to run and grow at $40 to $50 oil." Some producers are cutting back, and some lost money on hedging, but the massive losses many observers were predicting are not occurring. Time after time in the past week, energy companies revealed swelling oil-production figures. Devon, based in Oklahoma City, said it pumped more than 30% more crude in the second quarter compared with the prior-year period, and said it is on track to produce up to 35% more oil this year compared with last. The company reported a $2.8 billion loss on revenue of $3.4 billion. [The cause of the loss was not reported.] [SEPP Comment: Petro-states that depend on high prices for government budgets must be getting uneasy.]

4. The Unsettling, Anti-Science Certitude on Global Warming: Climate-change 'deniers' are accused of heresy by true believers. That doesn't sound like science to me.
By John Steele Gordon, WSJ, Jul 30, 2015
SUMMARY: "Are there any phrases in today's political lexicon more obnoxious than "the science is settled" and "climate-change deniers"? "The first is an oxymoron. By definition, science is never settled. It is always subject to change in the light of new evidence. The second phrase is nothing but an ad hominem attack, meant to evoke "Holocaust deniers," those people who maintain that the Nazi Holocaust is a fiction, ignoring the overwhelming, incontestable evidence that it is a historical fact." The author debunks the claim "the science is settled" with a brief history of improvements in the understanding of planetary motion as instruments and theory improved, including the contributions of Einstein. He states: "If anthropogenic climate change is a reality, then that would be a huge problem only government could deal with. It would be a heaven-sent opportunity for the left to vastly increase government control over the economy and the personal lives of citizens." But he goes on to say: "The [Climategate] communications showed that whatever the emailers were engaged in, it was not the disinterested pursuit of science."

NEWS YOU CAN USE:
Commentary: Is the Sun Rising?
Book Review of The Neglected Sun: 'Buy This Book, Our Future May Depend On It'
By Jim Lakely, Heartland.org, Jul 9, 2015, from geologist George Klein
Study Finds Long Term Solar Cycle and Predicts Cooling
By Staff Writers, Reporting Climate Science, Jul 28, 2015 [H/t GWPF]
Link to paper: Multi-millennial-scale solar activity and its influences on continental tropical climate: empirical evidence of recurrent cosmic and terrestrial patterns
By J. Sanchez-Sesma, Earth System Dynamics, Jul 24, 2015

CONTINUE TO FULL PDF REPORT AND MUCH MORE…
<urn:uuid:9f99d786-f0c5-48e1-988a-f61a2c8e4946>
CC-MAIN-2022-33
https://rightsidenews.com/life-and-science/energy-and-environment/epa-clean-power-plan-and-climate-change-weekly-report-august-9-2015/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573699.52/warc/CC-MAIN-20220819131019-20220819161019-00005.warc.gz
en
0.94649
3,695
3.359375
3
To control information is to control the world. This innovative history reveals how, across two devastating wars, Germany attempted to build a powerful communication empire—and how the Nazis manipulated the news to rise to dominance in Europe and further their global agenda. Winner of the Ralph Gomory Prize and the Wiener Holocaust Library Fraenkel Prize, the book has also received an honorable mention and been a finalist for five further prizes. Information warfare may seem like a new feature of our contemporary digital world. But it was just as crucial a century ago, when the great powers competed to control and expand their empires. In News from Germany, Heidi Tworek uncovers how Germans fought to regulate information at home and used the innovation of wireless technology to magnify their power abroad. Tworek reveals how for nearly fifty years, across three different political regimes, Germany tried to control world communications—and nearly succeeded. From the turn of the twentieth century, German political and business elites worried that their British and French rivals dominated global news networks. Many Germans even blamed foreign media for Germany’s defeat in World War I. The key to the British and French advantage was their news agencies—companies whose power over the content and distribution of news was arguably greater than that wielded by Google or Facebook today. Communications networks became a crucial battleground for interwar domestic democracy and international influence everywhere from Latin America to East Asia. Imperial leaders, and their Weimar and Nazi successors, nurtured wireless technology to make news from Germany a major source of information across the globe. The Nazi mastery of global propaganda by the 1930s was built on decades of Germany’s obsession with the news. News from Germany is not a story about Germany alone. It reveals how news became a form of international power and how communications changed the course of history. The book was a #1 new release in international relations, media studies, and journalism on Amazon. “This riveting technological chronicle dispels two myths: that the digital era spawned information warfare, and that twentieth-century global communications was largely Anglo-American. From 1900 to 1945, reveals historian Heidi Tworek, Germany strove mightily to achieve world power through news agencies, spoken radio and wireless, urged on by figures from Weimar Republic foreign minister Gustav Stresemann to Nazi propagandist Joseph Goebbels. A chillingly timely cautionary tale, demonstrating that once elites destroy democratic institutions, a free press cannot prevent further disintegration.” “Provides much-needed historical depth to the current debate about media power and the age of ‘surveillance capitalism.’” —The Financial Times “Tworek reveals how officials in the Weimar government, believing they were acting in the best interests of democracy, created structures to oversee and regulate news supply. This led to policies, such as restricting political advocacy from the radio, intended to forestall inflaming partisan passions. Ironically, it was the state’s tight control over the news supply that allowed the Nazis to swiftly take over the country’s communications channels and remake them to serve their interests.” —The Washington Post “Shrewd, erudite and timely.” “A major contribution to our understanding of modern European—and indeed global—history. 
Tworek underscores the dangers that democratic regimes confront when elites lose faith in democratic institutions—a lesson for our own troubled times.” —Richard John, author of Network Nation: Inventing American Telecommunications “As Professor Tworek shows us in this brilliant new book, battles over ‘fake news’—or, as she rightly terms it, information warfare—have a long history. By illuminating earlier attempts to turn words into weapons, she helps us better understand the challenges that we face today.” —Mary Elise Sarotte, author of The Collapse: The Accidental Opening of the Berlin Wall “To help us understand the media, Tworek employs some strikingly apt distinctions: between published and public opinion, between the news system and the news vehicle, between the production of news and the art and science of its control. At the end she points out something so simple and brilliant: ‘It’s surprisingly hard to make money from news.’ True! Those trying to understand our crisis in journalism today should start with this book.” —Jay Rosen, New York University and PressThink “A riveting and beautifully written account, which combines a history of technology and the media with political narrative, that reveals the largely unknown story of the centrality of communications in Germany’s grasp for world power in the first half of the twentieth century.” —Harold James, author of Making the European Monetary Union “Information War, the weaponization of information, Putin, trolls, ISIS, Trump… the vast spread of today’s malign influence campaigns can seem dizzying and confusingly new. However it’s not the first time this has happened, and to understand the underlying issues one needs to see how the competition over the communications space has played out before. Tworek’s book is an expert and readable guide to the wars of information hegemony in the early twentieth century, and one reads it not only to understand the past, but to grasp the present.” —Peter Pomerantsev, author of Nothing Is True and Everything Is Possible: The Surreal Heart of the New Russia The Routledge Companion to the Makers of Global Business draws together a wide array of state-of-the-art research on multinational enterprises. The volume aims to deepen our historical understanding of how firms and entrepreneurs contributed to transformative processes of globalization. This book explores how global business facilitated the mechanisms of cross-border interactions that affected individuals, organizations, industries, national economies and international relations. The 37 chapters span the Middle Ages to the present day, analyzing the emergence of institutions and actors alongside key contextual factors for global business development. Contributors examine business as a central actor in globalization, covering myriad entrepreneurs, organizational forms and key industrial sectors. Taking a historical view, the chapters highlight the intertwined and evolving nature of economic, political, social, technological and environmental patterns and relationships. They explore dynamic change as well as lasting continuities, both of which often only become visible – and can only be fully understood – when analyzed in the long run. With dedicated chapters on challenges such as political risk, sustainability and economic growth, this prestigious collection provides a one-stop shop for a key business discipline. 
“This important collection of new surveys by leading scholars represents an essential state of the art summary and reflection on the often neglected major contribution of entrepreneurs and firms to the globalisation of business and provides a comprehensive overview of the evolution of global business in historical perspective, notably on the role of institutions and organisational forms.” —Robert Read, Lancaster University Management School, UK “This is a terrific contribution to the broad field of analytical business history, focused on the key actors who have created the global economy. A must read for economists, political scientists, sociologists and strategy scholars with an interest in how international business actually functions.” —Alain Verbeke, Editor-in-Chief, Journal of International Business Studies “A major contribution to the history of globalization and capitalism, The Routledge Companion to the Makers of Global Business brings together more than fifty scholars, writing on broad political and social issues as well as institutional and technological ones – from the Great Divergence and the histories of gender and race in global entrepreneurship, to value chains and state-owned enterprise. It is a significant and notable achievement that tells the story of how firms helped build the modern global economy.” —Walter A. Friedman, Harvard Business School, US International Organizations and the Media in the Nineteenth and Twentieth Centuries is the first volume to explore the historical relationship between international organizations and the media. Beginning in the early nineteenth century and coming up to the 1990s, the volume shows how people around the globe largely learned about international organizations and their activities through the media and images created by journalists, publicists, and filmmakers in texts, sound bites, and pictures. The book examines how interactions with the media are a formative component of international organizations. At the same time, it questions some of the basic assumptions about how media promoted or enabled international governance. Written by leading scholars in the field from Europe, North America, and Australasia, and including case studies from all regions of the world, it covers a wide range of issues from humanitarianism and environmentalism to Hollywood and debates about international information orders. Bringing together two burgeoning yet largely unconnected strands of research—the history of international organizations and international media histories—this book is essential reading for scholars of international history and those interested in the development and impact of media over time. “Prominent historians and talented young scholars have contributed to this consistent and coherent volume offering a variety of approaches, methods and methodologies. They have zoomed in on the motives, politics and silences of a number of international institutions since the early 19th century. The volume is original and innovates at the level of the optics and perspectives, of the units of analysis and the levels of analysis. It will be a fruitful read for specialists, useful and inspirational for teachers and undergraduate students in history and social sciences.” —Davide Rodogno, The Graduate Institute, Geneva “Overall, this volume does an exceptional job of showing the successes and failures that international organizations had in using media to reach the public over the long term of the last two centuries. 
It offers scholars an excellent model for how to do global history in covering such a wide swath of time and array of countries, and it should prove foundational and influential on a range of new studies about other international organizations.” —Michael Stamm, Michigan State University “A welcome and highly illuminating book presenting an underresearched angle of IO-history.” —Emil Eiby Seidenfaden, Aarhus University
<urn:uuid:7f4e8465-d68e-4341-9002-0502a1447fe8>
CC-MAIN-2022-33
https://www.heiditworek.com/books
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571246.56/warc/CC-MAIN-20220811073058-20220811103058-00004.warc.gz
en
0.92366
2,102
3.078125
3
Take a large city with a river running through its length. Allow housing construction not only in low-lying areas and pits but also in the riverbed itself. Fill up ravines and the riverbed with waste and debris. Finally, build an express highway to another large city and construct the canals of a big dam around it. Then let there be a 10-day incessant downpour of 50 inches - as never before in recent history - and what you have is a flooded, floating city: not a Venice, but a calamitous city called Vadodara in Gujarat.

Vishwamitri River in full spate, almost touching the Kala Ghoda Bridge. Pic: Divya Bhaskar (DB)

And this could be a perfect recipe for any city in India where indiscriminate "development" is taking place, as if land were only a base to bear any kind of construction activity, and as if falling rainwater could automatically find a way out.

Says Rohit Prajapati, an activist with the Paryavaran Suraksha Samiti, an environmental NGO based in the state: "Vadodara's town planning has no concept called Water Map that would give an idea about how the water flows. Based on this Water Map, the roads and housing constructions should be given clearances." Instead, local authorities turned a blind eye to the construction activities in Vadodara that have obstructed the natural flow of rainwater. Rishi Shah of ENVIS, a national environmental information system, says, "Before 1970, there were over 70 tanks and water bodies, and today there are only 25." So where would rainwater, which once accumulated and slowly percolated in these 45-50 filled-in tanks, go, except to spread over the same area and enter the houses constructed on them?

The construction of the express highway passing along Vadodara, and of the Narmada canals around the city, ignored the topography and natural rainwater drainage of the region. As a result, the highway, on its raised level, acted as a virtual dam, submerging villages alongside. The Narmada canals, too, stopped water flows on both sides, and in many places citizens had to make breaches in the canal to allow accumulated water through.

The malaise in planning is exposed completely only when such an abnormal and excessive rain pattern occurs, causing untold misery to thousands. In the Vadodara region alone, 25-30 citizens reportedly drowned, while officials estimate 45,000 were evacuated in rescue operations. The worst sufferers in such calamities are often slum dwellers; in most places their hutments could be made out only by the tin roofs peeping above the flood water, while many were simply washed away. This time, rich and poor alike lost their homes and belongings.

How the disaster unfolded

The aerial views telecast and printed in local newspapers showed ground floors of houses in water and people on upper stories and terraces begging for help. Says Smita Patel, a professor at the city's M S University who stays in Siddharth Bungalows, now (in)famous for its wrong planning on the river front: "In minutes water entered our houses and we just watched helplessly as our belongings went under water. The water flowed so strongly that my fridge fell off and floated on the water. The society road was like a river with gushing water." She adds that on the second day they received food packets and water pouches airdropped by helicopter; on the third day, they were rescued by the Navy's boats.

"But then there were crocodiles in the water and we just prayed for the end of this ordeal," says Patel. Unbelievably, some 500 crocodiles had entered the flood waters, and many were reportedly caught by residents in locations along the river. While Siddharth Bungalows came up in an extremely low-lying area, the expensive Samarjya Bungalows were built right in the riverbed and were immersed in 6 feet of water. Such stories were repeated all over the city. According to one rough estimate mentioned in a local press report, properties worth Rs 350 crore went under water.

A breach in the centrally located Sursagar Lake - Barodians' favorite evening spot. Pic: DB

Yet another factor underpinning the flooding is that the Vishwamitri is a narrow, zigzag river with 178 bends right from its origin in Pavagarh till its confluence with the Dhadhar near Jambua. Hence water discharge slows down and flood water does not recede quickly. Besides, like all other south Gujarat rivers, the Vishwamitri too runs through alluvial soil. According to a research report, "Channel shifting of a highly sinuous meandering river in alluvial plain, Vishwamitri river, Mainland Gujarat", undertaken by the Department of Geology, M S University of Vadodara, the river is shifting eastwards. This process can wipe out everything along its east bank in some places.

These and other facts about the river must have been known to the planners. But it appears that though the information is available in the public domain, planners and decision makers do not bother to check it or to communicate with the experts. At the Gujarat Ecology Commission's (GEC) Environment Information Service Centre, which boasted of collecting and disseminating all the information on ecology, officials had no idea about such reports. GEC officials were busy assessing damage to their own property and resources. The Gujarat Disaster Management Cell and its regional units have no full-time experts, only some officials holding additional charge who have no understanding of disasters, let alone their management, says Rohit Prajapati.

The city has had a better past

Floods are not totally new to Vadodara. The 2005 floods have been assessed as the second worst after the devastating 1927 floods, called Ghoda Poor. The 1927 season witnessed the highest recorded rainfall - 92 inches - and swelled the Vishwamitri so much that flood water touched the feet of the Kala Ghoda statue near the river. Ghoda Poor wiped out village after village, and human as well as animal corpses had to be burned on common pyres in mass cremations. While the average annual rainfall in Vadodara is 30 inches, it was 77 inches in 1878, 69 inches in 1917, and 92 inches in 1927. In the last four decades, Vadodara has also suffered floods in 1970, 1974, 1976, 1994 and 1996; but the 2005 floods are comparable only to 1927's Ghoda Poor.

Vadodara city should not have allowed itself to decay, especially when it inherits a rare legacy of impeccable administration under the late Maharaja Sayajirao Gaikwad (1875-1939), the Maratha ruler of Vadodara. The city then maintained good drainage through various nalahs and open spaces for natural water flow. Today many of them have vanished under building construction. The centrally located, low-lying Dandia Bazar always gets flooded, but the Maharaja had provided a canal that would quickly drain off the accumulated rainwater. This canal has now become a place for accumulating garbage.

Vadodara 2005 - a floating city. Pic: DB
An example of Sayajirao's foresight in planning and execution is the Museum in Sayaji Baug, which is hardly 100 meters from the swelling Vishwamitri and yet did not suffer any damage in the current or earlier floods. The reason: this architectural wonder has a mechanism for draining off water through channels in its basement out into the adjacent main channel. An administrative block in the city, built in 1902, is a similar wonder; six lakh rare old documents were saved there. Lessons for the city's administrators from such old, proven systems exist in the city itself. But the city authorities have not learnt from previous disasters. Even the 2005 disaster could be used to learn and plan ahead. The pity is that the visionary Sayajirao's city has been taken over by greedy politicians, builders and administrators, each ever ready to take a bite out of the once-beautiful city.

Says Ranjitsinh Gaikwad, the scion of the royal family and still revered as 'Maharaja' by loyalists: "In those days there was a twin drainage system, one for sewage and the other for rainwater. Now there is only one drainage line, and so even where there was no waterlogging, sewage water came out of toilet bowls due to the backflow of flood water in the recent flood. Sayajirao had also built both the Ajwa and Pratappura dams over 100 years ago with some foresight."

Vadodara's lifelines, the Ajwa and Pratappura water reservoirs, have suffered an estimated Rs 56 crore of damage due to this unprecedented rain. The Pratappura dam is too old and can breach anytime, anywhere, and hence needs complete strengthening and changes in its design, says M K Sinha, director of the National Water Commission. After several cracks were noticed in the Pratappura dam following the devastating Bhuj earthquake (2001), the state government had allotted Rs 7 crore for its strengthening. But corrupt officials have gobbled up the funds, charge the villagers around Pratappura.

Residential colonies came up on the landfilled, low-lying areas. Pic: DB

According to Narendra Rawat of the Save Baroda Committee, the life of the old Pratappura dam is over and hence it should be written off; otherwise an even greater calamity is waiting to happen. Rawat says the shortfall in water supply can be met through Narmada water. Structural design expert Dr I I Pandya says that the Pratappura dam structure has become badly eroded, exposing the foundation, which is a sure-shot invitation to future disasters.

Chemical discharge followed the floods

Chemical factories around Vadodara, in Nadesari, Ranoli, Manjusar-Savali, Dabhasa and Padra, have allegedly discharged their hazardous chemical effluents into the flood water to save the cost of treating the effluents before sending them to the common effluent treatment plant in their area. During this downpour, a heavy oil slick from the Indian Oil Corporation at Koyli turned a village tank literally into an oil pool, from which men and children drew oil in buckets and stored it in drums. Cattle died after drinking this oil-rich water. Around the same time, the compound wall of the fertiliser firm Gujarat State Fertilizers & Chemical Ltd (GSFC) near Bajwa was breached to allow water accumulated in the factory premises to flow out. But it suddenly flooded a nearby settlement and forced people onto the road. The chemical mix caused them skin irritation and sickness.
The Common Effluent Channel, which carries the effluents of many factories near Vadodara and passes through villages of Vadodara and Bharuch districts, breached at several places, and toxic water mixed with flood water spread into farms alongside in the villages of Dhanora, Sherkhi, Jaspur, Tajpura, Umraya, Akalbara, Dabka, Tithor, Karkhadi, Dudhwada, Kareli, Piludra, Vedach and Sarod. The channel had already been a longstanding problem, causing serious damage to land, crops and groundwater. The current breaches have compounded this, and villagers have demanded the immediate closure of the channel. Furthermore, chemicals from Gujarat Alkalies and Indian Petrochemicals Corporation Limited (IPCL) got mixed with flood water and spread over a 20-30 km area. The very fact that the chemical firms are crying hoarse about the loss of their raw materials (chemicals) indicates that these hazardous chemicals have mixed with the rainwater.

The city administration itself is stunned and helpless, and citizens are extremely agitated by the officials' apathy. Says an angry Lalitaben, a resident of Nizampura, which remained inundated even after the rain stopped: "Not a single official or politician came to help us, because who will now allow starched white clothes to be spoiled with mud and water?" Officials were even reluctant to come on the telephone line, people complain vehemently. "These government officials are relaxing like buffaloes in a muddy pool. They will only jump out when the relief money comes pouring in," is how another resident, Bhaven Kachchi, describes the administration. People demonstrated outside the chemical factories and municipal offices at several locations, but to no avail. The local press reported that GSFC officials said they had paid the local corporator (an elected representative in the city council) to take care of the people affected by the company's chemical-laced flood water, but the corporator denied receiving this money.

Reversing the misplanning

Post-calamity demands are pouring in, and the authorities have been announcing various measures to avert such disasters in the future. Vadodara Municipal Commissioner Rajesh Topno announced that all illegal landfilling of the natural drainages, slopes and nalahs, which played a major role in obstructing water flows during the recent havoc caused by the unusual rains, will be removed, and the authorities have reportedly begun surveys of such illegal landfill. Vadodara's vigilant citizens have time and again demanded such surveys and the eventual removal of obstructions to water flow, but no action has been taken so far. Illegal landfilling has been rampant in Subhanpura, High Tension Road, Karelibaug Road, Gotri, Pratapnagar, Makarpura, Ajwa-Waghodia Road and the Masiya, Ruparel and Bhukhi nalahs, and these have been the major obstructions to water flow, causing flooding of the residential areas around them. Even after flood water receded in the river, water is still stagnating in many nalahs, confirming the obstructions. The aquifers too are saturated, so where would the water go? Residents in many buildings have complained of water coming up through the floors, and that is causing concern about the safety of the buildings' foundations. The Sama and Nizampura areas were getting flooded frequently, and hence a satellite survey was ordered to assess the situation. The demand for the removal of obstructions is gaining ground. But will this undoing be possible? Will it withstand pressure from vested interests?
Dr P K Pradhan, chief programme coordinator of the Bharatiya Agro Industries Foundation's (BAIF) watershed and rural development work in Gujarat, says, "Why should people be allowed to live in the danger zones? If the authority was not aware of the maximum level of an extreme flood like this, mark this level now and put up a signboard warning people not to inhabit beyond this level mark." He adds that the government must not provide rescue operations and help if citizens choose to ignore the levels. Prajapati says that administrators need to take hard decisions and relocate the housing colonies. Some Vadodara corporators are demanding the deepening and broadening of the Vishwamitri riverbed and a ban on construction along the river front. For faster discharge of flood water, the river also needs straightening, as it runs zigzag. But whether such measures are technically viable is not very clear. One thing is clear, though: if there is business for contractors, there may be pocketing of money. Going by the city's recent past, politicians, administrators and contractors may all look forward to this.

The monsoon's upper-air cyclonic circulation over Saurashtra and Kutch caused unprecedented heavy rains that resulted in floods and the inundation of low-lying areas in the state of Gujarat. The situation was critical in the districts of Vadodara, Kheda and Anand, while other districts - Surat, Valsad, Navsari, Bharuch, Surendranagar, Dangs, Ahmedabad, Amreli, Bhavnagar, Junagadh, Rajkot, Narmada, Jamnagar, Gandhinagar and Sabarkantha - also suffered from the floods. Some 10,000 villages in these districts were badly affected. The cities experiencing severe inundation are Vadodara, Nadiad, Ahmedabad, Navsari, Surat, Limbdi, Dakor, Anand, Kheda, Petlad and Borsad. The rivers reported to be flowing above danger levels are the Vishwamitri (Vadodara district) and the Shedhi and Vatrak (Kheda district). In Bharuch district, the Vagra, Amod and Jambusar talukas have been heavily affected. Due to the topography and poor drainage of rainwater, the river water has changed its course and flooded villages, affecting roads, power supply and other lifeline infrastructure. Small rivulets flowing from Vadodara, such as the Bhuki Khadi in Amod and Jambusar, have overflowed, inundating roads and villages.

The power supply in 5,949 villages and 56 towns and the water supply in 5,752 villages and 32 towns have been affected. A total of 132 persons have been reported dead, due either to drowning or to the collapse of building walls. Railway tracks along many low-lying stretches have been submerged. Transportation in the affected districts is reported to have slowed down due to the closure of 259 state-managed road routes, though 95 state road routes remained open for movement; 1,239 panchayat-managed roads have been closed, with 302 panchayat roads kept open for accessibility. Traffic on the State Express Highway and NH8 was also affected. The irrigation department has issued a high alert for 41 dams in the affected districts. The state administration has evacuated 500,000 affected persons to safe locations. Some 3 lakh persons have been evacuated from the districts of Surendranagar (1,900), Bharuch (42,717), Kheda (28,941), Navsari (33,096), Surat (33,727), Ahmedabad (5,267), Valsad (3,800), Vadodara (99,022), Anand (32,694), Amreli (11,525), Bhavnagar (1,216), Gandhinagar (1,250), Jamnagar (200), Junagadh (940), Mahesana (230), Panchmahal (51), Sabarkantha (375) and Narmada (24).
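As a quick arithmetic cross-check (this sketch is not part of the original report), the district-wise evacuation figures quoted above can be totalled and compared with the stated figure of 3 lakh (300,000); they sum to roughly 297,000:

```python
# Cross-check, not part of the original report: totals the district-wise
# evacuation figures quoted above and compares the sum with "3 lakh" (300,000).

evacuated = {
    "Surendranagar": 1900, "Bharuch": 42717, "Kheda": 28941, "Navsari": 33096,
    "Surat": 33727, "Ahmedabad": 5267, "Valsad": 3800, "Vadodara": 99022,
    "Anand": 32694, "Amreli": 11525, "Bhavnagar": 1216, "Gandhinagar": 1250,
    "Jamnagar": 200, "Junagadh": 940, "Mahesana": 230, "Panchmahal": 51,
    "Sabarkantha": 375, "Narmada": 24,
}

total = sum(evacuated.values())          # 296,975
print(f"Listed districts total: {total:,}")
print(f"Difference from 3 lakh: {300_000 - total:,}")
```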
The state has deployed 9 army columns in Kheda, Vadodara and Anand, and a company of the CRPF in Matar and Nadiad; 34 boats have been deployed for rescue operations, while 11 Indian Air Force helicopters have been conducting rescue operations and air-dropping relief materials. 600 trained SRP personnel and the State Police force have been deployed, and trained fire brigade rescue teams from all city municipalities are carrying out emergency operations in the affected cities. 200 students from Haryala Gurukul have been shifted to a safe place, and 400 stranded passengers of the Shanti Express at Dakor have been rescued. Power supply has been restored in 5,068 villages and 52 towns, and water supply has been restored in 3,621 villages and 25 towns. Water is being supplied through tankers in Kheda, Anand and Vadodara.
<urn:uuid:f426d277-b243-4d2b-9444-233a15cdb4f1>
CC-MAIN-2022-33
https://www.indiatogether.org/vadodara-government
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00205.warc.gz
en
0.961654
3,944
2.6875
3
There are some reasons or purposes for doing this research work. They are:
- to find out the errors of writing;
- to find out the causes of those errors;
- to explore the condition of the fresher students' free-hand writing;
- to show the way to solve their errors.

To learn a foreign or second language, learners have to know or acquire four skills: listening, speaking, reading and writing. Among these four skills, writing is the most important. By birth one acquires listening and speaking ability. Within three to four years a child can speak his or her mother tongue. Gradually, he learns to read and write by learning letters. If learners learn the letters, then it is easy to read. But it is not easy to write just by knowing the letters, because there are so many rules and regulations of writing. Every language follows some rules and regulations for writing. We can express ourselves through speaking, body language and symbols.

Learning a foreign language is a gradual process, during which mistakes are to be expected in all stages of learning. Mistakes will not disappear simply because they have been pointed out to the learner, contrary to what some language learners and teachers believe. Language acquisition does not happen unless the learner is relaxed and keen on learning. Fear of making mistakes prevents learners from being receptive and responsive. In order to overcome learners' fear it is essential to create a friendly and relaxed atmosphere in language classrooms, to encourage cooperation through peer work or small group work, and to apply techniques for language acquisition that suit and involve individual learners.

Among all four skills, writing is the most important. In every sphere of life, writing is needed. A student has to write in his exam script, and we evaluate a student's merit by his writing in the exam script. Without writing, there is no evidence. David P. Harris has described different objects of writing in his book Testing English as a Second Language. In English, to write something precisely, writers have to know some components, such as subject-verb agreement, prepositions and their extensive use, tense, subject, verb, etc. English, like any language, has some rules and regulations for effective writing, so writers have to have a clear idea about the grammar. If someone writes 'I rice eat', it is wrong because it does not follow the rules of English grammar. So correctness of spelling and grammatical accuracy, as well as a clear concept of the grammar, will help students, especially first-year students, remove their errors of writing.

Data Presentation and Data Analysis

After collecting their writings, I checked them. There I found a lot of errors. Some of the students have done comparatively better than others. There are some common mistakes in their writings, such as prepositions, spelling, tense, sentence structure, determiners, etc. I found spelling mistakes in most of the students' writings. Some students are confused about when to use -sion and -tion; for example, they have written 'conclusion' with '-tion' in place of '-sion'. One of the students has written the word 'sit' as a noun, but the noun should be 'seat'. In pronunciation they are the same, but in spelling they are different. They have made some silly mistakes only because of their carelessness in writing.
For example, they have misspelled 'goal', and written 'ded not' instead of 'did not', 'lesten' instead of 'lesson', 'there' in place of 'their', 'complite' instead of 'complete', 'healthe' instead of 'healthy', 'stong' instead of 'strong', 'daly' instead of 'daily', 'surprice' instead of 'surprise', etc. They make such mistakes because they do not pay attention or concentrate at the time of reading and writing. There is another problem: after reading or memorizing something, they do not write it down.

Subject-verb agreement is another big problem in their writing. The students commonly make mistakes in subject-verb agreement, and I have also found this type of problem in the writings. One of the students has written 'students loves her', which is wrong; here 'the students' is a plural form, so he has to write 'students love her'. Someone has written 'all of our student is', which is also wrong; the student has to write 'all of our students are'. To know about this, the students have to know countable and uncountable nouns. From grammar books they can learn about subject-verb agreement. The students face problems especially in using verbs with 'little' or 'a little', 'few' or 'a few', 'each', 'every', 'both', 'any', etc.

The students also face problems in using tense. I found that the first-year students at the intermediate level of this college had made this type of error. They frequently use the present tense and past tense in the same sentence. Sometimes they are unable to maintain the tense order. I have found errors in their use of the present perfect and past indefinite tenses. One of the students has written 'I passed my school life from…'. The student has to know that in the past indefinite tense he must mention the time. Sometimes the students write 'I have passed my college life in 1999…', but this is also wrong; if he wants to mention a time, he has to use the past indefinite tense. One example can remove the students' confusion: 'I was born in 1984', which is the past indefinite tense. I have found that the students mix different tenses in their writings. One of the students has written, 'When I am in class one; in the first day I sat at the first bench'. He is telling about his past experience, but he is using the present indefinite tense. The students can explain a past experience in the present indefinite tense, but mixing the present indefinite tense and past indefinite tense together in one sentence is wrong. I have also found some errors in writing the past perfect tense and present perfect continuous tense. In the present perfect continuous tense, the student has not mentioned the time in the sentence. Someone has written 'She has been teaching here.' He or she has not mentioned the time; the student has to write 'She has been teaching here for 5 years.'

I found errors in capitalization in their writings. In the middle of a sentence the students have to start words with small letters, but there are some exceptions. If the word is someone's name or the name of a place, then they have to use a capital letter. There is another exception: if someone wants to write 'I', he has to use a capital letter even though it is in the middle of the sentence. But here I found that one of the students wrote 'i' instead of 'I' in the middle of a sentence. After completing a sentence, the students have to start the next sentence with a capital letter, but here one of the students has written 'her mutual behaviour…', which is wrong.
The students make such silly mistakes only because of their carelessness.

Before a pronoun the students should not use an article, but one of the students has written 'the he is…'. I asked the student why he had used 'the' before 'he'. He answered that if 'the student' is possible, why is 'the he' impossible; he had simply used 'he' instead of the word 'student'. So I understood that the student does not know how to use an article properly.

I have also found errors in writing the present participle. After a preposition, a student has to use the '-ing' form of the verb, but in my findings some of the students have not used '-ing'. One student has written 'After pass my H.S.C. …', but here he has to write 'after passing'. After an infinitive (to + verb), however, they have to use the base form of the verb.

I have found errors in sentence structure. One of the students has not used a verb in the sentence; he writes 'There many students…'. In this sentence there is no verb. He has to write 'there are many students…'. Another student writes 'One of them my teacher…', but he has to write 'One of them is my teacher.'

Most of the students have a common problem in writing, and that is the use of prepositions. To remove the problem of prepositions, the students have to practice regularly; without practice they cannot improve. If they write regularly, their writing will gradually improve. In my data, I have also found some errors in using prepositions. One of the students has written 'I am a student in Pangsha Postgraduate College', but it is wrong; he has to write 'I am a student of Pangsha Postgraduate College.' The students have problems in using 'in', 'on', 'of', 'to', 'for', 'with', etc.

I found another error in their writings: they cannot use determiners perfectly. Determiners are the pre-modifiers and post-modifiers of the noun. Someone has written 'This kinds', which is wrong. The student has to write 'this kind', which works as a singular; if he wants to make it plural, he has to write 'these kinds'. The article is also one kind of determiner, and I have found errors in using articles.

I have prepared a diagram to show their errors in different areas. I worked with 20 students, and the diagram shows their errors. The chart shows that, among the twenty students, all of them made mistakes in spelling, ten of them in using prepositions and participles, sixteen of them in tense, four of them in determiners, eight of them in using verbs, twelve of them in subject-verb agreement and fifteen of them in sentence structure, which is also shown in the following percentage table (a short illustrative calculation of these percentages appears just below).

I have shown the different types of errors in the writing of the first-year students at Pangsha Postgraduate College, Rajbari. The students can remove these types of errors by themselves, and the education system of our schools and colleges can help them solve their problems. The teachers of schools and colleges do not take care of their students' writing; they always emphasize memorizing, but they do not encourage their students to do creative writing. To remove the errors, the teachers of schools and colleges have to be conscious of them; they will have to be aware of the errors of their students. At university level, the students have to be conscious of their own writing. To remove their errors of writing they have to write regularly. Thus students can grow a habit of writing something creative on a daily basis; every day a student should write something creative. In this way the students can remove errors of spelling.
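The percentage table referred to above is not reproduced in the text. Purely as an illustration (this sketch is not part of the original paper), the counts from the chart can be converted into percentages; the cohort size of 20 and the per-category counts are taken from the paragraph above, and the "all of them" figure for spelling is assumed to mean all 20 students.

```python
# Illustrative sketch only (not part of the original study): converts the
# error counts reported above into percentages for the missing table.
# Cohort size (20) and per-category counts are taken from the text; the
# spelling count assumes "all of them" means all 20 students.

TOTAL_STUDENTS = 20

error_counts = {
    "Spelling": 20,
    "Preposition & participle": 10,
    "Tense": 16,
    "Determiners": 4,
    "Verb": 8,
    "Subject-verb agreement": 12,
    "Sentence structure": 15,
}

for category, count in error_counts.items():
    share = count / TOTAL_STUDENTS
    print(f"{category:<26} {count:>2}/{TOTAL_STUDENTS}  {share:.0%}")
```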
The teachers can give them some topics to write on. After the students have written, the teachers should check the work thoroughly, find out the errors, and suggest how to correct them. In this way the teachers can help their students remove their errors. The teachers can also give them some guidance on reading grammar books. There are some English grammar books, such as Intermediate English Grammar by Raymond Murphy and Learning English the Easy Way by Prof. Dr. Sadruddin Ahmed, which can help the students remove their grammatical problems or errors. From these books, the students can learn tense, prepositions, sentence structure and subject-verb agreement properly.

My last suggestion to the students for removing their errors is that there is no alternative to writing regularly. Whatever a student memorizes, he or she has to write down. If one can do this, he or she will be able to remove different types of errors. Besides, learners must be given practice in self-correction of their own work, either individually or in pairs, but only if they prefer peer cooperation. However, in my opinion, students definitely need training in rectifying mistakes independently, i.e. without the teacher's interference. Left to their own devices, learners might be overwhelmed or frustrated by task intricacy. A learner's ability to notice errors without the teacher's aid is a qualitative leap to conscious cognition.

Language acquisition does not happen unless the learner is relaxed and keen on learning. Fear of making mistakes prevents learners from being receptive and responsive. In order to overcome learners' fear it is essential to create a friendly and relaxed atmosphere in language classrooms, to encourage cooperation through peer work or small group work, and to apply techniques for language acquisition that suit and involve individual learners. Correction is an essential condition for successful acquisition of any language. Learners must be given practice in self-correction of their own work, either individually or in pairs, but only if they prefer peer cooperation. However, students definitely need training in rectifying mistakes independently, i.e. without the teacher's interference. Left to their own devices, learners might be overwhelmed or frustrated by task intricacy.

- Mistakes are a natural part of learning and must be considered part of cognition; therefore, teachers should not humiliate or rebuke students for committing any mistakes.
- Teachers have to recognize the well-known fact that learning ability varies from person to person and that all language learning is based on continual exposure.
- Students should very frequently be given the chance to correct each other's work, which is very important because self-correction or peer correction helps to focus students' attention on the errors and to reduce reliance on the teacher.
- We should never correct a mistake; rather, we should always correct a person. So the active involvement of students in the process of dealing with mistakes is important: it stimulates active learning, induces a cooperative atmosphere, and develops independent learners.
- The best time to correct is 'as late as possible', because it creates a friendly and cooperative atmosphere.
- Learners must be given practice in self-correction of their own work, either individually or in pairs, but only if they prefer peer cooperation. However, students definitely need training in rectifying mistakes independently, i.e. without the teacher's interference.

For good writing one has to practice regularly.
If the students write regularly, it is possible to remove their errors of writing. The students' willingness and the teachers' help can enable them to write perfectly and without errors. I believe that there is a great opportunity to work further on this research, and that later researchers will properly bring out its findings.
<urn:uuid:9f5bf7ae-429f-4830-a85d-e4106253eedb>
CC-MAIN-2022-33
https://www.lawyersnjurists.com/article/errors-in-writing-made-by-students-of-pangsha-post-graduate-college-rajbari/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00405.warc.gz
en
0.952529
3,445
3.609375
4
High school is a very difficult time for kids, and even more so if they were not raised in this country. At this age, adolescents are more independent, they are searching for their identity, their need to belong to a group is exacerbated, and they are willing to try different things. They are also more susceptible to bad influences. All good reasons for you to remain involved in their education and in their lives even when teenagers are reluctant to allow their parents to have an opinion about anything. One of the areas you should be involved in is the balance between your children’s social life and their school work. As important as it is to encourage kids to spend time with other youngsters, high school is also a time to focus on hard work. Grades are increasingly important for your kids to have more opportunities to go to the colleges they choose and to get scholarships. Another area in which you should get involved is in finding mentors for your children. (Mentors are people who guide and help others along the way.) If you can’t guide your kids through high school as much as you would like to, help them find people who can. It can be someone in your family or the family of one of your children’s friends. Look for a person who understands the education system well and ask him or her to help you with your children’s questions. You may also want to identify several people who have interesting careers and occupations about which your children may want to learn more. There’s nothing better for students to get a clear picture of what a job entails than spending some time with a person who is doing that job. If you can’t identify a mentor for your child, don’t worry. In most communities there are several mentoring programs that you can tap into, like Big Brothers and Big Sisters for example. You can also find many mentoring programs through a Guide to Mentor Program Listings. You should also talk to your child’s guidance counselor or school principal and ask for suggestions. If you don’t get a satisfactory answer, visit your local library and talk to the librarian. Remember that developing a close relationship with adults who serve as mentors is a great way for children to stay the course, and many times, it’s easier for kids to listen to adults who are not their parents. Just make sure you know the adults well before you trust them with your kids. Being involved with your children’s education means that you should consider being a PTA member just as when your children were younger. It also means that you have to maintain as open a dialog with your kids as possible. The more you talk to them, the more you listen to them, the more in-tune you are with their activities, their obligations, and their concerns, the less likely it will be that your children will get into trouble. “There are some family values that are escaping many American families at this time, but that still exist in other countries,” explains Mr. Strange from Cherry Valley-Springfield Junior Senior High School. “Families need to eat dinner together.
They need a time, a place, and a reason to sit down and talk and this should be the rule and not the exception. I blame the TV and the microwave for this change, and I also blame our obsession with keeping kids busy since it is often this continual activity treadmill that denies parents the quality time they need to be spending directly with their children.” In the last few years there has been a lot of research connecting the increase risk factors in children who live in homes where the family doesn’t have a meal together every day. The Latino culture values food and family so much, that if this is a tradition that got lost in your home, you should try to reinstate it as soon as possible. Make sure the entire family sits for dinner every night— keep the TV off to encourage conversation and take the opportunity to share what the day was like for each one of you. Get in the habit of telling each other stories. Marcela Hoffer, a clinical social worker, suggests that keeping an open dialog is not only about talking: “If parents realize that their kid loves baseball for example, they should sit down with them to watch a game. If they share an interest, even without talking, the child will feel understood and not alone.” In order for students to get all the necessary credits for graduation, they need to have a plan. Your mission should be to get involved in the plan early on in order to help them meet the graduation requirements. By looking together at the list of course offerings, you can discuss with your children the different possibilities. For example, if the school requires two credits of history or social studies, they may be able to choose between American History, World History, Ancient History, etc. You may suggest certain subjects based on your children’s interests and where they could be headed in the future. Again, if you feel that you are not the best person to help them make these kinds of decisions, try to find them someone who can guide them. When evaluating courses, encourage your children to choose more difficult classes. Not only will they help them to develop strong academic skills, which are crucial for students aiming to go to college, but they will keep them more engaged. Sometimes kids may choose easier classes just because their friends are taking them or so they don’t have to work so hard. But you know, from your own experience, that when you coast at work, you pay for it later. Help them evaluate the advantages of taking honors courses, Advanced Placement classes, and the International Baccalaureate program. Also, make sure you speak up if your child gets placed into a class he or she didn’t request or is below his or her abilities. Many times, parents are more persuasive than students when it comes to issues like this, so make your voice heard if your child is not getting the attention he or she deserves. The best idea is to intervene quickly because these changes should be done very early in the semester. It’s essential for you to become familiar with the school personnel from day one. You need to know not just your child’s teachers but every adult in the building who is involved with your child in any way. Different schools have different structures. Some may have deans, department chairs, attendance officers, peer counselors, guidance counselors, a center for new arrivals, etc. Understanding the structure will help you access it with ease whenever you need to. 
If you develop a strong relationship with all these people and have the habit of dropping by at your child’s school at any time, both your son or daughter and the teachers will get the message that you have high expectations for your child and that you are there if they need you. It is interesting how Anglo parents will fight to get their children evaluated if there is even the slightest suspicion that they may need special education, and how they fight to get the school to provide the extra help immediately and throughout their high school career. In contrast, Latino parents tend to react negatively when they are told that their children may need to receive special services. In many instances they refuse to have their children evaluated, and they also refuse the additional help that the school is offering. Children don’t develop learning disabilities or any other disability overnight. So, if your child is entering the American school system for the first time at the high school level, and he or she has a disability he or she should have already been identified as in need of special services. However, according to Anthony Bellettieri, school psychologist at the middle and high school levels, many students who enter school speaking no English may slip through the system without being identified in need of special education until high school just because they don’t speak the language yet and people think that is the problem. Whatever the specifics of your situation, if you notice your children having any difficulties—in communicating,understanding, hearing, or in behavioral,emotional, psychological, or motor skills—you should inform the teachers right away. The idea is for your child to be evaluated immediately so that the problem can be identified and your child can begin receiving services. By law, if your child’s first language is Spanish, he or she should be evaluated in Spanish by a native speaker. And also by law, the school can not evaluate your child without your consent. In this country there is a huge array of services available to students with any kind of disability. The idea is for them to receive these services in the least restrictive environment, which means that whenever possible it is better for the student to attend a regular class where a second teacher works with him or her one-on-one. Another alternative may be that the child is taken out of the classroom for a period to work on a specific subject where he or she needs reinforcement. Children who are severely disabled may be placed in a separate class. What is important for you to take into consideration is that you are the most important advocate for your children. If they need extra help, you need to work on getting it for them. So, first approach the teachers. If you don’t get a satisfactory solution to the problem, talk to the principal, and if this still doesn’t work, talk to the person at the district office who is in charge of special education. Don’t stop until you get your child the help he or she needs! If your children complain about a teacher’s racially biased comment or attitude, try to understand what happened and try to avoid passing judgment right away. Think for a moment about your own prejudices and make space for the possibility that the teacher made a mistake. Ask your children to make an appointment with the teacher where they should explain that the comments or the attitude are making them feel uncomfortable. 
Suggest that they speak in the first person—“I feel put down when you make these comments”—so that they simply express their feelings and the teacher doesn’t feel attacked. Sometimes, just having this meeting will resolve the problem. However, your children may need to talk to the counselor and ask for advice or—if things don’t get better—for intervention. If needed, the next step of this process should be a meeting between you, the teacher, the counselor and your child. If the situation doesn’t improve, you may want to consider speaking to the principal of the school. Throughout the situation, keep calm. The more rational you are, the better you are able to explain the situation from your child’s point of view—and maybe even clarify some cultural stereotype—and the better your chances of resolving the situation with little negative impact on everyone involved. High school is also a good time for you and your children to begin exploring their vocations. By now, you probably know what their talents are. Begin discussing what they would like to study when they finish school. Although their ideas may differ from what you want them to study, be open. Youngsters who get pressured to follow a certain career—to support the family business for example, or something that is prestigious in your country—tend to rebel by refusing to go to college. Try to understand that a career is something your children will have to live with for the rest of their lives. It should be their choice. Having said that, you can guide your children in the process of finding a career that fits their needs and talents. A career in art, for example, does not necessarily mean they will starve.You can help them explore interesting artistic careers that will both allow them to express their talents and support themselves. Talk to the career counselor in school for ideas, visit a career center at the local community college, or conduct Internet searches with your children. In the United States, people’s vocations are highly valued. In a competitive market such as this, it is very important for your children to choose a career where they feel the drive to grow and compete. If you force them to follow the career of your dreams, or what is valued in your country, it is very possible that they will not reach their full potential, and they will be unhappy. Think about it this way—there are probably many aspects of the Latino culture that are a priority to you, like your language, your family values, your religion, etc. These are the aspects you feel that your children must keep alive. Try making sure that your kids carry on these traditions while at the same time realize that in order for them to have better chances to succeed in America you will have to lose some battles. One of them may be the vocation battle. Letting young people choose what they wish to do with their careers is an American trait you may need to embrace. Most schools offer career exploration courses, but even if your child’s school doesn’t, it is very likely that it has career programs in its software library that your child can use. Your children can do their own exploration and then discuss their results with you, the guidance or career counselor at school, or another adult who can mentor them. Some of the available programs are: Coin, Choices, and Discover. 
They may also want to take a look at the Occupational Outlook Handbook, a directory published by the federal government that lists every single job out there along with their requirements, their pay scale, and future projections. Reprinted with permission from Help Your Children Succeed in High School and Go to College by Mariela Dabbah (Chapter 5: Parent Involvement in High School). Copyright © 2007 Mariela Dabbah and Sourcebooks, Inc, Naperville, Illinois. All Rights Reserved. For more information or how to purchase Mariela Dabbah's books, visit the author's website or the Latinos in College website. Posted on February 2, 2011 by Mariela Dabbah [Guest Article] Mariela Dabbah, author of numerous books on Latino parent involvement, bridges the cultural divide, helping parents understand their roles in the American school system and in the education of their child. This article, excerpted with permission from her successful book, "Help Children Succeed in High School and Go to College," teaches Latinos how to support their children in high school by identifying mentors, keeping channels of communication open, helping with the choice of courses, etc.
<urn:uuid:7ab1fd6b-73c7-4c88-bcdb-f98ee2046558>
CC-MAIN-2022-33
http://www.parentinvolvementmatters.org/articles/mariela-dabbah.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00405.warc.gz
en
0.963409
3,518
2.703125
3
April 23, 2021

Dukkha samudaya means "origin of suffering." Kammic energies for future suffering accumulate via Paṭicca Samuppāda (PS). We start acting with avijjā and initiate PS when sensory inputs trigger temptations and generate "samphassa-jā-vedanā" or "mind-made feelings."

Critical Conclusions from Loka Sutta (SN 35.23)

1. In the previous post "Loka Sutta – Origin and Cessation of the World" we reached the following conclusions. (You may want to print it and refer to it as we proceed.)
- It is the Paṭicca Samuppāda (PS) process that describes the key steps leading to future suffering.
- However, that process DOES NOT initiate with "avijjā paccayā saṅkhāra." We don't AUTOMATICALLY start acting with avijjā!
- A sensory input triggers the Paṭicca Samuppāda process: sight, sound, taste, smell, touch, or the memory of a past event (dhammā.)
- If the mind attaches to such a sensory event (taṇhā), that attachment leads to upādāna (keeping it in the mind and getting stuck in it.) Then, while in the "upādāna paccayā bhava" step, we accumulate kammic energy for future births with mano, vaci, and kāya saṅkhāra. That is how the PS process gets to "avijjā paccayā saṅkhāra."
- Before we discuss those details, it is critical to understand how this whole process starts with "getting attached" to certain sensory inputs (ārammaṇa) with "samphassa-jā-vedanā" or "mind-made feelings."
- By the way, "dukkha samudaya" is the same as "loka samudaya." That is why Nibbāna implies "stopping future rebirths" or "stopping the re-arising of this world." It may take time for this critical point to sink in.

"Samphassa-jā-Vedanā" – Example 1

2. A sensory input comes through one of the six senses: eyes (sights), ears (sounds), nose (smells), tongue (tastes), body (touches), mind (memories). In Pāli they are the six types of vipāka viññāṇa.
- Let us consider a simple example starting with cakkhu viññāṇa. Suppose three people A, B, C are sitting in a small coffee shop. They are all facing the door, and person X walks in. Suppose that person X is a close friend of A, the worst enemy of B, and that C does not know X at all. We will also assume that all 4 are males.
- So, let us see what happens within a split second. A recognizes X as his friend, and a smile comes to his face. B recognizes X as his enemy, and his face gets darkened.
- On the other hand, X is just another person to C. He immediately goes back to whatever he was doing.

3. That is an example of a "cakkhu viññāṇa," a "seeing event." It is over within a split second, just as taking a photo with a camera takes only a split second, where the image is captured on the screen instantaneously.
- However, something very complicated happens in the human mind when a "seeing event" occurs.
- It is critically important to go slow and analyze what happens so that we can see how complicated this process is (for a human mind) to capture that "seeing event." It is much more complicated than just recording "a picture" in a camera.

4. Within that split second, A recognizes X as his good friend, and joy arises in his mind, and he becomes happy. B recognizes X as his worst enemy, and bad emotions arise in his mind, and he becomes angry. On the other hand, no extra feelings arise in C; he goes back to whatever he was doing.
- As we can see, such vastly varying feelings arise due to the three steps that follow the "seeing event" or cakkhu viññāṇa.
As we remember from the previous post (refer to the printout) those three steps are “Tiṇṇaṁ saṅgati phasso; Phassa paccayā vedanā; vedanā paccayā taṇhā.” As we discussed, the last two steps really are “samphassa paccayā samphassa-jā-vedanā” and “samphassa-jā-vedanā paccayā taṇhā.” - The 3 people A, B, and C generate different “san gati” upon seeing X. Even though they all see the same person X, three different types of “samphassa-jā-vedanā”: joy, anger, neutral feelings arise respectively in A, B, and C. - How does the SAME “seeing event” (seeing X) lead to all these very different changes in the minds of three different people? (and the emotions even show up on their faces!) 5. Since all three people A, B, C are average humans, they have not removed “san gati” or defilements from their minds. Such “san gati” remain hidden as “anusaya” in all three of them. - However, a trigger is needed to bring those “san gati” to the surface. A has had “good experiences with X” and thus “affectionate san gati” arose in him upon seeing X. B’s experiences with X were not good and those “bad memories” were triggered by seeing X. - On the other hand, C has had no prior experiences with X. Thus, a trigger for “samphassa-jā-vēdanā” was not there. But if C sees a person he is familiar with, that may trigger his “san gati“. - If C was an Arahant, then he would not have any “san gati” left. Thus, affection or anger would not arise upon seeing ANY person. - The best way to comprehend this key point is to think about your own experiences. Kamma Generation Depends on One’s Actions Based on the Initial “Attachment” 6. Once bound to an event with “samphassa” that leads to a corresponding “mind-made feelings” or samphassa-jā-vedanā. Joyous feelings arose in A and angry feelings arose in B upon seeing X. Both A and B got “attached” to that event. Thus, taṇhā can arise via greed or anger. - Person A may start talking to X with excitement, especially if X is a close friend. B’s face may darken and many angry thoughts about his past experiences with X may arise in him. Both are “samphassa-jā-vedanā paccayā taṇhā” and “taṇhā paccayā upādāna.” - The next step of “upādāna paccayā bhavo” depends on what happens next. In this particular case, it is possible that B may start accumulating “bad kamma” just by cultivating “bad vaci saṅkhāra” in his mind, even if he does not say or do anything. Such “bad thoughts” arise via “avijjā paccayā saṅkhāra” where saṅkhāra are vaci saṅkhāra (not speaking out, but talking to himself.) - But it could get worse if B’s anger rises and he says something bad to X. That is also “bad vaci saṅkhāra“. If X responds and the situation escalates, B may hit X. That is getting to the “bad kāya saṅkhāra” stage. All these lead to the accumulation of “bad kamma” for B. - That is a brief example of how one could generate kammic energy for future existences, even if this particular action may not be strong enough to “powerup” a new birth. However, if the situation escalates and B kills X, then that would certainly be a strong kamma leading to a new birth in an apāya. “Samphassa-jā-Vedanā“ – Example 2 7. Let us clarify it further with an example since it is critical to understand this issue. Suppose a friend visits an alcoholic (X) and brings a bottle of alcohol. Again, let us follow the steps in #2 of the previous post. - First, X sees that his friend has brought a bottle of alcohol, his favorite kind. 
This is the “seeing event” in this example: “cakkhuñca paṭicca rūpe ca uppajjati cakkhuviññāṇaṁ.” This cakkhu viññāṇa is a vipāka viññāṇa and no kamma generated. Even an Arahant would see the bottle. - Next is the CRITICAL step “tiṇṇaṁ saṅgati phasso” where X’s mind instantly makes the “san phassa” or “defiled contact” with his “alcoholic gati.” - Note the two types of “contacts” in the above two processes. In the first, the “phassa cetasika” in cakkhu viññāṇa makes the “contact” between cakkhu and rupa (alcohol bottle) to give rise to cakkhu viññāṇa (seeing the bottle.) In the second it is a “defiled contact” (samphassa) that arises due to his craving for alcohol. - On the other hand, if someone brought a bottle of alcohol to an Arahant he would also see the bottle, i.e., cakkhu viññāṇa with the “phassa cetasika” will also arise in him. But there would no “tiṇṇaṁ saṅgati phasso” and, thus, the process will stop there. 8. Once X got “attached” to the bottle of alcohol with samphassa he becomes joyful and that joyous feeling is samphassa-jā-vēdanā: Samphassa led to “Samphassa-jā-vedanā” - Therefore, the “extra vedanā” made up by the mind is the “samphassa-jā-vedanā.” Here, “jā” means “generated with.” That vedanā was generated by samphassa (san phassa). - Suppose X’s wife is also at home when the friend brings the bottle. She would not be happy to see the bottle, especially if she is trying to break the “drinking habit” of her husband. She may even get angry with the friend. That is also a samphassa-jā-vedanā. - On the other hand, the Arahant will also see the bottle and will identify it as such. But there will be no joy or dismay. There will be no samphassa-jā-vedanā. 9. The “samphassa-jā-vedanā” of joyous feelings in X makes him attach (taṇhā) which immediately leads to the next step of upādāna. Which means his mind is now focused on the alcohol bottle. - If his wife is opposed to him having alcohol often, she may become agitated. Even if she may not say anything, she could get mad with the friend for bringing the alcohol bottle. Does he not know that he is easily tempted? Did the two of them plan to ‘have a drink” without her knowing? She also gets to the “taṇhā” and “upādāna” stages. - Of course, an Arahant would not “get attached” or “get stuck” (no taṇhā or upādāna.) Generating Kamma Starts With the “Taṇhā Paccayā Upādānaṁ” Step 10. Therefore, once getting attached with taṇhā, the next step of “getting stuck and proceeding along” is likely to happen with “taṇhā paccayā upādāna” and “upādāna paccayā bhavō” steps. - This is where X started getting ready to “have a good time with the friend.” He would think, speak, and act to have a “good time ” with his friend. - However, it is possible to stop the process at that point by acting mindfully. If X has seen the dangers of keeping his “drinking habit” he can think about the bad consequences of engaging in that practice and tell the friend that he is trying to get rid of his drinking habit. Thus he could start acting with “vijjā” (or wisdom) and NOT engage in “avijjā paccayā saṅkhāra.” - That is the basis of the correct Ānāpānasati or Satipaṭṭhāna Bhāvanā. Puredhamma Twitter Account 11. Twitter account for the website: puredhamma (@puredhamma1) / Twitter - Twitter handle: puredhamma1 - Will Tweet a new or re-written post. Other posts in this series at “Paṭicca Samuppāda – Essential Concepts.”
<urn:uuid:2c5cb4f5-e8c1-4092-a837-7d0d73d964e8>
CC-MAIN-2022-33
https://puredhamma.net/paticca-samuppada/paticca-samuppada-essential-concepts/concepts-of-upadana-and-upadanakkhandha/dukkha-samudaya-starts-with-samphassa-ja-vedana/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572515.15/warc/CC-MAIN-20220816181215-20220816211215-00605.warc.gz
en
0.947239
3,261
2.515625
3
Jennifer is a 2-year-old female who presents with her mother.

Assignment: Adaptive Response Assessment

As an advanced practice nurse, you will examine patients presenting with a variety of disorders. You must, therefore, understand how the body normally functions so that you can identify when it is reacting to changes. Often, when changes occur in body systems, the body reacts with compensatory mechanisms. These compensatory mechanisms, such as adaptive responses, might be signs and symptoms of alterations or underlying disorders. In the clinical setting, you use these responses, along with other patient factors, to lead you to a diagnosis.

Consider the following scenarios:

Jennifer is a 2-year-old female who presents with her mother. Mom is concerned because Jennifer has been “running a temperature” for the last 3 days. Mom says that Jennifer is usually healthy and has no significant medical history. She was in her usual state of good health until 3 days ago when she started to get fussy, would not eat her breakfast, and would not sit still for her favorite television cartoon. Since then she has had a fever off and on, anywhere between 101°F and today’s high of 103.2°F. Mom has been giving her ibuprofen, but when the fever went up to 103.2°F today, she felt that she should come in for evaluation. A physical examination reveals a height- and weight-appropriate 2-year-old female who appears acutely unwell. Her skin is hot and dry. The tympanic membranes are slightly reddened on the periphery, but otherwise normal in appearance. The throat is erythematous with 4+ tonsils and diffuse exudates. Anterior cervical nodes are readily palpable and clearly tender to touch on the left side. The child indicates that her throat hurts “a lot” and it is painful to swallow. Vital signs reveal a temperature of 102.8°F, a pulse of 128 beats per minute, and a respiratory rate of 24 breaths per minute.

Jack is a 27-year-old male who presents with redness and irritation of his hands. He reports that he has never had a problem like this before, but about 2 weeks ago he noticed that both his hands seemed to be really red and flaky. He denies any discomfort, stating that sometimes they feel “a little bit hot,” but otherwise they feel fine. He does not understand why they are so red. His wife told him that he might have an allergy and he should get some steroid cream. Jack has no known allergies and no significant medical history except for recurrent ear infections as a child. He denies any traumatic injury or known exposure to irritants. He is a maintenance engineer in a newspaper building and admits that he often works with abrasive solvents and chemicals. Normally he wears protective gloves, but lately they seem to be in short supply, so sometimes he does not use them. He has exposed his hands to some of these cleaning fluids, but says that it never hurt and he always washed his hands when he was finished.

Martha is a 65-year-old woman who recently retired from her job as an administrative assistant at a local hospital. Her medical history is significant for hypertension, which has been controlled for years with hydrochlorothiazide. She reports that lately she is having a lot of trouble sleeping, she occasionally feels like she has a “racing heartbeat,” and she is losing her appetite. She emphasizes that she is not hungry like she used to be. The only significant change that has occurred lately in her life is that her 87-year-old mother moved into her home a few years ago.
Mom had always been healthy, but she fell down a flight of stairs and broke her hip. Her recovery was a difficult one, as she has lost a lot of mobility and independence and needs to rely on her daughter for assistance with activities of daily living. Martha says it is not the retirement she dreamed about, but she is an only child and is happy to care for her mother. Mom wakes up early in the morning, likes to bathe every day, and has always eaten 5 small meals daily. Martha has to put a lot of time into caring for her mother, so it is almost a “blessing” that Martha is sleeping and eating less. She is worried about her own health, though, and wants to know why, at her age, she suddenly needs less sleep.

Review the three scenarios, as well as Chapter 6 in the Huether and McCance text. Identify the pathophysiology of the disorders presented in the scenarios, including their associated alterations. Consider the adaptive responses to the alterations. Review the “Mind Maps—Dementia, Endocarditis, and Gastro-oesophageal Reflux Disease (GERD)” media in this week’s Learning Resources. Then select one of the disorders you identified from the scenarios. Use the examples in the media as a guide to construct a mind map for the disorder you selected. Consider the epidemiology, pathophysiology, risk factors, clinical presentation, and diagnosis of the disorder, as well as any adaptive responses to alterations.

Write a 2- to 3-page paper that addresses the following:

- Explain the pathophysiology of the disorders depicted in the scenarios, including their associated alterations. Be sure to describe the patients’ adaptive responses to the alterations.
- Construct a mind map of your selected disorder. Include the epidemiology, pathophysiology, risk factors, clinical presentation, and diagnosis of the disorder, as well as any adaptive responses to alterations.

NB: In addition, the 2- to 3-page paper should include the pathophysiology of the three disorders depicted in the scenarios, followed by the one disorder you select, for which you are asked to construct a mind map. The rubric needs to be attached to the end of the paper.

You must proofread your paper. But do not strictly rely on your computer’s spell-checker and grammar-checker; failure to do so indicates a lack of effort on your part and you can expect your grade to suffer accordingly. Papers with numerous misspelled words and grammatical mistakes will be penalized. Read over your paper – in silence and then aloud – before handing it in and make corrections as necessary. Often it is advantageous to have a friend proofread your paper for obvious errors. Handwritten corrections are preferable to uncorrected mistakes.

Use a standard 10 to 12 point (10 to 12 characters per inch) typeface. Smaller or compressed type and papers with small margins or single-spacing are hard to read. It is better to let your essay run over the recommended number of pages than to try to compress it into fewer pages. Likewise, large type, large margins, large indentations, triple-spacing, increased leading (space between lines), increased kerning (space between letters), and any other such attempts at “padding” to increase the length of a paper are unacceptable, wasteful of trees, and will not fool your professor.

The paper must be neatly formatted, double-spaced with a one-inch margin on the top, bottom, and sides of each page. When submitting hard copy, be sure to use white paper and print out using dark ink. If it is hard to read your essay, it will also be hard to follow your argument.
ADDITIONAL INSTRUCTIONS FOR THE CLASS

Discussion Questions (DQ)

Initial responses to the DQ should address all components of the questions asked, include a minimum of one scholarly source, and be at least 250 words. Successful responses are substantive (i.e., add something new to the discussion, engage others in the discussion, well-developed idea) and include at least one scholarly source. One or two sentence responses, simple statements of agreement or “good post,” and responses that are off-topic will not count as substantive. Substantive responses should be at least 150 words. I encourage you to incorporate the readings from the week (as applicable) into your responses.

Your initial responses to the mandatory DQ do not count toward participation and are graded separately. In addition to the DQ responses, you must post at least one reply to peers (or me) on three separate days, for a total of three replies. Participation posts do not require a scholarly source/citation (unless you cite someone else’s work). Part of your weekly participation includes viewing the weekly announcement and attesting to watching it in the comments. These announcements are made to ensure you understand everything that is due during the week.

APA Format and Writing Quality

Familiarize yourself with APA format and practice using it correctly. It is used for most writing assignments for your degree. Visit the Writing Center in the Student Success Center, under the Resources tab in LoudCloud, for APA paper templates, citation examples, tips, etc. Points will be deducted for poor use of APA format or absence of APA format (if required). Cite all sources of information! When in doubt, cite the source. Paraphrasing also requires a citation. I highly recommend using the APA Publication Manual, 6th edition.

Use of Direct Quotes

I discourage overutilization of direct quotes in DQs and assignments at the Masters’ level and deduct points accordingly. As Masters’ level students, it is important that you be able to critically analyze and interpret information from journal articles and other resources. Simply restating someone else’s words does not demonstrate an understanding of the content or critical analysis of the content. It is best to paraphrase content and cite your source.

For assignments that need to be submitted to LopesWrite, please be sure you have received your report and Similarity Index (SI) percentage BEFORE you do a “final submit” to me. Once you have received your report, please review it. This report will show you grammatical, punctuation, and spelling errors that can easily be fixed. Take the extra few minutes to review instead of getting counted off for these mistakes. Review your similarities. Did you forget to cite something? Did you not paraphrase well enough? Is your paper made up of someone else’s thoughts more than your own? Visit the Writing Center in the Student Success Center, under the Resources tab in LoudCloud, for tips on improving your paper and SI score.

The university’s policy on late assignments is a 10% penalty PER DAY LATE. This also applies to late DQ replies. Please communicate with me if you anticipate having to submit an assignment late. I am happy to be flexible, with advance notice. We may be able to work out an extension based on extenuating circumstances. If you do not communicate with me before submitting an assignment late, the GCU late policy will be in effect. I do not accept assignments that are two or more weeks late unless we have worked out an extension.
As per policy, no assignments are accepted after the last day of class. Any assignment submitted after midnight on the last day of class will not be accepted for grading.

Communication is so very important. There are multiple ways to communicate with me:

Questions to Instructor Forum: This is a great place to ask course content or assignment questions. If you have a question, there is a good chance one of your peers does as well. This is a public forum for the class.
<urn:uuid:f800f98b-22db-4f85-8200-67f0d91b63c8>
CC-MAIN-2022-33
https://nursing-assignments.com/jennifer-is-a-2-year-old-female-who-presents-with-her-mother/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00205.warc.gz
en
0.958326
2,484
3.109375
3
Law continues the mission of Torah to instruct.

Exodus 21-23 forms a block of laws that are inserted into the narrative that the Book of Exodus is telling. (Scholars give these three chapters the title The Book of the Covenant.) There does not seem to be any grand organizing structure that governs the arrangement of the laws. Instead we find a mixture of laws dealing with a miscellany of concerns. For this reason many people reading Exodus for the first time can feel that they enter into a kind of tangled maze. What are we to make of this miscellany? Frustrated, we quickly skip over this section, eager to get back to the narrative.

Israelite law is a complicated field. It has stimulated a mass of scholarly exposition and debate over the centuries, especially in Jewish circles. One example is the massive learning to be found in the Talmud.

I am in no way so deeply learned in this subject that I can claim any authority as an interpreter of Israelite law. I will not try to explicate these three chapters of Exodus in any detail. But I will try to make a few broad statements that might assist ordinary readers to navigate their way through this block of material that can feel so foreign to the average reader, especially if they come from a Christian background as I do. What I say owes a special debt to a number of Jewish scholars I have read.*

God as the Law’s Authority

First, we should note the opening words (Exodus 21:1) of the Book of the Covenant: These are the ordinances that you [Moses] are to set before them [the Israelites]. We see from the closing verses of chapter 20 that the speaker is God. The import of this opening sentence is that the laws in the Book of the Covenant are not presented as cultural customs and precedents coming from the tribal life of Israel. They are set before us as expressions of God’s will.

We notice, too, that the laws in the Book of the Covenant cover both cultic/religious activities and secular/social activities. Americans make a strong distinction between the two realms of social and personal life. But the Torah does not. The life of Israel is seen as a unified whole. God’s concern in ordering the life of the people covers all aspects of their lives, not just the activities directly involved in religious worship. So we find laws governing agricultural life as well as laws governing sacrifice and religious festivals all expressing God’s area of concern.

Social Context of Agricultural Life

Second, the laws of the Book of the Covenant reflect a life anchored in villages and in an agricultural economy. There are no laws that reflect the concerns of urban dwellers. For example, there are no laws governing trade and commerce nor the work of urban artisans. This anomaly may be evidence for the antiquity of the laws in the Book of the Covenant. They may source back to the very earliest years of the Israelite people, before Israel had begun to develop an urban culture.

Life appears simple and uncomplicated. For example, the instructions for building the altars for sacrifice specify that the stones used should not be finely dressed, but rough and unhewn (Exodus 20:25).
On the other hand, this instruction may reflect that the editors who put Exodus together preferred a more simple, unadorned style of worship, in contrast to the sophisticated liturgies we might have encountered in the grand, ancient temples of the Near East.

No Distinction Among Social Classes

Third, the Book of the Covenant shows no awareness of any stratification in society, apart from the reality of slavery. By contrast, ancient Mesopotamian law codes (like the law code of the Babylonian king Hammurabi) assume that society is divided into three classes: the upper class of aristocrats and property owners, the lower class of peasants and laborers, and the lowest class of slaves. Provisions in the law vary according to one’s social class, especially in the assignment of fines and punishments.

There is none of that in the Book of the Covenant. If there is any stratification in Israelite society, it is to have no impact on the administration of justice. All free Israelites are to be treated fairly before the law.

Rights of Persons Take Priority over Rights of Property

Fourth, the Book of the Covenant shows that it places higher value on the rights of the person than on the rights of property. This, too, is in stark contrast to the Mesopotamian law codes, where preferences are given to the rights of property owners. Two examples illustrate this.

The first is the law governing the interaction between a creditor and a debtor expressed in Exodus 22:25-27:

If you lend money to my people, to the poor among you, you shall not deal with them as a creditor; you shall not exact interest from them. If you take your neighbor’s cloak in pawn, you shall restore it before the sun goes down; for it may be your neighbor’s only clothing to use as cover; in what else shall that person sleep? And if your neighbor cries out to me, I will listen, for I am compassionate.

This law assumes that a debtor had given his lender his cloak as security governing repayment. But the debtor may have only one cloak. That cloak serves not only as clothing, but also as a blanket when he sleeps on a cold night. The law shows a concern for his welfare, and so places a restriction on the right of the lender to retain the cloak during the night. The welfare of the debtor comes before the rights of the property owner.

A second example comes in the laws governing slavery in Exodus 21. The Book of the Covenant assumes that slavery will be a fact of life in Israelite society. But it shows a concern for placing safeguards on abusive behavior by slave owners. One example is found in Exodus 21:26-27:

When a slaveowner strikes the eye of a male or female slave, destroying it, the owner shall let the slave go, a free person, to compensate for the eye. If the owner knocks out a tooth of a male or female slave, the slave shall be let go, a free person, to compensate for the tooth.

Similarly the Book of the Covenant assumes that slaves have the right to enjoy rest on the sabbath day just as the master and his family (Exodus 23:12). Its ordinance is consistent with the commandment on keeping the sabbath day in the Ten Commandments, where the commandment explicitly embraces slaves and the farm animals in addition to the free members of the family (Exodus 20:8-10). There is a consciousness that slaves remain persons, even if they serve in a state of bondage.
Sensitivity to the Needs of Society’s Marginalized

The Book of the Covenant also shows a consciousness of its setting in the exodus experience. The laws (as do the prophets later on) show an acute sensitivity to the needs of society’s poor and marginalized. The marginalized are referred to in the stock phrase: the widows, the orphans, and the resident aliens.

For example, in Exodus 22:21-24 God says this to the Israelites:

You shall not wrong or oppress a resident alien, for you were aliens in the land of Egypt. You shall not abuse any widow or orphan. If you do abuse them, when they cry out to me, I will surely heed their cry; my wrath will burn, and I will kill you with the sword, and your wives shall become widows and your children orphans.

And in Exodus 23:9, these injunctions are explicitly tied to Israel’s experience of bondage in Egypt:

You shall not oppress a resident alien; you know the heart of an alien, for you were aliens in the land of Egypt.

Israelites are to constantly keep in mind their own bitter experience as marginalized people in Egypt as they regulate their own behavior towards the marginalized in their own midst.

This awareness of the poor and marginalized lies behind the Book of the Covenant’s demand for uncorrupted justice in lawsuits. In Exodus 23:6-8, we read:

You shall not pervert the justice due to your poor in their lawsuits. Keep far from a false charge, and do not kill the innocent and those in the right, for I will not acquit the guilty. You shall take no bribe, for a bribe blinds the officials, and subverts the cause of those who are in the right.

And most unexpectedly, the Book of the Covenant seems to even have a consciousness of the enemy as a person, too. We find this surprising instruction in Exodus 23:4-5: the Israelite is to return his enemy’s ox or donkey when it wanders astray, and to help lift up the overburdened donkey of one who hates him.

This should startle Christians. In the Sermon on the Mount, Jesus counsels his disciples to love their enemies and pray for those who persecute you (Matthew 5:44). Christians often assume this is some unprecedented new teaching on Jesus’ part. But the Book of the Covenant makes clear that Jesus is teaching in the tradition of the Jewish Torah.

Law as Agent of Character Formation

Lastly, we notice something surprising about the Book of the Covenant. It offers a miscellany of laws governing social and cultic life, but it is far from being comprehensive in covering all aspects of Israelite life. There are many omissions, as I have already noted when it comes to covering urban life and urban commerce and trade. There are no laws governing the highly conflict-ridden area of inheritance. This surprises us, because we would expect a much fuller coverage if the Book of the Covenant is meant to be a comprehensive law code that judges and administrators can consult when faced with particular lawsuits. Instead the Book of the Covenant is patchy in what it covers.

Why is that? The Jewish scholar Edward L. Greenstein suggests that it is because the laws in the Torah are not meant strictly to be a law code. They serve a didactic function. It is not accidental therefore that they are included in the Torah. He writes:

…the word torah itself means “instruction” or “teaching.” The laws of the Torah are one of its means of teaching; they are the specific behaviors by which God inculcates his ways–what we call values–in his human creatures.
If we are to understand these values we must read the laws, in a sense, as a sort of body language that outwardly symbolizes something of much deeper significance… The various norms that God commands the Israelites in the Torah were calculated to instill abstract values through concrete acts.**

This suggests for me a way Christians can read these laws of the Torah. The world they describe may seem very different from the world in which we live today. But as we meditate upon them, we can begin to absorb some of those enduring values that constitute a godly mindset, whether we are Jewish or Christian.

Mindset breeds character. And character, when deeply embedded into our personalities, can ensure that our behavior begins to take on the character of instinct. We act because of the way we are.

* For a non-scholarly reader (like most participants in small Christian Bible study groups), I would recommend two resources that I have found helpful. Both are learned, but very accessible to the average reader:

- Nahum M. Sarna, Exploring Exodus: The Origins of Biblical Israel. New York: Schocken Books, 1996. Chapter VIII, titled “The Laws,” is very insightful on the laws found in the Book of Exodus.
- Edward L. Greenstein, “Biblical Law,” in Back to the Sources: Reading the Classic Jewish Texts, edited by Barry W. Holtz. New York: Summit Books, 1984.

** Edward L. Greenstein, “Biblical Law,” in Back to the Sources, pages 84-85.
<urn:uuid:d6693766-66ea-4afd-a7f2-eb7c26db3d53>
CC-MAIN-2022-33
https://thebibleisinmyblood.wordpress.com/2022/01/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573699.52/warc/CC-MAIN-20220819131019-20220819161019-00002.warc.gz
en
0.94179
2,558
2.875
3
A scripturally-based approach to education introduces students to the goodness of God’s creation; the radical distortion in every dimension of life caused by human rebellion; and the renewal of all of life in the birth, life, death, resurrection and coming again of Jesus. In his resurrection and victory over death, Christ offers the opportunity for transformation and establishes his kingdom (his rule on earth and in heaven). The invitation has been given to become followers of Jesus and to join in being transforming agents of the kingdom.

Colossians 1:15 highlights two core truths. Firstly, all things in the world belong to and are upheld by Christ. Secondly, at the centre of all things, including education, is Christ. As we seek to explore Christ at the centre of all things, we have adopted the following schema:

- Creation. God in Christ has created all things. He is the sovereign creator, setting up humankind for a loving, responsive relationship with himself.
- Fall. All things have fallen as a result of sin. There is a distance and fracture.
- Redemption. Christ came to earth to redeem all things. The relationship can be re-established.
- Renewal. In response to Christ’s act, we are called to work in partnership with God as all things are renewed. We can be part of the renewed relationship.

Christian education aims to instruct the mind, shape the heart and equip the hands. Growing in wisdom and character requires well-rounded, whole of life formation – growth that is intellectual, emotional, physical, social and spiritual.

Catering for the individual needs of students

Students are individuals who learn at different rates and in different ways. These individual differences may influence how students respond to instruction and how they demonstrate what they know, understand and can do. Differentiation is a targeted process that involves forward planning, programming and instruction. It involves the use of teaching, learning and assessment strategies that are fair and flexible, provide an appropriate level of challenge, and engage students in learning in meaningful ways. Differentiated programming recognises an interrelationship between teaching, learning and assessment that informs future teaching and learning.

Differentiation:

- provides teaching, learning and assessment for learning experiences that cater for the diversity of learners so that all students can learn effectively
- provides alternative methods and choices for students to demonstrate their knowledge, understanding and skills
- considers a variety of resources and stimulus materials, including IT, to enhance student options and pathways through learning
- includes a range of activities and resources appropriate for students with different learning needs and levels of achievement
- promotes flexible learning experiences and encourages students to work at their own pace to develop their knowledge, understanding and skills
- monitors student learning over time, using evidence of student achievement to guide future teaching and learning opportunities
- considers how individualised feedback can help identify student strengths and areas for improvement.

Our Learning Support and Enrichment staff work closely with classroom teachers. Extension and enrichment activities are offered within the classroom, in focus groups and through external events and competitions. Students in regular classes who experience difficulties in areas of learning and behaviour are supported through learning and support resources.
Students also benefit from involvement in a wide variety of extracurricular and co-curricular opportunities, including debating, public speaking, sports, performing arts and other areas of interest.

Critical and Creative Thinking

Creating a culture where thinking is valued, visible and promoted

At Illawarra Christian School we recognise that it is important to nurture thinking in the daily lives of our students and to make it visible so that a culture of thinking can be built and a strong learning community established in our classrooms and throughout our school.

As teachers strive to create cultures of thinking in their classrooms, they recognise eight forces which shape the cultural dynamic in every group learning situation. These consist of language, time, environment, opportunities, routines, modelling, interactions and expectations. Teachers are actively engaged in developing practices that allow these forces to shape classroom cultures with a focus on thinking, learning and understanding.

It is only when we understand what our students are thinking that we can use that knowledge to further engage and support them in the process of understanding. Thus, making students’ thinking visible is a component of effective teaching. Visible Thinking is a flexible approach to integrating the development of students’ thinking with content learning across all stages and curriculum areas. Visible Thinking cultivates students’ thinking skills and dispositions, and also deepens content learning. Cultivating a thinking disposition means growing a student’s curiosity, concern for truth and understanding. Visible Thinking encourages peer collaboration, develops a growth mindset, focuses on learning rather than work, promotes student independence, teaches understanding rather than knowledge and is inherently differentiated.

Thinking routines are designed to support and structure students’ thinking. They operate as tools for promoting thinking and as scaffolds that can lead students’ thinking to higher levels. Thinking routines are used regularly in the classroom. They become part of the fabric of the classroom culture, and it is through the routines that students internalise messages about what learning is and how it happens.

Integrated Learning Technologies

Learning in and for the 21st century

Learning technologies are the communication, information and related technologies that can be used to support learning, teaching and assessment. They foster the skills required by the 21st century learner to successfully respond to a rapidly changing workplace and world. The integration of these learning technologies is driven by the underlying pedagogical focus of the school as outlined in this document.

Elements of integration include:

- redefining and modifying tasks in order to enhance student engagement
- providing opportunities for higher-order thinking rather than offering basic substitutions or augmentations
- using online tools to enhance collaboration beyond the confines of the classroom
- using learning management systems to give anytime/anywhere access to learning content
- using technology to allow multiple pathways through learning content
- using technology-based communication tools to enhance student learning
- using technology-based tools for teacher, student, self and peer assessment.
Cooperative & Collaborative Learning Engaging with each other and learning together In cooperative and collaborative learning, there is an emphasis on interdependence, while maintaining individual accountability and participation. Students are encouraged to take responsibility for their own learning. Teaching and learning are seen as shared experiences that are mutually enriching. The teacher’s role will include facilitation. Cooperative and collaborative learning are predicated on a belief in a learner-centred approach to education. Learning in an active mode is highly effective. Students work together on tasks that have been designed for use in small groups, small-group activities being conducive to developing higher-order thinking skills and the ability to use knowledge. Sharing ideas in a group enhances the learner’s ability to reflect on his/her own assumptions and thought processes. Group work is also valued for its potential to develop social and team-building skills. It both utilises and builds an appreciation of diversity. Cooperative and collaborative learning provide a context where: - learners actively participate - teachers become learners at times, and learners sometimes teach - respect is given to every member - projects and questions interest and challenge students - diversity is celebrated, and all contributions are valued - students learn skills for resolving conflicts when they arise - members draw upon their past experience and knowledge - goals are clearly identified and used as a guide - research tools are made available - students are invested in their own learning. Providing ongoing, interactive feedback to improve teaching and learning Formative assessment (assessment for learning) monitors student learning on a daily basis to provide ongoing feedback that can be used by teachers to improve their teaching and by students to improve their learning. Formative assessment helps students identify their strengths and weaknesses so that they can target areas that need improvement. It provides teachers with information about the learning that is (or is not) taking place so that problems can be addressed and teaching can be adapted accordingly. Effective formative assessment happens minute-by-minute and day-by-day, not at the end of a learning sequence. Formative assessment encourages student engagement as the teacher clarifies learning goals, provides students with frequent feedback on their progress toward the goals and adjusts learning tasks so that they are at the optimal level of challenge for students. Formative assessment increases a student’s belief that he or she can succeed. When students receive daily feedback and see evidence of progress, they are more motivated to take on more learning challenges. - reflects a pedagogy in which assessment improves student learning - involves formal and informal assessment activities as part of the learning process - informs the planning of future learning - includes clear goals for the learning activity - involves the establishment of clear learning intentions for all lessons - provides effective feedback that motivates the learner and can lead to improvement - reflects a belief that all students can improve - encourages self-assessment and peer assessment as part of regular classroom routines - involves teachers, students and parents reflecting on evidence - is inclusive of all learners. 
Valuing dedication, application, challenge and growth

People with a growth mindset believe that intelligence is malleable and can be developed through education and hard work. They want to learn because they believe this will expand their intellectual skills. They believe that strategic effort leads to improvement, and they find challenges energising rather than intimidating because they offer opportunities to learn.

Extensive scientific investigation suggests that an overemphasis on intelligence or talent leaves students vulnerable to failure, fearful of challenges and unwilling to remedy their shortcomings. Talents are not innate gifts, but the result of a slow, invisible accretion of skills. Everyone is born with differences and some with unique advantages for certain tasks, but no one is genetically designed into success. ‘Ordinary’ people have a remarkable potential for change with practice.

For practice to bring about growth, it needs to be purposeful and sustained. It is only by working at what we can’t do that we can grow our expertise in any field. Feedback needs to be embedded in practice if improvements are to be generated. Teachers need to give specific, purposeful feedback to students, showing them areas of weakness and supporting them with strategies to address these areas, and it is essential that students heed this feedback in their ongoing practice.

Teachers can transmit a growth mindset to students by:

- telling stories about achievements that result from hard work
- emphasising challenge, not success; portraying challenge as exciting
- giving meaningful learning tasks that give students a clear sense of progress toward mastery
- praising students for the specific process that has been used to accomplish something – their effort, strategies, focus, etc
- giving explicit instruction regarding the mind as a learning machine
- viewing mistakes as an opportunity for learning.

Personal Development & Welfare

Nurturing students in a safe, secure learning environment

Illawarra Christian School is committed to the provision of a safe, secure and well-managed learning environment. Students are provided with the opportunity to develop their interests, skills and knowledge in a community where their social and emotional well-being is nurtured.

At Illawarra Christian School we seek to partner with parents in equipping young people to serve and honour God. We believe that an environment where biblical truths are placed at the core of the way we function will create the most effective learning environment. Daily devotions and the curriculum developed from a Christian worldview allow students the opportunity to engage with contemporary culture in the light of biblical truth.

Accountability, guidance and correction are integral to the way we seek to disciple students, equipping them to serve God and others through appropriate attitudes and actions. Students are encouraged to demonstrate respect for themselves, their peers and their teachers, while staff seek to ensure a firm, fair and friendly approach toward discipline as they train students in godly wisdom.

Programs focusing on resilience, self-esteem, teamwork, leadership, organisation and positive peer interactions are integrated into the pastoral care curriculum to enable students to mature in their Christian character. The core of our pastoral care curriculum aims to equip students to use their abilities, time and resources to serve God, their school and the wider community.
This is achieved through classroom learning and service experience.
<urn:uuid:e5bf2b48-6ad5-4926-8d49-534c515fa176>
CC-MAIN-2022-33
https://www.ics.nsw.edu.au/learning-and-support/educational-framework/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00005.warc.gz
en
0.953168
2,534
2.703125
3
The families of flowering plants. Including Hibbertiaceae J.G. Agardh

Habit and leaf form. Trees, shrubs, and lianas, or herbs (a few). ‘Normal’ plants, or switch-plants (occasionally); switch plants with the principal photosynthesizing function transferred to stems. Leaves well developed (usually), or much reduced. With a basal aggregation of leaves (sometimes), or with neither basal nor terminal aggregations of leaves. Self supporting, or climbing; sometimes stem twiners; Hibbertia recorded as reversibly twining clockwise, or twining anticlockwise. Sometimes leptocaul. Mesophytic, or xerophytic. Leaves usually deciduous; alternate (usually), or opposite (rarely); usually spiral; leathery (often), or ‘herbaceous’, or membranous; petiolate; sheathing, or non-sheathing; gland-dotted, or not gland-dotted; simple. Lamina entire (usually), or dissected (occasionally lobed); one-veined, or pinnately veined, or palmately veined. Leaves stipulate (the stipules winglike, adnate to the petiole), or exstipulate. Lamina margins entire, or serrate. Leaves without a persistent basal meristem. Domatia occurring in the family (from two genera); manifested as pockets, or hair tufts.

General anatomy. Plants with ‘crystal sand’, or without ‘crystal sand’.

Leaf anatomy. Stomata anomocytic (usually), or paracytic (Tetracera). The mesophyll with sclerenchymatous idioblasts, or without sclerenchymatous idioblasts. Minor leaf veins without phloem transfer cells (Dillenia).

Stem anatomy. Cork cambium present; initially deep-seated (usually), or superficial. Nodes unilacunar, or tri-lacunar, or penta-lacunar to multilacunar (?). Internal phloem absent. Secondary thickening developing from a conventional cambial ring, or anomalous (rarely?); when anomalous, via concentric cambia (Doliocarpus). ‘Included’ phloem present (rarely?), or absent. Xylem with tracheids; with vessels. Vessel end-walls scalariform, or scalariform and simple. Vessels without vestured pits. Wood parenchyma predominantly apotracheal, or apotracheal and paratracheal (sometimes with a few cells around the vessels). Sieve-tube plastids S-type. Pith with diaphragms (occasionally), or without diaphragms.

Reproductive type, pollination. Fertile flowers hermaphrodite. Plants hermaphrodite.

Inflorescence, floral, fruit and seed morphology. Flowers solitary, or aggregated in ‘inflorescences’. The ultimate inflorescence unit (when flowers aggregated) cymose, or racemose. Flowers small to medium-sized (usually), or large; regular to somewhat irregular. The floral irregularity, when noticeable, involving the androecium. Flowers partially acyclic. The perianth acyclic, or the androecium acyclic, or the perianth acyclic and the androecium acyclic. Floral receptacle not markedly hollowed. Free hypanthium absent. Hypogynous disk absent. Perianth with distinct calyx and corolla; (5–)10(–25). Calyx (3–)5(–20); polysepalous; fleshy, or non-fleshy; persistent; spirally imbricate. Corolla (2–)5; polypetalous; imbricate, or crumpled in bud (often); white, or yellow; deciduous (often conspicuously caducous). Petals bilobed, or entire. Androecium 15–150 (usually), or 1–10 (rarely). Androecial members branched (usually — in that the numerous stamens often arise from 5–15 ‘trunks’), or unbranched; when numerous, maturing centrifugally (as a whole, or those within each cluster); free of the perianth; all equal to markedly unequal; free of one another, or coherent (often united basally); when clustered 1 adelphous, or 5–15 adelphous.
Androecium exclusively of fertile stamens, or including staminodes. Stamens 1–10 (rarely), or 15–150 (usually ‘many’); reduced in number relative to the adjacent perianth to diplostemonous to polystemonous. Anthers usually basifixed, or adnate; dehiscing via pores to dehiscing via short slits (apically), or dehiscing via longitudinal slits; introrse, or latrorse; tetrasporangiate. Endothecium developing fibrous thickenings, or not developing fibrous thickenings. Anther epidermis persistent. Microsporogenesis simultaneous. The initial microspore tetrads tetrahedral, or isobilateral, or linear. Anther wall initially with more than one middle layer. Tapetum amoeboid, or glandular. Pollen grains aperturate; 2(–4) aperturate; colpate, or colporate, or rugate, or spiraperturate; 2-celled. Gynoecium (1–)2–7(–20) carpelled. The pistil when syncarpous, (2–)5(–7) celled. Gynoecium apocarpous to syncarpous; eu-apocarpous to semicarpous (usually), or synovarious (rarely); superior. Carpel fully closed, or incompletely closed; stylate; apically stigmatic; 1–100 ovuled (i.e. to ‘many’). Placentation when apocarpous marginal, or basal. Ovary when syncarpous (2–)5(–7) locular. Styles as many as G; when carpels connate, free. Stigmas wet type; non-papillate; Group IV type. Placentation when syncarpous axile, or basal. Ovules when syncarpous, 1–20 per locule; ascending; apotropous; with ventral raphe; usually arillate; anatropous to amphitropous (with zigzag micropyle); bitegmic; crassinucellate. Outer integument contributing to the micropyle. Embryo-sac development Polygonum-type. Polar nuclei fusing prior to fertilization. Antipodal cells formed; 3; not proliferating; ephemeral. Synergids pear-shaped, or hooked (Hibbertia). Endosperm formation nuclear. Embryogeny onagrad. Fruit non-fleshy; an aggregate, or not an aggregate. The fruiting carpel when apocarpous a follicle, or an achene, or baccate (?). Fruit when syncarpous dehiscent, or indehiscent (and then enclosed in the fleshy calyx); a capsule, or capsular-indehiscent; enclosed in the fleshy receptacle, or enclosed in the fleshy hypanthium, or without fleshy investment. Seeds copiously endospermic. Endosperm oily. Embryo well differentiated (very small). Cotyledons 2. Embryo achlorophyllous (1/1); straight.

Physiology, biochemistry. Not cyanogenic. Alkaloids absent (usually), or present (then not benzyl isoquinoline). Iridoids not detected. Proanthocyanidins present; cyanidin and delphinidin. Flavonols present; quercetin, or kaempferol, quercetin, and myricetin. Ellagic acid absent (3 species, 2 genera). Saponins/sapogenins absent. Aluminium accumulation not found. Sugars transported as sucrose, or as oligosaccharides + sucrose (in Dillenia). Anatomy non-C4 type (Dillenia).

Geography, cytology. Temperate (warm), or sub-tropical to tropical. Pantropical and subtropical, and all Australia. X = 4, 5, 8, 10, 12, 13.

Taxonomy. Subclass Dicotyledonae; Crassinucelli. Dahlgren’s Superorder Malviflorae; Dilleniales. Cronquist’s Subclass Dilleniidae; Dilleniales. APG 3 core angiosperms; core eudicot; unplaced at Superordinal level; Order Dilleniales.

Species 400. Genera 11; Acrotrema, Curatella, Davilla, Didesmandra, Dillenia, Doliocarpus, Hibbertia, Pachynema, Pinzona, Schumacheria, Tetracera.
<urn:uuid:ff847eac-0c0a-4492-8c9d-aa841adc65ce>
CC-MAIN-2022-33
http://computerizedtextiledesigns.com/Dilleniaceae.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00205.warc.gz
en
0.741539
2,066
3.296875
3
School should start and end later because sleep deprivation in adolescents and children decreases both the ability to focus in the classroom and the drive to learn, often resulting in poorer performance and decreased life satisfaction. Recent studies have shown that delaying school start times results in students who sleep longer, wake more rested, function better in school, and have higher overall life satisfaction. While adults may be able to habitually generate energy from a caffeine drink, children and youth find there is no good substitute for extra sleep when it comes to energy and enthusiasm.

Most of today’s youth do not get enough sleep. “Adolescents between the ages of thirteen and eighteen should regularly get eight to ten h[ours] of sleep each night” (Carskadon et al., 2002, as cited in Chan et al., 2018). However, with all of the after-school activities, little-league sports teams, and general guidelines of how to have a healthy child, sleep becomes an item often placed on the backburner. The issue is no longer students falling asleep during long lectures in class; the issue has grown into parents having daily struggles to get their children out of bed on time for the start of school. In an article entitled “Feasibility and Emotional Impact of Experimentally Extending Sleep in Short-Sleeping Adolescents,” Tori R. Van Dyk et al. (2017) state that “The day-to-day life of an adolescent is significantly different during the school year, and it is possible that the schedules and stressors unique to this period (e.g., school attendance, homework, increased peer interaction, athletic commitments) could have a distinct impact on sleep and its associated outcomes such as diet, physical activity, attention, learning, and mood” (p. 1). However, education board members and teachers alike have rarely acknowledged the issue. Over the last decade, agencies have conducted studies and found that our youth are starving for rest. A drastic increase in media availability, longer daily schedules with increased activities, and other factors mean that our youth do not “rise and shine” like they used to.

To understand why sleep is so important, perhaps we should start with an explanation of the sleep cycle. As explained by Laura Leahy (2017) in her article “In Search of a Good Night’s Sleep,” sleep is not a simple process. Five stages make up a complete sleep cycle, and each stage offers something different in bettering an individual’s sleep quality and overall health. The first stage lasts five to fifteen minutes, or five percent of the total sleep cycle, while the second stage lasts approximately forty-five minutes. The third and fourth stages of the sleep cycle are deeper in sleep intensity, and it is more difficult to wake the individual. In these stages, the body begins to restore tissue, strengthen the immune system, and build and store energy for the next day. Lastly, the fifth stage occurs almost ninety minutes after first falling asleep. The brain becomes more active, and heart rate, blood pressure, and breathing increase (pp. 20-21).

When one does not complete all five steps in the cycle (approximately eight hours of sleep), one runs the risk of permanent damage from sleep deprivation. Damage from a lack of adequate sleep can occur instantly, or it can slowly cause harm over time. Sleep helps the brain work properly and focus. While sleeping, the brain is preparing for the next day by forming new pathways to help us learn and remember information.
Whether learning English, how to play football, or how to drive a motorcycle, sleep helps enhance focus. Adequate sleep also helps one be more creative, pay attention, and make decisions. All of those are skills necessary for children to have in order to have a successful grade school experience.

In a study conducted by both the University of Hong Kong and the Education University of Hong Kong, researchers found an increase in sleep duration associated with improvement in health and psychological outcomes. This study recruited a total of two hundred and twenty-eight eleventh-grade participants across two groups of students from a boarding school in Hong Kong. Participants were told to complete an online survey by their teachers. Self-reported sleep time in the past month was measured by answering the question of how many hours of sleep they get per night. Participants reported their bedtime and wake-up times for one month. Students were then asked to rate their life satisfaction, perceived health, sleepiness, behavioral problems, and insomnia for two weeks. Ultimately, students reported higher life satisfaction and fewer behavioral problems after obtaining one more hour of sleep each night (Chan et al., 2018).

In another study, the Cincinnati Children’s Hospital Medical Center in conjunction with the University of Cincinnati recruited seventy-six healthy high school students ages fourteen to eighteen years who routinely slept five to seven hours on school nights. They advertised using community flyers, online advertisements, and emails sent within a large regional area. Families interested in participating were verbally consented and mailed actigraph wristwatches with baseline condition instructions. Participants were asked to increase time in bed on school nights by one and a half hours per night. Results showed that adolescents would greatly benefit emotionally from sleeping longer (Van Dyk et al., 2017).

A growing amount of evidence indicates that sleep deprivation not only causes weariness but also introduces many physical changes, including decreased immune system effectiveness, an increased chance of weight gain, and deteriorating vision. Beyond that, sleep deprivation can affect daily performance by impairing memory, lengthening reaction times, reducing precision, and causing micro-sleep episodes during wakefulness, where people fall asleep briefly without realizing it. “Napping, whether intentional or unintentional, will occur. Unintentional napping may happen during classes or while students are working. That is bad enough, but they may also fall asleep while driving… Sleep deprived people experience periods of micro-sleep.” (Wise, 2018, p. 200) Lasting only a few seconds, micro-sleeps, also referred to as micro-episodes, are commonly thought of by scientists as one of the most dangerous consequences of sleep deprivation. In a recent study conducted by Poudel, Innes, Bones, Watts, and Jones (2014) of twenty non-sleep-deprived adults who were asked to perform various repetitive tasks, it was established that fourteen of the subjects had a minimum of thirty-six micro-sleeps during the fifty-minute testing period (as cited by Wise, 2018, p. 196). Imagine how many micro-episodes one child could have during an hour-and-a-half class period while sleep deprived. Would the child really be learning anything at all?

Not everyone is in favor of a delayed school day, however.
Many people worry that a delayed school day will result in a cascade of other delays or interruptions in normal daily activities, as well as fiscal problems. Things like school bus routes, parent work schedules, and after-school activities are reasons given by opponents. Multiple-child families could be hardest hit by school day delays, especially if their children attend multiple schools and the schedules change independently. In families with multiple children, the older children are often looked upon to be caregivers for their younger counterparts, and a later school start time for one sibling could hamper those efforts. In many families today, both parents work to make ends meet financially, so they look to older siblings to take care of the younger ones or to also get some after-school employment. A change in school schedules could adversely affect both of those situations.

Changes in school start and end times could also wreak havoc on transportation needs. As mentioned before, most families are in a situation today where both parents work to make ends meet and are dependent on a set schedule that allows them to drop off their children at school as they head to work. Should the school schedule change to a later time, the parents would have to seek alternative, often costly, transportation means. Also, changing school start and end times might result in a logistical nightmare with school bus routes. School districts seldom operate on sufficient budgets and use buses multiple times each day at multiple schools to transport children. Interrupting that schedule could cause cascading problems.

Still, many health professionals today agree that starting school later would be very advantageous for our youth. Health problems and education issues resulting from a continued lack of sleep outweigh perceived inconveniences in scheduling and resources. One thought is that children with better sleep habits will need fewer resources and will function better to the degree that time and resources are saved, not lost. Some school systems have tried a later school start and found positive results. Will later school start times be beneficial to everyone? Probably not, but they could help many and deserve a fair chance to see what might happen.

Laura Leahy (2017) states in her article “In Search of a Good Night’s Sleep” that “A good night’s sleep is essential to overall physical, cognitive, and emotional well-being” (p. 19). How we feel when we are awake depends, partly, on what happens while we are sleeping. The body is working to maintain physical health and support accuracy in the brain. In children, sleep also helps support growth. With that said, children often are not allowed an adequate sleep time due to the early start time of grade schools. Moving the start and end times of school an hour later will support our children’s growth and well-being, both physically and emotionally.

- Chan, C., Poon, C., Leung, J., Lau, K., & Lau, E. (2018). Delayed school start time is associated with better sleep, daytime functioning, and life satisfaction in residential high-school students. Journal of Adolescence, 66, 49-54.
- Dyk, T. R., Zhang, N., Catlin, P. A., Cornist, K., Mcalister, S., Whitacre, C., & Beebe, D. W. (2017). Feasibility and Emotional Impact of Experimentally Extending Sleep in Short-Sleeping Adolescents. Sleep. doi:10.1093/sleep/zsx123
- Leahy, L. G. (2017). In Search of a Good Night’s Sleep. Journal of Psychosocial Nursing and Mental Health Services, 55(10), 19-26. doi:10.3928/02793695-20170919-02
- Orzeł-Gryglewska, J. (2010). Consequences of sleep deprivation. International Journal of Occupational Medicine and Environmental Health, 23(1), 95-114.
- Wise, M. J. (2018). Naps and Sleep Deprivation: Why Academic Libraries Should Consider Adding Nap Stations to their Services for Students. New Review of Academic Librarianship, 24(2), 192-210. doi:10.1080/13614533.2018.1431948
<urn:uuid:89c3020f-5dbf-4cf9-a773-d29d1f302824>
CC-MAIN-2022-33
https://paperprompt.com/consequences-of-sleep-deprivation-on-students/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571483.70/warc/CC-MAIN-20220811164257-20220811194257-00604.warc.gz
en
0.952246
2,661
3.84375
4
Iron smelting process
"Iron and steel" is really a combination of two terms, steel and iron, and ironmaking must be done before steelmaking. Iron is divided into pig iron and wrought iron; the carbon content of steel lies between the two.
- Pig iron: generally refers to an iron-carbon alloy with a carbon content of roughly 2 to 6.69%, also known as cast iron. In addition to carbon, pig iron contains silicon, manganese, and small amounts of sulfur and phosphorus. It can be cast but cannot be forged.
- Wrought iron: relatively pure iron refined from pig iron, with a carbon content below 0.02%, also called pure iron. Pure iron requires a very low content of impurity elements such as carbon, phosphorus, and sulfur; it is difficult to smelt, and its manufacturing cost is much higher than that of pig iron and steel.
Iron smelting process Table of Contents
- Iron smelting process
- Raw materials: iron ore, flux, fuel
- Physical and chemical process: reduction reaction at high temperature + slagging reaction
- Blast furnace products: pig iron + ferroalloy
Ironmaking methods mainly include the blast furnace method, the direct reduction method, and the smelting reduction method. The principle is that the ore is reduced to pig iron through physical and chemical reactions in a suitable atmosphere (reducing substances such as CO, H2, and C at a suitable temperature). Except for a small portion used for casting, most pig iron is used as raw material for steelmaking.
Blast furnace ironmaking is the main method of modern ironmaking and an important link in steel production. Because of its good technical and economic indicators, simple technology, large output, high labor productivity, and low energy consumption, iron produced by the blast furnace method accounts for more than 95% of the world's total iron output.
The blast furnace resembles a cylindrical shaft furnace. Its outer shell is made of steel plates, the inner wall is lined with refractory bricks, and the entire furnace stands on a deep concrete foundation. During blast furnace production, iron ore, coke, and flux (limestone) for slag-making are charged from the top of the furnace, and preheated air is blown in through the tuyeres around the bottom of the furnace. At high temperature, the carbon in the coke burns with the oxygen in the blast to generate hot reducing gases, mainly carbon monoxide with some hydrogen; as these gases rise through the furnace they strip oxygen from the iron ore, reducing it to iron. The molten iron produced is discharged through the taphole. The unreduced impurities in the ore combine with fluxes such as limestone to form slag, which is discharged through the slag notch. The gas produced is led out from the top of the furnace and used as fuel for hot blast stoves, heating furnaces, coke ovens, boilers, and similar equipment.
Raw materials: iron ore, flux, fuel
Naturally mined ore is difficult to use directly for blast furnace smelting, whether in terms of chemical composition or physical state. It must be crushed, screened, beneficiated, agglomerated, and blended so that it reaches the furnace with a high grade and a uniform, stable composition and particle size.
The gangue in the ore and the ash in the fuel contain some compounds with high melting points (for example, SiO2 melts at about 1625°C and Al2O3 at about 2050°C). They cannot be melted at blast-furnace smelting temperatures, so on their own they cannot be separated cleanly from the molten iron, and they make the furnace difficult to operate.
The purpose of adding flux is to form a low-melting slag with these high-melting compounds, so that the slag is completely liquid at blast-furnace smelting temperature and keeps enough fluidity to separate well from the metal and ensure the quality of the pig iron. According to its properties, flux can be divided into alkaline (basic) flux and acid flux; which one to use depends on the nature of the gangue in the ore and of the ash in the fuel. Since most of the gangue in natural ore is acidic and coke ash is also acidic, alkaline fluxes such as limestone are usually used, and acid fluxes are rarely needed.
The heat required for blast furnace smelting is obtained mainly from the combustion of fuel, and the fuel also acts as a reducing agent during combustion, so fuel is one of the main raw materials of blast furnace smelting. The most commonly used fuel is coke; anthracite and semi-coke are also used.
Physical and chemical process: reduction reaction at high temperature + slagging reaction
The purpose of blast furnace smelting is to reduce iron from iron ore while removing the impurities it contains. The most important steps in the whole smelting process are the iron reduction reaction and the slagging reaction. They are accompanied by a series of other complex physical and chemical reactions, such as the evaporation of water and volatiles, the decomposition of carbonates, the carburization and melting of iron, and the reduction of various other elements. These reactions proceed only at sufficiently high temperature, so fuel combustion is a necessary condition for the smelting process.
Combustion of fuel
Decomposition of the charge: evaporation of water and decomposition of crystalline water; removal of volatiles; decomposition of carbonates.
Reduction reaction in the blast furnace
In the blast furnace, iron is not reduced directly from the high-valence oxides; reduction proceeds from high-valence oxides to low-valence oxides, and then from the low-valence oxide to metallic iron. The reduction of iron relies mainly on carbon monoxide gas and solid carbon as reducing agents. Reduction by carbon monoxide is usually called indirect reduction, and reduction by solid carbon is called direct reduction. The overall reaction of indirect reduction is Fe2O3 + 3CO → 2Fe + 3CO2, and the overall reaction of direct reduction is FeO + C → Fe + CO.
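Expanding the overall reactions just given into their intermediate stages (standard textbook stoichiometry, supplied here for illustration rather than taken from this article's source), the reduction proceeds from the high-valence oxide down to metallic iron:
- 3Fe2O3 + CO → 2Fe3O4 + CO2
- Fe3O4 + CO → 3FeO + CO2
- FeO + CO → Fe + CO2 (indirect reduction, by gas)
- FeO + C → Fe + CO (direct reduction, by solid carbon)
Each step strips away a little more oxygen as the charge descends and the reducing gas rises through the furnace.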
Carbonization of iron
The iron reduced from the ore is a solid sponge with an extremely low carbon content, usually not more than 1%. Since CO decomposes at relatively low temperature, the deposited carbon is highly active, and when it comes into contact with iron it readily forms iron-carbon alloys. The solid sponge iron therefore begins to carburize at a relatively low temperature (400°C to 600°C). The chemical reactions are:
- 2CO + 3Fe → Fe3C + CO2, or 3Fe (liquid) + C (solid) → Fe3C
Slag making process
The slagging process is the process in which the gangue in the ore and the ash in the fuel combine with the flux and are removed from the blast furnace. There are two common slagging situations in blast furnaces. When ordinary acid ore is smelted, the flux is charged into the blast furnace as limestone, and the CaO in the flux cannot at first come into close contact with the acidic oxides in the ore. The primary slag is therefore formed mainly from SiO2, Al2O3, and a portion of the FeO, combining as Fe2SiO4 (fayalite). Because of the FeO it contains, the melting point of this primary slag is lowered and its fluidity improves. As the slag descends (a process during which its temperature also rises), the FeO it contains is gradually reduced and lost while the CaO content increases correspondingly, and the final slag flows into the hearth. When smelting self-fluxing ore, the ore itself contains more CaO in good contact with the acidic SiO2, so CaO takes part in the slagging reaction from the very start of smelting; this is especially true when self-fluxing sinter is used, because CaO, SiO2, and Al2O3 already combine into slag during the sintering process. The primary slag from such ore therefore has a higher CaO content, and the slag composition changes less as it descends.
Blast furnace products: pig iron + ferroalloy
The main products of blast furnace smelting are pig iron and ferroalloys; the by-products include slag, gas, and furnace dust. Pig iron is an iron-carbon alloy with a carbon content of more than 2% that also contains impurities such as Si, Mn, S, and P. According to its use and composition, pig iron is divided into two categories. One is steelmaking pig iron, in which the carbon exists in the form of a compound and whose fracture surface is silvery white, hence the name white iron. The other is cast iron, used directly to make machine parts. An alloy of iron with another metal or non-metal is called a ferroalloy (sometimes also called alloy pig iron). There are many types of ferroalloys, including ferrosilicon, ferromanganese, ferrochrome, ferromolybdenum, and ferrotungsten.
Slag, furnace gas and furnace dust
Slag, furnace gas, and furnace dust are by-products of the blast furnace; they were previously discarded as waste and are now widely used, for example as building materials.
Source: Yaang Pipe Industry (www.epowermetals.com)
<urn:uuid:dee3a744-ad0a-4a7a-b834-440d681928df>
CC-MAIN-2022-33
https://www.epowermetals.com/iron-smelting-process.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00204.warc.gz
en
0.934029
2,257
3.421875
3
Professor Dennis Yi Tenen, Columbia University
In this interview, we speak to Professor Rishi Goyal and Professor Dennis Yi Tenen about their latest research into COVID-19 vaccine hesitancy and what we can do to increase confidence in the vaccines.
Please can you introduce yourself and tell us what inspired your latest research into COVID-19 vaccine hesitancy?
I am Rishi Goyal, a professor of comparative literature and an emergency medicine physician. My colleague Dennis Tenen is a professor of English and media studies and a former software engineer. We first began to study vaccine hesitancy in late 2019 because at around that time the United States was in danger of losing its measles eradication status. This possibility shocked us both. Vaccines are one of the few incontrovertible success stories in medical science, and yet a disease for which we had a preventative solution was reappearing because people wouldn't get vaccinated. We also thought that the phenomenon of vaccine hesitancy was understudied and misunderstood and would benefit from an interdisciplinary approach that looked at root causes.
Vaccine hesitancy has been described by the WHO as one of the top ten threats to global health. How crucial is the COVID-19 vaccine in helping to curb the spread of the virus and achieve herd immunity?
Along with masks and social distancing, vaccines are the most important medical intervention in our struggle to curtail the devastating spread of COVID-19. COVID-19 is unlikely to disappear anytime soon, but the combination of natural immunity and vaccine-induced immunity will lead to herd immunity if we can get the vaccination rates up.
Over the past few decades, we have seen an increase in vaccine hesitancy, with some people refraining from vaccines altogether. What has led to this increase, and what needs to be done to help increase education around vaccination?
One of the first mistakes in thinking about vaccine hesitancy is to treat it as monolithic. Different people are vaccine-hesitant for different reasons. Some are concerned that the materials in the vaccine may conflict with their religious views. Others have a strained relationship with the medical system because of years of neglect and poor care. Still others are concerned with personal and bodily autonomy. Vaccine hesitancy has been around since the first compulsory vaccine laws in colonial India and China in the 19th century. In the years before COVID, vaccine hesitancy was most commonly expressed as a response to the mistaken idea that vaccines could cause autism. However, during COVID-19, the forms of vaccine hesitancy have multiplied, often serving as a proxy for other contested cultural domains.
Although much research has been conducted surrounding the barriers to vaccine uptake, the basis for vaccine hesitancy is poorly understood. Why is this?
Most existing data on vaccine hesitancy comes from surveys, which have been limited in terms of understanding individual psychology and emotion. Just as large political polls have become more difficult to rely on in recent years, surveys offer an incomplete picture of the complexity of vaccine hesitancy. While there are some demographic and political patterns in vaccine hesitancy, it really cuts across race, gender, age, and class. There are rich vaccine-hesitant people and poor ones.
There are white vaccine-hesitant people and there are black and Latino vaccine-hesitant people.
What role does social media play in vaccine hesitancy?
Social media is amplifying vaccine hesitancy, especially amongst people who feel left out or unheard by mainstream medicine. One of the more interesting trends we noted was vaccine-hesitant posts by people who felt a need to chronicle idiopathic symptoms that they attribute to COVID-19 or to vaccines. Symptoms like flushing or tingling are often misdiagnosed or ignored by health care professionals. Many of these people have concerns and want to be heard, and social media can create a sympathetic space where they can share their feelings, hesitations, or concerns. But social media can also be nefarious. Sometimes bad actors, whether individuals like "the disinformation dozen" or nation-states, spread misinformation about the COVID vaccine to destabilize and disrupt everyday life. Many of the misinformation and disinformation campaigns are particularly sticky: they offer emotional content that people tend to click on and forward more than they do official sites like the CDC or the WHO.
You are currently involved in a Columbia World Project surrounding COVID-19 vaccine hesitancy. Can you tell us more about this project and what its aims are?
Our project is building and analyzing the largest database of online vaccine-hesitant language. We are using data-science tools such as machine learning to reveal language patterns that help distinguish the different kinds of vaccine hesitancy, such as those based on political grounds or on distrust of governments and corporations. But our analysis also isolates small-scale linguistic features like the use of metaphors, repetition, humor, voice, and other figures of speech. We approach language as data and believe that only by analyzing the language of vaccine hesitancy will we understand it and be able to suggest ways to inspire vaccine confidence.
Please can you tell us how this project is being carried out and what you hope to discover?
We have been collecting messages from a range of internet sites, including natural parenting forums, Reddit subgroups, Facebook pages, and even the conservative social media app Parler. Our lab members, who include undergraduates, post-docs, medical humanities fellows, and computer scientists, have been reading many of these posts by hand as well as visiting anti-vax rallies and campaigns to develop hypotheses and ideas. Alongside this more traditional approach, we are developing algorithms to analyze the database as a whole. We are performing basic analytics like word counts and frequencies, but also more complicated natural language processing analytics like topic modeling and sentiment analysis.
The ongoing COVID-19 pandemic has taught us how crucial collaboration is to the world. How can scientists, public health officials, and community leaders all work together to increase the public's confidence in the COVID-19 vaccine?
Complex problems require interdisciplinary solutions. COVID-19 is a problem for humanists and architects as much as it is a problem for physicians and vaccine manufacturers. Our current approach to problems is too siloed. We need to enable conversations across disciplines.
Do you believe that your approach could be applied to future vaccination strategies if successful?
Absolutely. We need to continue to listen to vaccine-hesitant chatter online so we can understand it and develop counter-messaging.
Over the last year, vaccine-hesitant language and logic have undergone rapid and significant shifts. It's important to monitor and respond to these changing patterns.
What are the next steps for your research into COVID-19 vaccine hesitancy?
We are currently developing a precision public health messaging strategy derived from the data we have collected and analyzed. We are actively crafting messages and are working with two public health partners, the Maine CDC and the Ulster County Department of Health, to test their efficacy in the real world.
Where can readers find more information?
- Increasing COVID-19 Vaccine Confidence, a Columbia World Project
About Professor Rishi Goyal
Rishi Goyal, MD, Ph.D., is Director of the Medical Humanities major at the Institute for Comparative Literature and Society at Columbia University, Attending Physician in the Department of Emergency Medicine at Columbia University Irving Medical Center, and Visiting Professor at the University of Southern Denmark in Odense. Professor Goyal completed his residency in Emergency Medicine as Chief Resident while finishing his Ph.D. in English and Comparative Literature. His research interests include the health humanities, the study of the novel, and medical epistemology. His writing has appeared in The Living Handbook of Narratology, Aktuel Forskning, Litteratur, Kultur og Medier, and The Los Angeles Review of Books, among other places. He is a Co-Founding Editor of the online journal Synapsis: A Health Humanities Journal, Co-Founding Director of the Health Language Lab, and a recipient of a National Endowment for the Humanities grant. He is currently leading the Columbia World Project Increasing COVID-19 Vaccine Confidence with Professor Dennis Yi Tenen.
About Professor Dennis Yi Tenen
Dennis Yi Tenen is an associate professor of English and Comparative Literature at Columbia University. His teaching and research happen at the intersection of people, texts, and technologies. A long-time affiliate of Columbia's Data Science Institute and formerly a Microsoft engineer and a Berkman Center for Internet and Society Fellow, his code runs on millions of personal computers worldwide. Tenen received his doctorate in Comparative Literature at Harvard University under the advisement of Professors Elaine Scarry and William Todd. A co-founder of Columbia's Health Language Lab and the editor of the On Method book series at Columbia University Press, he is the author of Plain Text: The Poetics of Computation (Stanford University Press, 2017). His recent work appears in Modern Philology, New Literary History, Amodern, boundary2, Computational Culture, and Modernism/modernity, on topics that include literary theory, the sociology of literature, media history, and computational narratology. His next book concerns the creative limits of artificial intelligence.
<urn:uuid:8b2ff23b-b058-416f-a5d0-c608dff18b36>
CC-MAIN-2022-33
https://dominicjonesjewelry.com/online/buy-cheap-imdur-usa/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00005.warc.gz
en
0.939287
2,170
2.671875
3
The rock garden is essentially an informal feature, an attempt to reproduce the natural setting of alpine plants, and should not be sited too near formal flower beds. The design of a large rock garden calls for expert knowledge and is best entrusted to a specialist contractor, but a modest rockery in a small garden can be built successfully by an amateur who pays attention to its design, materials, and construction. Often there is little choice in the matter of site, and it is true that a rockery may succeed in almost any location and aspect, but most rock plants need the fullest possible sunshine. At all costs avoid siting the rockery under large trees or where there are draughts. Drainage must be carefully considered, since alpines are liable to die in a waterlogged soil. On heavy clay soil it is worthwhile stripping off the topsoil first and using a layer of clinker or other porous material as the foundation. This is covered with subsoil mixed with similar material and built up roughly to shape; then the topsoil is used for the surface. A useful tip is to plan the rockery around a key feature, such as a stone outcrop or a slope, as illustrated stage by stage here.
As for material, one cannot do better than natural rock, preferably of local stone, and some examples are shown. They include Delabole slate from Cornwall and sandstone from Sussex. Weather-worn Westmoreland stone is seen at an early stage of construction of a rockery in a woodland setting, and the same material, worn by water, in a pretty cascade. Rustic slate is also a good option to use in the rock garden. Cotswold stones make a classy statement in any plot. These shingles are ideal for constructing gardens featuring rocks or for building dry stone walls, and they also look striking as feature stones dotted around to add visual interest. Cotswold stones are not fish friendly, since they have a limestone content. White cobbles look really spectacular and help give an amazing view in any plot; they contrast successfully with darker rocks and give a pleasing effect. Note that these too are not fish friendly because of their limestone content. Grey sparkly stones are remarkable with their soft grey shades and twinkle when sunshine falls on the flecks in the rock. They produce a subtle, distinctive effect in any rock garden or water feature: use them to construct a hardy rock garden, place them in a water feature, or use a small number of them to add impact and height to planted beds. Blocks of Cheshire red sandstone are ideal for a beginner making a rock garden, especially for small sandstone projects; this stone has been used for numerous buildings throughout England.
The building and planting of a rock garden depend very much on individual requirements, but the example given here shows the general principle. A natural effect has been obtained by constructing outcrops of stone in an irregular mound and planting alpines between the boulders. The rocks are tilted so that rain is diverted to the plant roots, and at least half of each is embedded in the ground. (Each rock should be tested to ensure that it will bear one's weight when being laid.) Only large rocks have been used in the construction, as small pieces do little to help the effect. The topsoil used is mixed with about half of its volume of leaf-mould or peat and with plenty of coarse sand.
For example, we can create a rock garden containing easy-to-grow rock plants that will succeed in most such situations. Iberis, or annual candytuft, is a good choice for a pocket of soil. Polygonum affine, 9 in. high, hangs over the stones and produces spikes of rosy flowers. Campanula garganica, only 3 in. high, has blue star flowers from June to August. Aubretia is excellent for ground cover, its carpet of red, pink, or purple flowers making a bright splash of color in spring; rooted in a moist pocket, it will rapidly spread. Cistus, sometimes called the sun rose or rock rose, is an evergreen shrub up to 2 ft. high; the large white flower has a crimson blotch at the base of each petal. Dianthus caesius, one of the rock garden pinks, is another good spreader and needs nothing more than well-drained soil and plenty of sun. Saxifrage and veronica also appear in the rock garden; both have many dwarf species which need little attention once established. Stonecrops will root anywhere without difficulty and require very little water once they are established; the evergreen Sedum spathulifolium forms a clump and has glaucous rosettes that produce 6 in. stems of yellow flowers, blooming June to August.
- Arabis albida, or rock cress, is a fine double white trailing plant with grey foliage;
- it is increased by cuttings and tends to spread rapidly;
- a single-flowered form has variegated leaves.
Shortia uniflora is an alpine for leafy, lime-free soils in almost complete shade; it has pale pink and white flowers in spring and grows about 6 in. high. Edelweiss is a hardy, silvery-haired perennial (Leontopodium alpinum) that thrives on an exposed sunny rockery; it needs sandy soil and flowers in June and July. Geraniums, or cranesbills, are vigorous little plants with attractive foliage and blue, purple, pink, or crimson flowers in early summer; there are several species, the one illustrated being the dwarf Geranium napuligerum.
- Gazania should be treated as half hardy and wintered under glass;
- its orange blooms (July-August) are showy;
- it is often seen growing on dry walls and it thrives in chalky soil.
Soldanella, or moonwort, is a dainty little alpine, up to 6 in. high, with blue or violet flowers in April; its foliage forms a creeping green mat, and it likes a well-drained sandy soil. Rock violas make bright splashes of color through the early summer; some prefer a shady nook and others thrive in the sun. Viola lutea is one of the easiest to grow, producing masses of delicate yellow or violet flowers. An unusual gentian, both for color and size, is Gentiana lutea. It has deeply ribbed foliage and tall spikes of citron-yellow flowers in late July; as it grows up to 6 ft. high, plant it in an out-of-the-way corner. Anemone pulsatilla is but one of the many delightful windflowers suitable for the rock garden; the silky mauve flowers appear in March on hairy stems 6 in. high, and it flourishes in semi-shade and prefers chalky soils. The saxifrages consist of some 300 species, grouped into sections and mostly suitable for the rock garden or alpine house. The Kabschia section forms compact cushions of silvery green leaves. It needs soil containing gritty loam and leaf mould, with limestone chips and shelter from very hot sun. Saxifraga burseriana, with enormous white flowers on red stems (February-March), is a Kabschia and one of the finest of all saxifrages. Another, very easy to grow, is Saxifraga apiculata, with primrose-yellow flowers on 3 in. stems (March-April).
The Porphyrion section, creeping and mat-forming, contains Saxifraga oppositifolia, which has prostrate foliage and carmine flowers (March-April). It prefers a gritty, porous soil in a cool, moist position, and will also grow on limestone. Rock garden pockets It is possible to grow many different kinds of plants in the pockets between rocks, since these can be filled with various kinds of soil to suit. Bulbs are always a good choice, as they provide color on the rockery from the end of winter onwards. Here a bold group of Crocus chrysanthus has been left undisturbed in its pocket to form a colony; many good varieties of this early-flowering crocus are available. Heavenly blue muscari, or grape hyacinth, makes a brave show in a pocket, its bright sky-blue flowers contrasting with the grey stone behind (April). This is another plant which, if left alone, will soon make a large group. The primula family is particularly useful in the rock garden: the many species vary in height from a few inches to 4 ft., and nearly every color is found. Most thrive in leafy pockets in sun or semi-shade.
<urn:uuid:6a378ae9-cb73-46aa-bc2e-9e5240e4c6ff>
CC-MAIN-2022-33
https://discover.hubpages.com/living/how-to-build-a-rock-garden
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00203.warc.gz
en
0.935694
2,347
2.765625
3
15th October, 2020
Ans: File-based and DB-based.
Ans: WSDL stands for Web Services Description Language. A WSDL document is written in XML; it describes a web service and specifies the location of the service and the operations (or methods) the service exposes.
Ans: SOAP is a simple XML-based protocol that lets applications exchange information over HTTP.
Ans: Transactional and non-transactional adapters.
Ans: An XML Schema describes the structure of an XML document.
Ans: <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.example.com/name" xmlns:target="http://www.example.com/name"> The targetNamespace declares a namespace for other XML and XSD documents to use when referring to this schema. The target prefix in this case refers to the same namespace, and you would use it within this schema definition to reference other elements, attributes, types, and so on that are also defined in this same schema definition.
Ans: We usually keep abstract WSDLs only in MDS.
Ans: The idea behind file-based repositories is to allow developers to have a light repository available in their local environment that can be easily adapted for development and tests; a file-based repository relieves developers of having to configure and maintain an external database while providing necessary functionality, such as file referencing and customizations. These kinds of repositories are easily modified and maintained, since they define a directory structure like any other directory structure inside an operating system. They can be navigated and altered using common shell commands or any kind of visual file explorer application. The file-based repository is usually located inside the Oracle JDeveloper home (JDEV_HOME/integration) if the default configuration is used.
Ans: This is a feature of the Oracle File and FTP Adapters that uses an invoke activity within a while loop to process the target file. This feature enables you to process arbitrarily large files. If an invalid payload is provided, ChunkedRead scenarios do not throw an exception. When a translation exception (a bad record violating the NXSD specification) is encountered, the return header is populated with the translation exception message, which includes details such as the line and column where the error occurred. Not all translation errors result in a fault; these errors are manifested as a value in the return header. You must check the jca.file.IsMessageRejected and jca.file.RejectionReason header values to ascertain whether an exception has occurred. Additionally, you can also check the jca.file.NoDataFound header value.
Ans: The Oracle File and FTP Adapters support polling multiple directories within a single activation. You can specify multiple directories in JDeveloper as opposed to a single directory. This is applicable to both physical and logical directories.
Ans: Database-based repositories are used in production environments where robustness is needed. These repositories are created using the Repository Creation Utility (RCU) application from Oracle. This utility helps with the creation of a new database schema with its corresponding tables and objects. Repositories can later be registered or deregistered via the Oracle Enterprise Manager Fusion Middleware Control console.
Ans: The adf-config.xml file is a configuration file that is used to store MDS configurations.
Ans: Oracle Mediator provides a lightweight framework to mediate between various components within a composite application.
Oracle Mediator converts data to facilitate communication between the different interfaces exposed by the different components that are wired together to build an SOA composite application.
Ans: OWSM stands for Oracle Web Services Manager. Oracle Web Services Manager offers a comprehensive and easy-to-use solution for policy management and security of the service infrastructure. It is a standalone platform for securing and managing access to web services.
Ans: The purpose of the echo option is to expose all of the Oracle Mediator functionality as a callable service without having to route it to any other service. For example, you can call an Oracle Mediator to perform a transformation, a validation, or an assignment, and then echo the result back to your application without routing it anywhere else. For synchronous operations with a conditional filter, the echo option does not return a response to the caller when the filter condition evaluates to false; instead, it returns a null response. The echo option is available for asynchronous operations only if the Oracle Mediator interface has a callback operation; in this case, the echo is run on a separate thread.
Ans: Yes, we can create custom OWSM policies.
Ans: Schematron is an XML schema language, and it can be used to validate XML content in an XML payload.
Ans: Read is used when polling is required, while SyncRead is used when you need to read a file in the middle of the flow, that is, when you want a synchronous (request-response) read.
Ans: By using the getFaultAsString() function.
Ans: Parallel rules only.
Ans: Only one.
Ans: Mediator has only one standard fault.
Ans: When a file contains multiple messages, you can choose to publish the messages in a specific number of batches. This is referred to as debatching. During debatching, the file reader, on a restart, proceeds from where it left off in the previous run, thereby avoiding duplicate messages. File debatching is supported for files in XML and native formats.
Ans: Below is the list of standard faults in BPEL.
Ans: There are many changes in Oracle SOA 11g compared to Oracle SOA 10g, in both business and technology terms, and new functionality has been added. Oracle SOA 11g is built around the Service Component Architecture, whereas Oracle SOA 10g has no Service Component Architecture. Oracle SOA Suite 10g is based on Oracle Application Server 10g. In SOA 10g, the ESB Console, BPEL Console, and Application Server Control are all separate and not well integrated. SOA 11g provides service monitoring across all SOA components, such as ESB, BPEL, and Human Workflow. SOA Suite 11g has the Enterprise Manager console, which is used to manage SOA Suite services, manage SOA Suite deployments, and review logs and exceptions.
Ans: Yes, we can apply one policy to all composites in one domain using policy sets.
The <xsl:import> element is a top-level element that is used to import the contents of one style sheet into another. Note: this element must appear as the first child node of <xsl:stylesheet> or <xsl:transform>. Syntax: <xsl:import href="URI"/>
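To make the <xsl:import> rule above concrete, here is a minimal sketch of an importing stylesheet; the file name common.xsl is only a hypothetical example:
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- xsl:import must come before any other top-level element -->
  <xsl:import href="common.xsl"/>
  <xsl:template match="/">
    <!-- Templates defined here take precedence over conflicting templates in common.xsl -->
    <xsl:apply-templates/>
  </xsl:template>
</xsl:stylesheet>
The practical difference from <xsl:include> is exactly that precedence: templates in the importing stylesheet override those pulled in through <xsl:import>.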
Ans: Call-template works similarly to the apply-templates element in XSLT. Both attach a template to specific XML data and provide formatting instructions for that XML. The main difference between the two is that call-template works only with a named template: you must give the template a 'name' attribute in order to call it when formatting a document.
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:call-template name="myTemplate">
    <!-- Content: xsl -->
  </xsl:call-template>
</xsl:stylesheet>
Ans: By default, polling by the inbound Oracle File and FTP Adapters starts as soon as the endpoint is activated. However, if you want more control over polling, you can use a file-based trigger. Once the Oracle File or FTP Adapter finds the specified trigger file in a local or remote directory, it starts polling for files in the inbound directory. For example, suppose one BPEL process is writing files to a directory and a second BPEL process is polling the same directory. If you want the second process to start polling only after the first process has written all of its files, you can use a trigger file: configure the first process to create the trigger file at the end, and the second process will start polling the inbound directory once it finds that trigger file.
Ans: This activity waits for the occurrence of one event in a set of events and performs the activity associated with that event. The events are often mutually exclusive (the process either receives an acceptance or a rejection message, but not both). If multiple events occur, the selection of the activity to perform depends on which event occurred first; if the events occur nearly simultaneously, there is a race, and the choice of activity to be performed depends on both timing and implementation.
Non-XA (local transaction): involves only one resource. When you use non-XA transactions, you cannot involve multiple resources (different databases, queues, application servers, and so on); you can roll back or commit a transaction for only one resource. There is no global transaction manager, since only one resource is involved at a time.
XA (global transaction): involves more than one resource (different databases, queues, application servers), all participating in one transaction. It uses two-phase commit to ensure that all resources either commit or roll back any particular transaction together. When you have a scenario in which you need to connect to, say, two different databases, a JMS queue, and an application server within the same unit of work, you use an XA transaction so that all resources participate in a single transaction.
Ans: Transactional and non-transactional adapters.
Ans: Inline schemas are XML schema definitions included inside XML instance documents. Like external schema documents, inline schemas can be used to validate that the instance matches the schema constraints.
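As a sketch of where inline schemas most often appear in SOA work, a schema is frequently embedded directly in the <wsdl:types> section of a WSDL rather than imported from a separate .xsd file; the element names and namespace below are invented purely for illustration:
<wsdl:types>
  <xsd:schema targetNamespace="http://www.example.com/order"
              xmlns:xsd="http://www.w3.org/2001/XMLSchema"
              elementFormDefault="qualified">
    <xsd:element name="OrderRequest">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="orderId" type="xsd:string"/>
          <xsd:element name="quantity" type="xsd:int"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>
  </xsd:schema>
</wsdl:types>
Whether the schema lives inline like this or in an external document, the same validation rules apply to messages that claim to conform to it.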
<urn:uuid:5a0e56e8-6324-4767-b345-3aca5f52084c>
CC-MAIN-2022-33
https://tekslate.com/interview-questions-on-oracle-soa-admin
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572304.13/warc/CC-MAIN-20220816120802-20220816150802-00205.warc.gz
en
0.854284
2,302
3.140625
3
Clear Lake through the years
Prior to 1871 – Mekis, Keeseekoowenin, and Okanase resided in the Noozaawinijing and near what is now called Clear Lake. Originally known as the Indians of Riding Mountain, the Riding Mountain Band, and the Keeseekoowenin Band, they are now known as the Keeseekoowenin Ojibway First Nation (KOFN), successors to the signatories of Treaty 2.
1896 – A fishing station was established at Clear Lake and designated as IR 61A for the KOFN.
1917 – Recreational use and cottage development begin at Clear Lake. At the time, Riding Mountain was a Dominion Forest Reserve.
1930 – Riding Mountain National Park is established.
1930 – With the establishment of the park, the fishing station at IR 61A for the Keeseekoowenin Band on Clear Lake is wrongfully removed.
1936 – Members of KOFN residing at IR 61A were forcibly removed from their homes.
1991 – After lengthy land claims negotiations, the fishing station IR 61A was returned to the KOFN.
1998 – The Ministerial Agreement to establish the Senior Officials Forum was signed; the forum continues to facilitate reconciliation and foster positive relationships between Parks Canada and KOFN. A cooperative agreement to manage the fishery and water of Clear Lake was a shared interest identified in the park management plan.
2004 – Drinking water treatment upgrade.
2007 – All internal combustion marine outboard motors used within the park are required to be either 4-stroke or direct-injection 2-stroke.
2008 – First phase of wastewater infrastructure improvements is completed, including replacement of the wastewater forcemain, increased capacity of the sewage lagoon, use of alum to precipitate phosphorus out of the effluent, and connection of more properties to the townsite's wastewater treatment system rather than septic systems.
2010 – 2012 – Wastewater treatment facility upgrade.
2010 – 2011 – Golf course maintenance compound contaminated site remediation.
2011 – Townsite washroom parking lot contaminated site remediation.
2013 – Water and sewer hookup in the Wasagaming Cabin Area.
2014 – Severely eroded shoreline along the northern edge of Clear Lake at the Aspen picnic area is remediated with a rock barrier.
2015 – Mandatory inspections for aquatic invasive species begin for all watercraft.
2016 – Boat launches consolidated as a result of the ongoing threat from aquatic invasive species, particularly zebra mussels.
2016 – New launch pads and new canoe/kayak launches installed at Boat Cove.
2018 – Stormwater infrastructure improvements to be completed.
2018 – Wasagaming Gas Station (122 Wasagaming Drive) contaminated site remediation.
2018 – In the 20th year since the signing of the Senior Officials Forum agreement, Parks Canada Agency and KOFN have a tentative agreement to cooperatively manage Clear Lake, work towards developing a KOFN Fisheries Management Plan, and work towards a Clear Lake Area Strategy.
A 2009 VIP survey found that many visitors entering Riding Mountain National Park were repeat visitors, many of whom have returned to the park for over 10 years. Clear Lake's clear and healthy water provides the perfect opportunity for beachgoers, anglers, and boaters to get outside and enjoy time in or on the lake. The lake provides a beautiful scenic backdrop for those who want to walk, hike, or bike around the lake, for picnickers at the various day-use areas around the lake, and for golfers at the nearby Clear Lake Golf Course. Cottage owners also see economic value in Clear Lake, as their property values are directly related to the proximity and health of the lake.
Clear Lake is a unique lake for the prairie parkland area. It is oligotrophic, meaning it contains low amounts of nutrients, which gives it clear water and makes it a suitable source of clean, fresh drinking water. Clear Lake is surrounded by the Boreal Plains ecozone, a mix of boreal forest and prairies, which acts as an important natural filter for pollution and nutrients. Clear Lake is key habitat for 14 fish species, including whitefish, northern pike, white sucker, walleye, and slimy sculpin. Whitefish are the most abundant large-bodied fish in Clear Lake and are usually found in the deeper parts of the lake along with slimy sculpin. Both of these species are considered ecological indicators of the health of the lake, as they require a well-oxygenated lake bottom to survive.
Importance of Clear Lake to Indigenous Peoples
Indigenous peoples have inhabited Noozaawinijing or Wagiiwing since time immemorial. It is sacred to the Anishinabe, and that sacredness is strongly tied to the waters of Clear Lake. The Keeseekoowenin Ojibway First Nation (KOFN) maintains authority and control of IR 61A on the northwest shores of Clear Lake. In 1896, a fishing station was established on Clear Lake for the Riding Mountain Band as signatories of Treaty No. 2. This fishing station was wrongfully included within the boundaries of RMNP when the park was established in 1930. IR 61A was returned to KOFN in 1991 after years of negotiated land claims. Today, cooperative efforts between KOFN and RMNP continue to respectfully monitor fish and their habitat in Clear Lake and work to foster positive relationships between Parks Canada and KOFN.
Parks Canada engages multilaterally with the Coalition of First Nations with an interest in Riding Mountain National Park at the Riding Mountain Forum (2006) about any number of initiatives involving waters in RMNP. Parks Canada also engages bilaterally with KOFN at the Senior Officials Forum (1998). Both forums convene regular meetings on shared interests and implement goals established in the Park Management Plans regarding the waters in Riding Mountain National Park. Some of the projects worked on in collaboration with KOFN and other Coalition First Nations include:
- Monitoring native cold-water fish species
- Monitoring the traditional Indigenous harvest of whitefish, northern pike, walleye, cisco, and other fish species in Clear Lake
- Developing sustainability targets for cold-water fish species
- Working towards the approval of a cooperative management agreement
- Archaeological work, including a dig at the Clear Lake Boat Cove as part of the Boat Cove redevelopment
A consensus among Riding Mountain National Park staff, stakeholders, and the public suggests that Clear Lake is very important, both ecologically and recreationally, and needs to be protected.
- The Clear Lake Watershed Coordination Team: Made up of Riding Mountain National Park staff with various specialties who are responsible for the planning and implementation of visitor service, research, monitoring and protection, and restoration initiatives.
- The Clear Lake Recreational Users Group: A group comprised of motorized and non-motorized recreational watercraft users.
- RM of Harrison/Park: Ongoing collaboration with the Rural Municipality of Harrison/Park on development reviews to ensure that residential and commercial development within the Clear Lake watershed does not negatively impact the lake ecology.
- Wasagaming Tenants Association: Represents the cottage, cabin, and commercial tenants in Wasagaming.
RMNP has also engaged, consulted, and communicated with stakeholders, neighbours in the Clear Lake watershed, and government agencies to enhance support for stewardship activities. These partners include the Riding Mountain Advisory Group, the Clear Lake Cottage Owners Association, the Clear Lake Cabin Association, Manitoba Sustainable Development, the Manitoba Conservation District Association, the Riding Mountain Biosphere Reserve, the Friends of Riding Mountain National Park, the RCMP, Fisheries and Oceans Canada, Transport Canada, and various academic institutions.
What we're doing to protect Clear Lake
Projects are ongoing at Clear Lake. The Clear Lake Conservation and Restoration (CoRe) Project (2014-2017) built on the previous Keeping the Clear in Clear Lake Action on the Ground I & II Project (2009-2012) and the Ecological Integrity Fund Project (2006-2009). The project was created to increase visitation by enhancing the visitor experience while focusing on preserving the ecological integrity of Clear Lake. It accomplished various goals over three years, including the re-establishment of fish habitat on Bogey Creek, upgrades at the Clear Lake Boat Cove, and the creation of interpretive programming such as canoe/kayak experiences and Learn to Fish.
Cleaner Motors, Cleaner Water: As a result of public consultation, Parks Canada decided in 2001 to move towards protecting park waters by permitting only cleaner marine motors. To reduce emissions within Riding Mountain National Park of Canada, all internal combustion marine outboard motors used within the park must be either 4-stroke or direct-injected 2-stroke engines, effective January 1, 2007. Older outboard motors, particularly conventional 2-strokes, can release up to 30% of their fuel unburned into the water or air via exhaust. Marine motors that produce lower emissions contribute to a cleaner and healthier environment.
Aspen Picnic Area Shoreline Remediation: Severely eroded shoreline along the northern edge of Clear Lake along the Lake Audy road was remediated in 2013 with a rock barrier that stabilizes the bank and dissipates wave energy that would otherwise wash away the exposed soil.
Grey Owl Landfill Site Restoration: This area of the park was a borrow pit used in building Highways 10 and 19 and was also used as a domestic waste dumping ground for 30 years. The site falls within the Clear Lake watershed and was rehabilitated to reduce the nutrients entering Clear Lake. Work was done to restore native vegetation cover to three hectares of degraded habitat, and the site will continue to be monitored.
Boat Cove Upgrade: With the boat launch closures at Frith Beach and the Spruces, the Boat Cove is receiving some upgrades. In 2016, new launch pads and canoe and kayak launches were installed, and improvements to parking and traffic flow were also completed.
Improving Access to Spawning Habitat by Restoring Connectivity
Restoring flow within Clear Lake tributaries is critical to ensuring that fish have access to spawning habitat. Over the years, these connections were cut off through human-made alterations and development. RMNP is working to re-establish these connections.
- Bogey Creek: Human-made alterations to Bogey Creek, including culverts and bridges, had detrimental impacts on fish health and spawning.
Significant improvements have been made over the years to fish habitat and riparian health along Bogey Creek at the Clear Lake Golf Course and the Wishing Well. Culverts and other structures that were causing issues with stream flow were removed or upgraded to improve flow dynamics. After these changes, several hundred white suckers were able to make it through the large culvert up past Highway 10. Bogey Creek is now on track to being restored as a more naturally flowing system, which should increase fish habitat quality and spawning areas.
- Octopus Creek/Ominik Marsh/South Lake: Octopus Creek used to drain naturally and directly into Clear Lake until a dike was constructed, either in the 1930s or in the late 1950s/early 1960s, diverting it into South Lake via Ominik Marsh. As a result, access to the creek from Clear Lake is impaired for fish. RMNP hopes to establish a permanent connection to support pike spawning. Research on northern pike movement completed in 2010 found that South Lake is an important spawning and nursery area for pike. The research found that the pike travelled from Clear Lake to South Lake, presumably to spawn, when the isthmus between the two lakes opened in early spring, and that the majority returned to Clear Lake by the end of the summer, before the isthmus closed.
- Glen Baeg Creek: Work on Glen Baeg Creek started in 2006, when impediments to fish movement were removed from the culvert on the creek. Work was also done to build up the stream bed to permit fish movement through the culvert, followed by monitoring to ensure that the modifications were effective.
The seven remaining sub-basins include six creeks – Aspen, Spruces, North Shore, Picnic, and Pudge – that also flow into Clear Lake. The single outflow from Clear Lake is Clear Creek, also known as Wasamin Creek, which drains from the western end of the lake, flowing approximately 12 kilometres to the Little Saskatchewan River.
Water Quality Monitoring & Managing Nutrient Levels
RMNP has tested and monitored Clear Lake's water for around 40 years. These tests provide important information on water quality, nutrient levels (carbon, nitrogen, and phosphorus), and the trophic status of the lake. Managing and limiting nutrient inputs into Clear Lake is crucial to ensuring the health of the lake and its water quality. The greatest concern is for cold-water species such as cisco, whitefish, and burbot. These species are important "indicators" of the health of the lake because they favour the naturally cold and clear conditions of Clear Lake, and they are the most sensitive to declines in dissolved oxygen in the lower depths of Clear Lake, which is related to nutrient (phosphorus) levels. As the lake is fertilized by phosphorus, more algae grow, and the decomposing matter that settles on the lake bottom uses up oxygen in the water near the bottom, resulting in poor conditions for whitefish. Low phosphorus levels provide ideal habitat for these cold-water fish species. The main tests used to determine water quality are:
- Secchi disk readings to monitor water clarity (indicated by the depth to which light can penetrate the water)
- Chlorophyll a concentrations to measure algal abundance
- Total phosphorus concentration – continued high inputs can reduce water clarity and change the fundamental characteristics of the lake
- Fecal coliform counts – these indicate the safety of the water for recreational purposes
RMNP undertook the following actions to limit nutrient inputs into the lake:
- Infrastructure improvements to capture nutrients
- Rehabilitating and re-vegetating 0.5 km of degraded Clear Lake shoreline in the boat launch areas
- Cottage owners – working with and educating cottage owners on the importance of maintaining shoreline vegetation and limiting the use of products that could run off into the lake and increase nutrient levels
- North Shore cottage area – dialogue with cottagers began regarding illegal off-lot developments and activities involving land clearing, private docks, and overnight mooring
- East End & Aspen Day Use Area – the "no-mow" areas were restored along the shoreline and then widened from 2015 to 2017
Improvements to Wastewater Infrastructure
Extensive development along Clear Lake's shoreline (group camps, the Clear Lake Golf Course, cottage subdivisions) and in the Rural Municipality of Park (seasonal home and resort development) has necessitated improvements to the townsite's wastewater infrastructure. The first phase of improvements was completed in 2008:
- Replacement of the wastewater forcemain
- Increased capacity of the sewage lagoon
- Alum used to precipitate phosphorus out of the effluent
- More properties hooked up to the townsite's wastewater treatment system rather than using septic systems
A report in 2012 found that after the wastewater infrastructure overhaul, improvements were seen in water clarity as well as in phosphorus and algae levels. Dissolved oxygen levels are still a concern and can dip in late summer below levels able to support certain native fish populations (e.g. whitefish). Limiting nutrient (phosphorus) inputs is necessary to improve oxygen conditions in Clear Lake, and the situation has been improved by these upgrades.
Aquatic Invasive Species Prevention
Riding Mountain National Park is taking significant steps to prevent aquatic invasive species (AIS), such as zebra mussels, from entering waterways in the park. Those wishing to use Clear Lake for recreational purposes must have all motorized and non-motorized watercraft inspected (including boats, canoes, kayaks, sailboats, and paddleboards) and must possess the proper permits each year before launching. An enhanced aquatic invasive species surveillance and inspection program for RMNP will be implemented in 2018. For more information, please visit the Aquatic Invasive Species Prevention Program page. As a precautionary measure, the number of boat launch areas around Clear Lake was reduced from four to two; the Frith Beach and Spruces boat launches were closed in 2016, as they are harder to monitor. These conservation efforts help preserve the ecological integrity of Clear Lake, ensuring visitors can enjoy a clean lake for generations.
What's Happening in 2019?
Improvements to the Stormwater System
This project, expected to be completed in 2018, will include the installation of new infrastructure to redirect the stormwater system outflow away from Clear Lake. The outflow will be redirected to a catchment basin and treatment system incorporating filtration, settlement, and hydrocarbon/grease interception prior to discharge into a wetland. This project keeps unfiltered and untreated water out of Clear Lake, resulting in ecological integrity gains, reduced public health risks and beach area closures (through lowered fecal coliform counts), and reduced impacts on local businesses through the prevention of periodic flooding after significant precipitation events.
The 1997 Park Management Plan identified the Keeseekoowenin Ojibway First Nation's desire for a collaborative approach to management of the fishery and cooperative management of Clear Lake. In the 20th year of the Senior Officials Forum Agreement, the establishment of a cooperative agreement for Clear Lake (including the fishery) is nearing completion.

Ongoing Work with Stakeholders: In the coming months, Parks Canada will work with dock owners on a phased approach to reduce, over the next three years, the development pressures caused by unauthorized docks on Clear Lake. These actions will ensure that existing park regulations are followed and will identify basic additional recreational amenities for the North Shore area that will increase the enjoyment of Clear Lake for all.

Other Upcoming Projects: A new edition of the Riding Mountain Biosphere Reserve's "Living By the Water's Edge" aims to support stakeholder and partner stewardship of the lake.

Why should all of this matter to you? We need the support of all of our visitors, stakeholders, and cottage/cabin owners to ensure that Clear Lake stays healthy and vibrant. Here are the ways you can help keep the clear in Clear Lake!
- Visit Clear Lake and Wasagaming during your visit to the park and participate in interpretive programs
- Ensure compliance with annual mandatory watercraft inspections and boating regulations
- Enjoy the trails and avoid causing more disturbance in areas where erosion is noticeable
- Reduce or eliminate your inputs of litter and nutrients into the natural areas around Clear Lake and Riding Mountain National Park
- Abide by Parks Canada regulations regarding off-lot activities and developments adjacent to Clear Lake
In Windows, hit Windows+R. In the Run window, type cmd into the search box, and then hit Enter. At the prompt, type ping along with the URL or IP address you want to ping, and then hit Enter. Pinging www.howtogeek.com, for example, should return a normal series of replies.

The Windows ping command also accepts switches. To ping the destination 10.0.99.221 and resolve it to its host name, type: ping /a 10.0.99.221. To ping the same destination with 10 echo request messages, each of which has a Data field of 1000 bytes, type: ping /n 10 /l 1000 10.0.99.221. To ping that destination and record the route for 4 hops, type: ping /r 4 10.0.99.221.

How to use ping from CMD: to open the command prompt on Windows, press the Windows Key + R to open the Run window, type CMD and click Open. After opening the prompt, type the ping command and press Enter; the output shows that there are quite a few parameters you can adjust.

To use the ping command you go to the command line. On Windows XP and 7, choose Start Menu > Run and enter cmd to open a command prompt. On Windows 10, type cmd into the search box and select the command prompt from the displayed programs. You can use ping with an IP address or a computer/host name.

Ping is one of the commands most often used in network diagnostics. Using it in a command prompt, you can test the communications path from your computer to another device. Running a ping continuously, rather than with the default four packets, may help with troubleshooting connectivity issues.

Related network commands available at the CMD prompt in all versions of Windows, including Windows 10, include ipconfig /all, ipconfig /release, ipconfig /renew, ipconfig /displaydns, ipconfig /flushdns and ipconfig /registerdns, along with continuous ping, trace route and other local area network tools.

Ping is one of the most commonly used network commands and lets you ping a network IP address. Pinging an IP address helps determine whether the network card can communicate on the local network or an outside network. See the ping command page for further help on the MS-DOS and Windows command-line command.

You can also check your internet connection by sending ping packets to your default gateway; to find your default gateway address, you can use ipconfig.

An IT analyst's best friend is the ping command. Running this command sends test packets over the network to the target system. You can use ping to test whether your computer can access another computer, a server, or even a website, and it can help reveal network disconnections.

The ping command in Cisco IOS (and other operating systems) is used to test the accessibility of devices on a TCP/IP network. Cisco devices also support an extended ping command that allows you to perform a more advanced check of host reachability and network connectivity; with this command, you can define the source IP address as any IP address on the router, as well as the number and size of ping packets. In short, ping helps you test and analyse the connection between your computer and any other computer or server on a network.
Ping checks whether your computer has successfully connected to a particular server by sending a series of packets from your computer to the other computer or server and listening for the replies. The ping command helps identify a TCP/IP host's IP address and issues with the network, and assists in resolving them; see the ping definition for a full description.

One way of running a continuous ping is to press Windows + R and enter CMD, then enter the ping command with the -t option and any address and confirm with Enter, for example: ping -t 220.127.116.11. Windows then runs the command as a continuous ping in an endless loop.

Type CMD in the Run box and then hit Enter on your keyboard (in Windows Vista, type CMD in the search box and hit Enter). Once you are in the command prompt, type ping followed by the network resource's DNS name or IP address, for example: ping xxx.xxx.xxx.xxx.

To use ping in older versions of Windows, open an MS-DOS prompt (Windows 9x/Me) or command prompt (Windows 2000/NT) and type ping followed by the name or IP address of the computer whose network connectivity you want to test.

To use the ping command, you type ping followed by an IP number or a website name. It will show you whether a destination is reachable and how long it takes to get there. It works in Windows from the command prompt, in Linux from the terminal, and on a Mac from the terminal (or the Network Utility).

The ping utility is used to test connectivity to a remote machine and will indicate whether a remote server is accessible and responding. If the ping command indicates that a machine cannot be accessed, the other connectivity tests will also fail. On UNIX machines the ping utility is usually found in /usr/sbin and simply reports the result of the test.

The ping command is run from a Command Prompt window. To open it on XP, click Start, then Run, type CMD into the Open: text box and press OK; you can then, for example, ping google.com.

Some games expose ping settings too. In matchmaking, mm_dedicated_search_maxping limits searches to servers below a given ping; for example, mm_dedicated_search_maxping 60 only searches for servers to which your ping is lower than 60. You can change 60 to whatever value you like, but remember that the lower the number, the longer it may take to find a game.

Some chat bots also offer a ping command that shows the latency from the bot to the Discord servers; note that high latencies can be the fault of rate limits or of the bot itself, so it is not an absolute metric.

Ways to adjust network settings using cmd in Windows XP/7/8/8.1/10: search for cmd from the Windows logo (do not just run it), right-click cmd and select Run as administrator, then enter commands such as Netsh int tcp show global and press Enter.

Useful ping options include: ping [IP address] -t, which sends ping packets (ICMP echo requests) continuously to the target IP; ping -n 10 [IP address], which sends 10 echo requests; and ping -l 1500 [IP address], which sends echo requests with a size of 1500 bytes to the target IP.

The ping command allows you to send a signal to another device; if that device is active, it will send a response back to the sender.
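The same reachability check can also be scripted. The sketch below is only an illustration (not from any of the pages quoted here): it shells out to the operating system's own ping binary, using the Windows -n/-w flags described above and their rough Unix equivalents -c/-W, and it treats a zero exit code as "at least one reply received". Exit-code and timeout-unit details vary slightly between platforms.

```python
import platform
import subprocess

def ping(host, count=4, timeout_ms=1000):
    """Return True if `host` answers the system ping.

    Windows ping takes -n (count) and -w (timeout in ms);
    Linux/macOS ping takes -c (count) and -W (timeout, units vary by platform).
    """
    if platform.system().lower() == "windows":
        cmd = ["ping", "-n", str(count), "-w", str(timeout_ms), host]
    else:
        cmd = ["ping", "-c", str(count), "-W", str(max(1, timeout_ms // 1000)), host]
    # A return code of 0 normally means at least one echo reply came back.
    result = subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

if __name__ == "__main__":
    for target in ("127.0.0.1", "8.8.8.8", "www.example.com"):
        print(target, "reachable" if ping(target) else "unreachable")
```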
The ping command is built on ICMP (the Internet Control Message Protocol) and uses what is called an echo request.

Step 1: Open the Command Prompt or Terminal. Every operating system has a command-line interface that will allow you to run the ping command, and the command operates virtually identically on all systems. If using Windows, open the Command Prompt: click the Start button and enter cmd into the Search field (Windows 8 users can type cmd while on the Start screen), then press Enter to launch it. Step 2: Enter the ping command. Type ping hostname or ping IP address; a hostname is typically a website address.

To check the local network interface, use one of these forms: ping 0 is the quickest way to ping localhost – once you type this command, the terminal resolves the IP address and provides a response; ping localhost uses the name to ping the loopback interface.

Useful Windows ping options: -t pings the target until you stop it by pressing Ctrl-C; -n count sets the number of ICMP echo requests to send, from 1 to 4294967295 (if -n is not specified, ping sends 4 by default); -l size sets the size, in bytes, of the echo-request packet, from 32 to 65,527.

Ping is one of the most commonly used networking commands in Linux and other operating systems, and is mainly used to check whether a remote host is reachable. The remote host could be a web server, your router or a system on your local network. How does ping work? It sends small ICMP packets to the remote host and waits for the replies.

Some basic cmd commands, with descriptions: call – calls a batch file from another one; cd – change directory; cls – clear screen; cmd – start command prompt; color – change console colour; date – show/set date; dir – list directory contents; echo – text output; exit – exit the command prompt or a batch file; find – find files; hostname – display host name; pause – pause execution.

At the command prompt, you might type ping followed by your gateway address with -t, for example ping 192.168.0.1 -t. If you're only running one instance of ping, you could simply stop it with killall -INT ping. Alternatively, you could replace the ping command on the left side of the pipe with a command that runs a shell, reports the process ID of that shell, and then replaces that shell with the ping command (causing it to have the same PID).

Ping commands to a remote host might fail if there is a firewall between the two systems, even if the host is reachable using other commands. Ping commands to a remote host might also be unable to detect path MTU information if there is an IPSec tunnel at any point between the two systems, even if the host is reachable using other commands.

With some related tools, the changes that you make can be persistent or nonpersistent, depending on whether you use the -P switch. PathPing combines features of the ping utility and the tracert utility discussed earlier.

On the Windows command prompt, running ping -t against 10.21.11.81 produces output such as: Reply from 10.21.11.81: bytes=32 time=3889ms TTL=238, Reply from 10.21.11.81: bytes=32 time=3738ms TTL=238, and so on.

Ping test in Windows 7: open the Start menu by clicking the orb in the bottom left-hand corner of the screen, type cmd in the search bar at the bottom of the menu, and click cmd in the search results under Programs.

Use the ping command with the -a option and the IP address of the destination computer to find out the destination's host name; the computer name is shown along with the ping statistics in the terminal. With the -n <number> option you define the desired number of ICMP echo requests.
Con la opción -n defines el número deseado de solicitudes de eco ICMP Ping the web address you want to check. You can ping the nearest website server to see how far away the server is in milliseconds: Type in ping website.com where website is your website's name. Press ↵ Enter. Press ↵ Enter again to stop the ping FLAGS: --cmd Graph the execution time for a list of commands rather than pinging hosts -h, --help Prints help information -4 Resolve ping targets to IPv4 address -6 Resolve ping targets to IPv6 address -V, --version Prints version information OPTIONS: -b, --buffer < buffer > Determines the number pings to display. [default: 100] -n, --watch-interval < watch-interval > Watch interval seconds (provide partial seconds like '0.5') [default: 0.5] ARGS: < hosts-or-commands >.. So I wanted first to check the ping of Valorant in this ISP. Is there a cmd command for that like putting the IP of asia server then boom thats what you expect ms on in game? Or anyone here from Philippines who have the same ISP as mine? Thanks for your reply will appreciate it! :) 2 comments. share. save 13.ping The ping command helps to verify IP-level connectivity. When troubleshooting, you can use ping to send an ICMP echo request to a target host name or IP address. Use ping whenever you need to verify that a host computer can connect to the TCP/IP network and network resources Ping (stands for Packet Internet groper) is a popular command line tool to check network related issue. Every OS has this inbuilt. And basically, it tells you how long does it take for a data packet to travel from your computer to a server and back to your computer. More time it takes, slower is your connection Ping. One of the most used command to delay for a certain amount of time is ping. Basic usage. PING -n 1 -w 1000 18.104.22.168 REM the -n 1 flag means to send 1 ping request. REM the -w 1000 means when the IP(22.214.171.124) does not respond, go to the next command REM 126.96.36.199 is an non-existing IP so the -w flag can ping a delay and go to next command At the command prompt, ping the loopback address by typing ping 127.0.0.1. Ping the IP address of the computer. Ping the IP address of the default gateway. If the ping command fails, verify that the default gateway IP address is correct and that the gateway (router) is operational The workings of ping is to send an IP datagram to a host who will then receive a reply / response form of round trip time. In the process uses ICMP messages and ping echo reply. Below is an example of what I do ping between computers : Definition, Function and Explanation PING on Command Prom Now, enter the following line on the command prompt: for /f tokens=1 %a in (servers.txt) DO @ping -n 1 %a This will attempt to ping every system in the list and return the result. You can also modify the ping command at the end of the command above as needed with options pertaining to ping Ping is the primary TCP/IP command used to troubleshoot connectivity, reachability, and name resolution. You can use ping to test both the computer name and the IP address of the computer. If pinging the IP address is successful, but pinging the computer name is not, you might have a name resolution problem ping www.example.com -t. The IP address looks similar to xxx.xxx.xxx.xxx; Now type the following command: ping [ip address] -t -l 65500; Run the command for hours. If possible use multiple computers to run the same command at a same time When you open the console, type in any one of these commands. 
net_graph 1 is one of the most useful console commands of this kind; it displays a great deal of information on your screen, including your ping.

Hidden Windows programs: one author notes, nine years after writing the original article, that most of these commands are still relevant on Windows 10. The piece set out to quickly summarise the graphical and command-line programs that Windows XP includes but does not surface in its normal menus.

Another tip: right-click the search result titled cmd and click Run as administrator. One by one, type each of the following commands into the elevated Command Prompt, pressing Enter after each one: netsh int ipv6 isatap set state disabled and netsh int ipv6 6to4 set state disabled.

Commonly listed "hacking" cmd commands include ipconfig, ping, tracert, nslookup, netstat -an, net user and route print. ipconfig is used to find your computer's IP address; it reports the IP address, subnet mask, DNS servers, gateway, MAC address and a variety of other information.

ping command (UNIX): use the ping command to find out whether an IP connection exists for a particular host. The basic syntax is /usr/sbin/ping host [timeout], where host is the host name of the machine in question and the optional timeout argument indicates the time in seconds for ping to keep trying to reach the machine.

You can ping x.x.x.x (where the x's form the IP address); for example, you can ping 8.8.8.8, which will let you know if Google is live. Or, in case you do not know the IP address of the host, you can input the web address instead, for example: ping www.geeksgyaan.com. nslookup is another popular command, executed when trying to resolve a DNS name into an IP address.

Evaluate the ping. If it is unable to reach the server, the output will read "Ping request could not find host". If the command pings the server successfully, the output will look like this: Pinging www.l.example.com [12.34.567.890] with 32 bytes of data: Reply from 12.34.567.890: bytes=32 time=41ms TTL=5
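A scripted equivalent of the for /f loop quoted a little earlier might look like the sketch below; servers.txt is the same hypothetical file of one hostname or IP address per line, and the -n 1 / -w 1000 flags follow the Windows conventions described above (on Unix-like systems they would be -c 1 and a timeout flag instead).

```python
import subprocess

def host_is_up(host: str) -> bool:
    """Send a single echo request (Windows-style flags: -n 1, -w 1000 ms)."""
    completed = subprocess.run(
        ["ping", "-n", "1", "-w", "1000", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return completed.returncode == 0

# servers.txt is a hypothetical file with one hostname or IP address per line
with open("servers.txt") as handle:
    for line in handle:
        host = line.strip()
        if not host:
            continue  # skip blank lines
        print(f"{host:<30} {'reachable' if host_is_up(host) else 'no reply'}")
```

Like the batch loop, this only records whether each host answered a single probe; for anything beyond a quick sweep you would raise the count or capture the round-trip times as well.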
1. Kalaikuahulu: A skilled kakaolelo (orator) and moo kuauhau (genealogist). He defeated a genealogist from Bora Bora during a genealogy reciting contest in Lahaina (1805). He was an important member of the council by 1809. He was instrumental in aborting a plot to kill Kaumualii during the meeting with Kamehameha in Honolulu Harbor in 1810.

2. Nahili: A skilled alihikaua (war general) who was placed in charge of one of the "Flying Ends" (the warriors on either side of the main body formation in battle) during the battle of Nuuanu. He was also appointed sailing master of Kamehameha. He died in 1813.

3. Kalaimamahu: A descendant of Liloa and younger brother of Kamehameha. He was an important principal chief from the beginning of Kamehameha's rise to power. He was a pukaua (war chief) at the battle of Mokuohai and continued in the wars with rival chiefs. After the battle of Nuuanu, he was given the lands of Laluaokau, Pau, Waimanu and Laie. He died on Oahu.

4. KEAWE A HEULU KALUAPANA: Uncle and inner circle member of the Ka Aha Ula o Kamehameha Kunuiakea from the beginning, following the death of Kalaniopuu. For more information on this alii, please refer to the full bio on the individual portrait.

5. Kauakahikaha'ola: A renowned Kahuna from Kauai. He was a counselor for Kalaniopuu and later served Kamehameha in the same capacity. He was appointed early to the council and was also a moo kuauhau (genealogist).

6. KE'EAUMOKU PAPAIAHIAHI: Inner circle member of the Ka Aha Ula o Kamehameha Kunuiakea from the beginning, following the death of Kalaniopuu. He had the most turbulent personality of Kamehameha's sacred circle of advisors; trouble seemed to shadow him all his life. His father was Keawepoepoe and his mother was Kama`iku. His paternal line bestowed on him all the privileges extended to the royal house of Liloa of Hawaii, and through his maternal line came the royal blood of Kalani Piilani of Maui. His grandfather was Lonoikahaupu, ruler of Kauai, and he therefore inherited all the kapu and status of those royal ancestors. He was a true pukaua (warrior leader). He was tall and stately but had a quick temper and was very fierce in battle. He was a half brother to the kapu twins Kame'eiamoku and Kamanawa, who were all uncles to Kamehameha. He was also the father of Kamehameha's favorite wife, Kaahumanu. For more information on this alii, please refer to the full bio on the individual portrait.

7. Ka'aloa: He was one of the early appointed councilors of Kamehameha and previously served in that same capacity under Kalaniopuu. It is a measure of his ability and competence that he was asked to serve on the high councils of both great alii nui.

8. KAMEHAMEHA KUNUIAKEA: Kamehameha Kunuiakea (Kamehameha), called the great, is the most well known of all Hawaiian chiefs. His name is known worldwide, and his exploits and accomplishments are legion. It would be impossible to include even the highlights of his life in a bio-capsule such as this. For more information on this alii, please refer to the full bio on the individual portrait.

9. Keli'imaika'i: Kalanimalokulokuikapo'okalani (Keliimaika'i) was a younger brother of Kamehameha and one of his principal chiefs from the beginning. He is said to have been the favorite brother of Kamehameha. He was the father of Chief Kekuaokalani and Chiefess Ka'oanaeha, grandmother of Queen Emma Naea Rooke. After the battle of Nuuanu, he was given the lands of Kaneloa in Waikiki, Kona and Punaluu, Ko'olauloa, O'ahu. He died on O'ahu in 1809.
10. KEKUHAUPI'O: Kekuhaupi'o was the senior advisor to Kamehameha. Of the five members of the Aha 'Ula (the sacred red cord), symbolically the royal chiefly council tied together by blood, Kekuhaupi'o had the most influence on the life of the young Kamehameha. He was responsible for all the training of his young charge, including military science, martial arts, use of weapons, genealogy, farming, fishing, and physical training. For more information on this alii, please refer to the full bio on the individual portrait.

11. Kawelo'okalani: A younger brother to Kamehameha and one of his principal chiefs from the beginning, which began at the battle of Mokuohai in South Kona that resulted in the death of Kiwala'o.

12. KAME'EIAMOKU: Kapu twin, uncle and inner circle member of the Ka Aha Ula o Kamehameha Kunuiakea. Of all the councilors, the twins are the most closely related to Kamehameha. They were also half brothers to Kahekilinuiahumanu, Kalola, Kamehamehanuiailuau, Kauhiaimokuakama, Kekumanoha, Ke'eaumokuapaiahiahi, Keawema'uhili and Namahana, and their other relatives were listed among the who's who of Hawaiian aristocracy. When Kame'eiamoku and Kamanawa were living on Maui, their older brother Kahekili made them kapu and sent them to Hawaii to stay by Kamehameha's side and be his "kahu" (guardians). Kahekili is recognized as one po'olua father to Kamehameha; his other po'olua father was Keoua (half brother of Kalaniopu'u with the same mother) of Hawaii. The twins were instructed by Kahekili to protect, advise, guide and teach Kamehameha. They remained faithful to their young charge during the reign of Alapa'i and, after his death, during Kalaniopu'u's reign that followed. They continued serving well into Kamehameha's own rise to power and remained by his side, though their own deaths preceded the culmination of his conquests. For more information on this alii, please refer to the full bio on the individual portrait.

13. Hueo Kalanimoku: A grandson of Maui Alii nui Kekaulike and older brother to Kamauleule (Chief Boki) and Chiefess Wahinepi'o. He held the high office of Kalaimoku in the Aha Ula. He was also appointed Pukaua Nui (senior war chief) and Pu'uku Nui (treasurer, with the duty of dividing lands, foods, and gifts among the ali'i and makaainana on behalf of Kamehameha). He was the presiding chief on the island of O'ahu, and the laws determining life and death were also in his hands. He masterminded the takeover of the islands of Kauai and Niihau following the death of Kaumuali'i. He was sent by Liholiho to quash the uprising of Kekuaokalani and his followers at the battle of Kuamo'o and put down the last holdouts of the old religion. He died in 1827 in Kailua Kona.

14. KAMANAWA: See the Kame'eiamoku bio. The twins are the most recognizable and best known of all of Kamehameha's royal councilors. They are seen as heraldic supporters on the Royal Coat of Arms of the Hawaiian Monarchy from King Kamehameha III through the reign of Queen Lili'u'okalani: Kame'eiamoku holds the kahili and Kamanawa holds the spear. Kamanawa was also sent on multiple diplomatic assignments by Kamehameha. For more information on this alii, please refer to the full bio on the individual portrait.

15. HOLO'AE: A very famous and renowned high priest, he was a descendant of Pa'ao and Piilani and was the Kahuna Nui of Kalaniopuu when Capt. James Cook landed at Kealakekua Bay. Holo'ae was a brother to Chiefess Kanekapolei, mother of Kalaniopu'u, and Kaukoko, the father of Kekuhaupi'o.
He served as Kamehameha's Kahuna Nui and performed rituals at the request of Kamehameha to determine the will of the ancestors on what turned out to be the last day of battle at Mokuohai, which resulted in the death of Kiwala'o. He was assisted that day by his daughter Pine, wife of Kekuhaupi'o. He probably died during the early phases of Kamehameha's battles of consolidation.

16. Hewahewa: A Kahuna Nui and councilor who was among those present at the death of Kamehameha at Kamakahonu, Kona. He was a member of the Pa'ao class of Kahuna Nui but later assisted Liholiho, Keopuolani and Kaahumanu in the abolition of the kapu system shortly after Kamehameha's death.

17. Namakaeha: He was a half brother to Kaiana-a-ahuula. He revolted against Kamehameha in mid-1796 and was defeated by Kamehameha at Kaipalaoa in Hilo.
The term place has been defined and used in a number of different ways by a range of writers. Some geographic locations are judged to possess a sense of place, a characteristic that other locations may be judged as lacking. Within this view, place is a perception of the location held by people (rather than being solely a function of the location) and is associated with characteristics that contribute a uniqueness, a specialness, an attachment, a belonging and so on.

On this basis, space is often taken to refer to structural aspects of a physical setting whilst place refers to the use of the space by interacting people. (An example is Eva Hornecker: Space and Place – setting the stage for social interaction. Department of Informatics, University of Sussex.) Some writers make the same distinction but use the alternate labels, ie place = functional, organised, mapped; space = personal, used, practised, open to interpretation.

Geographer Yi-Fu Tuan (Space and Place: The Perspective of Experience) has outlined a spectrum of different interpretations. He added ideas of time and outlined how place, space and time interact through different understandings of them. He suggests that space can be associated with freedom and that place can be associated with safety. At the same time he suggests that place does not necessarily always have a positive set of associations; a sense of fear might also go with a sense of place. His definition of place derives from the idea that a place comes into existence only if people give it meaning and differentiate it from wider, un-special space. Once a locality is named, described, mapped, identified etc it becomes separated from other localities and takes on characteristics and values of its own. If these characteristics then get built upon by social processes, the locality gains a stronger sense of place.

The view that space (as an environment of objects) merely represents a located set of opportunities whilst place arises from sets of mutually-held cultural understandings about behaviour and action is also put forward by a range of other writers (eg Re-Place-ing Space, Steve Harrison and Paul Dourish, Xerox Palo Alto Research Center and Cambridge Lab). For them place is a location that has been invested with understandings about cultural expectations, behavioural appropriateness etc. Places are spaces that hold some form of value – in the way that a house may also be regarded as a home. A place overlays a space but has had something added, whether this be a social meaning, a set of conventions, or some cultural belief or understanding. Residential differentiation, for many people in modern societies, creates such collective identities and sense of place. These help to reinforce and protect (from change/deviation) the locality's key cultural heritages through transmission of cultural awarenesses and residential ties. There are links from ideas of place to ideas of community, although both are concepts open to variable interpretations (The Social Construction and Reconstruction of Community, G Bateson, PhD 1996, University of Central England, now Birmingham City University).

Some writers reverse the distinction above and see place as the geographical location which is transformed into space by people walking and talking across it. Others distinguish geometrical space from anthropological space – the first being given/existential and the second being constructed/produced in realities or in dreams etc.
Whichever way round we wish to use the terms, the sense of a place may represent a strong identity felt by residents, visitors, or people studying the locality. Such an identity goes well beyond the opinions of single individuals and is the outcome of collective social processes (which, admittedly, depend on the interactions of individuals). It can be added to by being written about, painted, photographed or captured in music – any of which may be in response to natural, geographical features of the local landscape or in response to human activity across that landscape.

Writing about places takes a number of forms. Where these go beyond mere factual descriptions, in which the reader is given a tour or is presented with a map or layout, the more evocative writings about places invite the use of metaphor: sayings, stories and images that organise the ideas about a place. There is a belief that space is transformed into place by the application of stories, beliefs, interpreted practices and so on. These are not discrete things: stories, for example, act as one way in which relationships can be interpreted and reinforced or changed within a broader culture. The ways that places (localities or organisations) rely on stories is a fruitful area for analysis.

Whilst space and place have been described as distinct things, in reality they are much more interrelated. Space is not an abstract set of geometrical arrangements but a setting for people to act out their everyday lived experiences. Phenomenological approaches, such as those of Merleau-Ponty (eg Phenomenology of Perception, New York Humanities Press, 2002), make use of the idea of situated space. Dourish sees social actions as embedded in settings that are cultural and historical as well as physical (P Dourish, Where the Action Is: The Foundations of Embodied Interaction, MIT Press 2001). Hornecker points out that people cannot escape spatiality. Space surrounds us; we operate within it. Through this people appropriate space, interpret space and imbue it with meaning. Interacting with space brings psychological meaning for people. The distinction between space and place may be further extended when considering cultural activity via social electronic media, although maybe this simply requires the space to be defined as some form of electronic location and a sense of place developed through electronic social interactions of various kinds.

There are different views of the extent to which the people using spaces can be regarded as active, creative artists or as passive, consuming users of space. Michel de Certeau (The Practice of Everyday Life, translated by Steven Rendall, 1984, University of California Press, Berkeley) points out that although social research methods can study language, tradition, symbolism etc, they have difficulty explaining how people accommodate these things in their everyday life practices. He sets out the tactics available for these people to reclaim a sense of autonomy in the face of commerce, culture and politics, and argues that the study of everyday life practices is one way of penetrating the obscurities that these things bring. Amongst the everyday practices is the inhabiting of spaces – walking in cities and so on. As people walk through cities they weave spaces together in particular subjective ways.
These cannot always be satisfactorily captured objectively (eg through drawing maps to trace routes taken, as maps try to fix too rigidly the flow of life), since it is the experience of walking, of passing through spaces, that counts. Understanding place thus implies attempting to understand how and why people interact with specific kinds of environment in particular kinds of ways.

People may not come entirely fresh to an environment. Childhood experiences of a primal landscape may be one key influence on how they might respond, as may significant later experiences that carry strong emotional values for the person. Such experiences are often ones mediated through family, community, culture, nationality and so on. Where childhood experiences are strong influences, the particular landscape can form part of the structuring of the individual's personality – acting as a reference point against which other places may later be evaluated. Place is thus associated with personal dimensions, psychological dimensions, cultural dimensions and so on.

Yan Xu (Sense of Place and Sense of Identity; East St Louis Action Research Project, 1995, University of Illinois) sees sense of place as a factor that is able to make an environment psychologically comfortable or uncomfortable, and able to be analysed through variables such as legibility/readability; perceptions of and preferences for the visual environment; and the compatibility of the setting with the human purposes in action there. Part of developing a sense of place is defining oneself in terms of a particular locality (Topophilia, Yi-Fu Tuan, 1974).

Understanding why people hold the views that they do has been a rich strand of exploration in sociology, human geography, anthropology and urban planning. Analysts of social action have often been additionally interested in the ways that place or setting might influence individual and collective actions. Erving Goffman (The Presentation of Self in Everyday Life, 1959, Penguin, New York) uses a theatrical metaphor within which different modes of behaviour and interactions can occur 'frontstage' or 'backstage'. Anthony Giddens (The Constitution of Society, 1984, Polity Press, Cambridge) used the notion of locales, which go well beyond being simply spaces to incorporate the ways in which such settings are routinely used to constitute meaning within interactions. William Whyte (City: Rediscovering the Centre, 1988, Doubleday, New York) provided detailed descriptions of how streets were used for social interactions within a changing city.

Placeless spaces are often associated with landscapes that have no special relationship with their specific location (eg 'This hotel room could be in any city in the world and you wouldn't be able to tell'). The link is often that such spaces are mass-produced to standardised formats, mass-designed or over-commercialised. It has been described as there being no sense of 'There' in that place. Again Yan Xu, analysing people's remembrances of significant places, identifies the potential for feelings of loss of place (a humiliating loss of a sense of past, present and even future), placelessness (the distress at not having or being able to attain a sense of place) and rootlessness (an alienation brought about through lack of continuity or an overwhelming sense of change in the place).

On another tack: if places are socially constructed through the social uses of localities, does this just happen or can it be made to happen – ie, can places be made?
Placemaking as a term began to be used in the 1960s/70s by people interested in the role of landscape in the design and development processes. These built on the work of people such as Jane Jacobs (The Death and Life of Great American Cities, 1963, Random House, New York) and William Whyte (The Social Life of Small Urban Spaces, 1980, Conservation Foundation, Washington DC), both of whom offered fresh ideas about designing cities for people to live in. At the same time, writers such as Henri Lefebvre (The Production of Space, 1974) were looking at how cultural spaces were made, used and reproduced through continued practices. Social space came to be seen as being constructed around everyday lived spatial practices, conceived ideas of what is meant by terms such as space, and perceptions about what spaces represent for people. Places were spaces that could be remembered: they could bring emotions, recollections and memories to mind. It became feasible to think more in terms of emergence, of produced possibilities (Elizabeth Ellsworth: Pedagogies and Place: Design, 2005).

The architects and planners influenced by these writers were concerned with the ways that constructed forms might influence the daily experiences of people interacting with those plazas, buildings, waterfronts etc. Architects and planners became concerned with producing spaces that act as places. One aim was to design places that connected into the rest of the locality through a sense of sameness yet retained a distinctiveness, a difference, about them.

Particular cases have been argued for engaging residents in placemaking, eg within regeneration activities (an example is the 2010 publication by the Scottish Government: Partners in Regeneration – Participation in Placemaking), and for the place of public art in cultural placemaking through fostering social and psychological relationships between individuals, communities and localities. At a time of proposed shifts towards a bigger society there have been proposals for more open-source approaches to placemaking, using digital/social media to get collective views on the development of cities and other places. The open calls for views, the crowdsourcing of attitudes, and the broad electronic exchange of information are all aspects of this.

This piece of place-based writing has been intended to begin an exploration of some of the various approaches to ideas of place and space, how one may be related to (or built upon) the other, the emphases that might be available for residents, planners and writers/artists to use in relation to determining a sense of place for any locality, and how this might rely on the use of storytelling and interpretation-making. Hopefully some of this will be developed further.
Osteoporosis in Madison, OH

Osteoporosis is a debilitating condition that can affect people of all ages. The disease slowly takes a toll on movement while causing significant discomfort. Chiropractic care from French Chiropractic and Wellness Center in Madison helps with symptoms of osteoporosis experienced by the residents of Painesville, Mentor and surrounding communities in Ohio.

Osteoporosis: What Is It?

Many have heard the name of the disease, but few really know what it entails. Its name comes from Greek roots meaning "porous bone": osteoporosis is a disease in which otherwise durable bones become porous and weakened. Normal bones have a durable outer structure that encases a soft inner structure in the shape of a honeycomb – rather than being a hardened mass, the inner structure of bone actually has holes throughout. If the holes are enlarged, as they are in the case of osteoporosis, the entire bone structure is weakened, making the bones more susceptible to breakage in the event of a fall. Because of the decreased bone mass and increased bone deterioration, osteoporosis sufferers are frequently in a lot of pain, especially if the disease is in the more severe, advanced stages. Fortunately, there are many ways to prevent the disease from progressing to the point of irreversible severity.

How Common Is Osteoporosis?

The facts and figures on osteoporosis are staggering and suggest that the disease is actually quite common. In the United States, Europe, and Japan, an estimated 75 million people are diagnosed with the disease. In the year 2000, there were an estimated 9 million fractures caused by osteoporosis. The three most common fractures occurred in the forearm (1.7 million), hip (1.6 million) and vertebrae (backbones – 1.4 million). More than half of these fractures (51%) occurred in the USA and Europe.

Unbelievably, the risk of being diagnosed with osteoporosis is equal to, and sometimes greater than, the risk of being diagnosed with cancer. For men, the lifetime risk of experiencing an osteoporosis fracture is nearly the same as that of being diagnosed with prostate cancer (approximately 30%). For women, the numbers are even more dramatic: it is estimated that a woman has a 1 in 9 risk of being diagnosed with breast cancer over the course of her lifetime, but a 1 in 6 risk of experiencing a hip fracture related to osteoporosis. In total, up to 50% of women – and up to 30% of men – will suffer an osteoporosis-related fracture in their lifetime. Those who have previously had an osteoporosis fracture are up to 86% more likely to suffer an additional fracture in their lifetime, yet more than 80% of people who previously suffered an osteoporosis fracture will neither be identified nor treated for the condition. For these reasons, and many more, regular screening for osteoporosis is important.

Signs And Symptoms of Osteoporosis

For many people, an osteoporosis diagnosis doesn't come until it's too late. Because you can't feel your bones getting weaker, you won't realize that you have the disease until you break a bone. You don't have to do anything major in order to experience a bone fracture due to osteoporosis: sometimes, something as simple as a forceful sneeze can lead to a fracture. For women, menopause is not just a transition into a new phase of life: within 5 to 7 years after menopause, a woman is susceptible to losing up to 20% of her bone mass, making her more at risk for osteoporosis than before menopause.
In the case of spinal (vertebral) fractures, they can sometimes go undiagnosed, or be misdiagnosed as something else. The most common misdiagnoses of osteoporosis include spinal deformity (kyphosis, Dowager's hump, stooped posture). Sometimes the fractures are so slight that they go undetected and are only diagnosed (along with osteoporosis) after an x-ray is performed.

Osteoporosis Risk Factors

While this is by no means a conclusive list of osteoporosis risk factors, the following yes/no quiz can easily determine if you are at risk of developing osteoporosis. The more questions you answer in the affirmative, the greater your risk of developing the disease:
- Are you a woman?
- Are you of advanced age?
- Do you have a family history of osteoporosis?
- Are you small and/or thin? (Studies have determined that the smaller you were in infancy, the greater your risk of developing osteoporosis.)
- Are you White/Non-Hispanic, Latino, and/or Asian? (These ethnic groups are at a greater risk of developing osteoporosis than other ethnic groups; this does not mean, however, that other groups are not at risk.)
- Do you have a history of broken bones?
- Do you smoke? (Your chances of developing the disease increase if you're a regular smoker.)
- Do you drink alcohol? (Those who drink more than four units of alcohol a day are at greater risk of developing osteoporosis than non-drinkers.)
- Are you on a prescription regimen of corticosteroid drugs? (Extensive use of these drugs – most commonly prescribed to asthma sufferers – is the leading cause of secondary osteoporosis. Other prescription regimens that put you at risk of developing the disease include anticonvulsants, commonly prescribed to those who suffer from epilepsy.)
- Is your diet low in calcium and Vitamin D? Is your diet high in caffeine, alcohol, salt, and protein? (These two factors will greatly increase your osteoporosis risk.)
- Do you have a sedentary lifestyle? Do you suffer from impaired neurological and muscular function?
- Do you have hormonal deficiencies? (The most common hormonal deficiencies that contribute to osteoporosis include low estrogen in women, low testosterone in men, and pituitary issues.)
- Do you currently suffer, or have you suffered, from Anorexia Nervosa or another eating disorder?
- Do you suffer from rheumatoid arthritis?
- Do you suffer from gastrointestinal issues that lead to a problem with nutrient absorption, such as Celiac Disease, Crohn's Disease, or general gluten sensitivity/intolerance?

Fractures and Osteoporosis

A hip fracture is the most common – and, unfortunately, the most deadly – type of fracture that results from osteoporosis. Such a fracture can lead to prolonged convalescence, pneumonia, deep vein thrombosis (DVT) and even death (if not treated properly or caught in time). Within the first year of receiving a hip fracture, up to 24% of patients will die as a result, and that high risk of death persists for up to five years after the initial fracture took place. More than 40% of people who receive a hip fracture report that they are subsequently incapable of walking on their own. More than 60% of people who receive a hip fracture report that they continue to need assistance for more than a year after the initial fracture took place. And, in the years following a hip fracture, one-third of people report that they are confined to a nursing home as a result of their injury.
Women over 50 are at the greatest risk, statistically, of getting a hip fracture: almost 75% of such fractures occur in women of that age group and older. Of those women, 2.8% are at risk of death from such a fracture – the same percentage as those at risk of dying from breast cancer, and four times greater than those at risk of dying from endometrial cancer.

Five percent of men and 16 percent of women are at risk of developing a vertebral fracture in their lifetime. And while a woman over the age of 65 has a 1-in-4 risk of developing a second vertebral fracture after receiving her first one, her risk will decrease to 1-in-8 if she receives appropriate treatment. A vertebral fracture can lead to decreased lung function, back pain and deformity, and loss of height and mobility. Such symptoms will inevitably lead to a decrease in daily function, as well as a significant loss of self-esteem, and even clinical depression. Unfortunately, many vertebral fractures go undiagnosed: only one-third of vertebral fractures are reported and subsequently treated. The after-effects are equally frightening: a vertebral fracture increases the risk of further fractures, both vertebral and non-vertebral, in osteoporosis sufferers.

Diagnosis of Osteoporosis

Bone Mineral Density (BMD) scans measure your bone density. Another common diagnostic tool is a DEXA scan (dual energy x-ray absorptiometry) combined with laser technology. This pain-free scan will provide you with a T-score, which estimates your risk of developing a fracture in the future (a rough illustration of how T-scores are commonly grouped appears at the end of this page). Bear in mind that while different tests will measure density in different parts of your body – such as the heel or the wrist – no test is 100% accurate.

How Chiropractic Care Can Help Treat Osteoporosis

A chiropractor will help you with your osteoporosis diagnosis by offering techniques that increase your range of motion, decompress spinal fracture sites, alleviate the pain of the disease, offer relaxation and rehabilitation, and even provide dietary and nutritional guidance. By providing these services, a chiropractor offers a non-surgical, hands-on approach to treatment, which will help you prevent future falls and fractures. Isn't it time you tried chiropractic care for you and your family? If you are experiencing any of the common symptoms of osteoporosis, please contact our office now to schedule an appointment.
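As general context only (this is not the clinic's protocol and not a substitute for a clinician's interpretation), DEXA T-scores mentioned above are commonly grouped using WHO-style reference ranges; the short sketch below simply illustrates those cut-offs.

```python
def classify_t_score(t_score: float) -> str:
    """WHO-style interpretation of a DEXA T-score (reference ranges only)."""
    if t_score >= -1.0:
        return "normal bone density"
    if t_score > -2.5:
        return "osteopenia (low bone mass)"
    return "osteoporosis"

# Example scores, purely illustrative
for score in (0.3, -1.8, -2.9):
    print(f"T-score {score:+.1f}: {classify_t_score(score)}")
```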
What is a 'core habitat' and why would a shark have one? I've spent a large chunk of my career listening to pings from tags on sharks and following them around bays, headlands and islands, shadowing their daily movements. And those movements are not random, not for white sharks in any case. We've spent many hours sitting in one spot with the shark barely moving from an area before it suddenly decides it wants to be somewhere else and moves almost in a straight line in that direction. This may be a specific reef where it likes to hang out or a seal colony full of prey it would like to eat. But repeated visits to these particular areas, which can sometimes be extremely small, make them 'core' areas or hotspots, and they play crucial roles in the shark's daily life.

So how do we get there? Well, firstly, if you're keen on learning about the specifics of how we tag and track sharks and the different techniques available, I've got you covered, but you'll need to take a detour to our page on 'Tagging and tracking sharks: the how and why'. For this study, we used the 'active acoustic telemetry' method, and in short, that means we placed an external acoustic tag on a free-swimming white shark and followed the pings of the shark up and down the coast – in this case, off Gansbaai, South Africa. The study was the main basis of my Master's thesis at the University of Pretoria, built on previous research I'd undertaken with Oceans Research in Mossel Bay, and led on to further research published in Functional Ecology. One day I'll write a post on how we calculated the home ranges of sharks in Mossel Bay, or maybe on how Hidden Markov Models revealed sex-specific and individual hunting strategies of white sharks in Gansbaai. Although, until then you'll have to follow the references to those studies!

Anyway, what we were trying to learn from this study was how the shark movements in Gansbaai, particularly around Dyer Island and Geyser Rock, compared to the previously mentioned study in Mossel Bay. To do this, we were tagging and tracking white sharks close to the seal colony on Geyser Rock.

Now there are a few differences between the two areas. Mossel Bay, around 250 km to the east of Gansbaai, is relatively protected and has a seal colony tucked right in the corner of the bay, close to the shoreline. The sharks visit this seal colony, aptly named 'Seal Island', predictably at dawn and dusk throughout the winter months. The island is so close to the shore that if you book a room at the nearby hotel at the right time of year and watch the seals leaving in the morning, you can actually watch white sharks hunting them from your balcony! You'll need a bit of luck, good conditions and a sharp pair of eyes, which I guess I must have had back then, because I witnessed several interactions from there during my time in Mossel Bay.

I've put a couple of maps below so you can see the two settings for the studies. Mossel Bay gains a lot of protection from the Point headland, making it ideal for active tracking of white sharks. It's also on the Indian Ocean side of the two oceans, meaning the water's a few degrees warmer. Gansbaai, on the other hand, is far more exposed; Danger Point offers a little protection to Kleinbaai Harbour (where I used to live), but after that you're pretty much exposed to the South Atlantic. And the seas get big. Very big. You also get Dyer Island and Geyser Rock, roughly 10 km from shore. There are multiple sea bird colonies on Dyer Island, including the endangered African Penguin; you can learn more about them here.
But they aren't the reason white sharks come to the area; there are 50-60,000 Cape fur seals on Geyser Rock, just across a narrow stretch of water called Shark Alley. That's a lot of seals!

By the time we got to tagging and tracking the sharks of Dyer Island, I had already worked a couple of seasons' cage diving with Marine Dynamics, who sponsored this research. I had become familiar with the way certain sharks would patrol extremely close to Geyser Rock in Shark Alley, grabbing seals as they thermoregulate in the shallow waters. One of the sharks I knew to use this technique quite often was an adult male we called 'Zane'. Zane was one of the easier sharks to identify at Dyer Island and appeared each winter without fail. His most distinguishing feature was his almost complete lack of an upper caudal lobe (tail fin). After that, you just had to look at his dorsal fin and check for the black marks of a previous satellite tag he had carried from a research project almost a decade ago. I have to thank Grant Tuckett, skipper for White Shark Projects, for this initial tip-off during my first Dyer Island season. I already had access to the database of shark fin IDs from that satellite tracking project, and within minutes of checking, I found his tagging date, from Dyer Island in 2003.

Cape fur seals at Geyser Rock. The narrow channel between Geyser Rock and Dyer Island, Shark Alley, can be quite treacherous for Cape fur seals during the winter time.

We collected more and more data at Dyer Island and Geyser Rock. The following season I even managed to tag another legendary shark of the area, Slashfin, who, like Zane, had been visiting Dyer Island for more than a decade. He's most recognisable by an old injury to his fin and, also like Zane, was seen almost every winter without fail. Rather than perform the movements close to the seals in Shark Alley, Slashfin used an area between the offshore side of Geyser Rock and a breaking pinnacle known locally as 'Wilfred's Klip' (Wilfred's Rock). Again, his movements were close to the seals during the day and further offshore at night. The only time we saw the sharks stay reasonably close to Geyser Rock after dark was when there was strong moonlight, during which both Zane and Slashfin remained just offshore of Geyser Rock, between the seal colony and their offshore foraging grounds.

Finally, it was time to analyse this data. The first step was to define how movement patterns matched up between the individuals and to those sharks tracked in Mossel Bay. We used rates of movement (how much the animal moved between tracking positions), a linearity index (how straight the tracking positions were), and the distance of each tracking position from Geyser Rock. These confirmed the sharks were using the island system (where they had access to seals) in a much more tortuous manner during the day than at night. Completely different from what was found in Mossel Bay.
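The papers should be consulted for the exact calculations, but as a rough sketch of the two summary statistics just mentioned, here is how a rate of movement and a linearity index can be computed from a series of tracking fixes. The coordinates below are invented, not real tracking data, and the simple Euclidean geometry assumes fixes projected onto a local metre grid.

```python
import math

def step_metrics(track):
    """Summarise a track given as a list of (time_s, x_m, y_m) fixes.

    Rate of movement = distance covered between successive fixes / elapsed time.
    Linearity index  = straight-line distance from first to last fix / total path
                       length (1.0 = perfectly straight, near 0 = highly tortuous).
    """
    path_length = 0.0
    rates = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        path_length += step
        dt = t1 - t0
        if dt > 0:
            rates.append(step / dt)
    net_displacement = math.hypot(track[-1][1] - track[0][1],
                                  track[-1][2] - track[0][2])
    return {
        "mean_rate_m_per_s": sum(rates) / len(rates) if rates else 0.0,
        "linearity_index": net_displacement / path_length if path_length else 0.0,
        "path_length_m": path_length,
    }

# Hypothetical daytime fixes near the seal colony (seconds, easting m, northing m)
day_track = [(0, 0, 0), (600, 40, 30), (1200, 20, 70), (1800, 60, 50), (2400, 30, 90)]
print(step_metrics(day_track))
```

A tortuous daytime track like the invented one above gives a low linearity index, whereas a shark commuting offshore in a straight line at night would score close to 1.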
Next, we had to define the activity areas or 'home range' of white sharks at Dyer Island. This wasn't easy because, by nature, sharks can't go onto land, and with all the rocky outcrops and island ridges, there was a fair amount of no-go area about. We used biased random bridges (BRB) and a movement-based kernel density estimate (MKDE) to account for this. We were effectively creating home range estimates in movement corridors, around the barriers created by the heterogeneous structure of the environment.

Now that's a mouthful and yet contains none of the details of how we did it, so in lay terms: we defined the core areas the sharks used in this complex environment. The large sharks had clearly defined hotspots. They were predictably found within them, even during the years before and after this study. But we also had a small shark who did the same, making limited movements between a spot in the reefs and the cage diving area marked as the Geldsteen on the map. The only shark that roamed a comparably larger activity area was a medium-sized shark, a young-adult male, who moved between all the other sharks' core areas without really settling on one to use over and over. All of these areas were smaller than those found in Mossel Bay, though. It seems the white sharks of Dyer Island can fulfil most of their needs without travelling far from the seals.

I have to thank Simon Benhamou, who wrote the MKDE package, for helping troubleshoot our estimates. I was a complete novice at coding when I started this project, and his advice was invaluable to me in completing my Master's on time, as was that of my colleagues at the Mammal Research Institute, University of Pretoria. For more details on the methods used, both my Master's thesis and our papers can be found in the publications section of my Academic Profile Page. The Supplementary Material provides specific details on the tagging, BRB and MKDE methods.

- Jewell OJD, Johnson RL, Gennari E, Bester MN (2013) Fine scale movements and activity areas of white sharks (Carcharodon carcharias) in Mossel Bay, South Africa. Environmental Biology of Fishes 96:881-894
- Towner AV, Leos-Barajas V, Langrock R, Schick RS, Smale MJ, Kaschke T, Jewell OJD, Papastamatiou YP (2016) Sex-specific and individual preferences for hunting strategies in white sharks. Functional Ecology 30:1397-1407
- Johnson R, Bester MN, Dudley SFJ, Oosthuizen WH, Meÿer M, Hancke L, Gennari E (2009) Coastal swimming patterns of white sharks (Carcharodon carcharias) at Mossel Bay, South Africa. Environmental Biology of Fishes 85:189-200
- Jewell OJD, Wcisel MA, Gennari E, Towner AV, Bester MN, Johnson RL, Singh S (2011) Effects of smart position only (SPOT) tag deployment on white sharks Carcharodon carcharias in South Africa. PLoS One 6:e27242
<urn:uuid:1a933388-1688-4380-a234-4c752fcb9e94>
CC-MAIN-2022-33
https://oliverjewell.com/core-habitats-of-white-sharks-at-dyer-island/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00005.warc.gz
en
0.954545
2,196
2.890625
3
The Legend of One of the Holiest, Most Fought-Over, Sought-After Artifacts of Mankind

Charlemagne, Barbarossa, Hitler, Napoleon, General Patton and the quest for possession of the Holy Lance

"…whoever possesses this Holy Lance and understands the powers it serves, holds the destiny of the world in his hands for good or evil" – Trevor Ravenscroft, The Spear of Destiny

According to legend, the lance holds sacred powers, and the person who possesses it is thus invincible and capable of ruling the world.

In ancient Rome, crucifixion was considered such an excruciating (from the word "crucify") way to die that the Romans did not use it to execute their own citizens but rather reserved it as a way to torture and humiliate slaves, traitors and foreign criminals. The length of time for the condemned to die could range from hours to days depending on the condition of the person and the method of crucifixion. To hasten death and ensure that the crosses were empty by the day of the Sabbath, soldiers would often shatter and crush the bones of the condemned with an iron club. However, when the Roman centurion Longinus came upon Jesus, he noticed that Jesus was already dead and refused to smash his bones. To prove to all present that Jesus had died, Longinus pierced his side with a lance, causing blood and water to flow from the wound and fulfilling the prophecy that the Messiah would die without broken bones and become resurrected. And so the legend begins.

At the time Longinus encountered Jesus on the cross, he had long been suffering from a severe eye disease that nearly blinded him. This may explain why he was assigned to oversee the crucifixions. When he stabbed Jesus, some of Jesus' blood and water fell into the soldier's eyes, and he was instantly healed. According to Mark 15:39 he then exclaimed, "Indeed, this was the Son of God!" He was so taken by the miracle that he left the army, converted and became a monk. Eventually, however, in 45 AD, Longinus was beheaded for his beliefs. Years later the man who was said to have held, for one brief moment, the destiny of the world in his hands was venerated as a saint.

Somewhere along the line, the lance that had touched Jesus' body and blood passed into the hands of Maurice, the head of the 3rd-century garrison of Roman soldiers who came from Thebes (Upper Egypt). In 287, Maximian, the junior Roman Emperor, ordered Maurice and 6,599 of his men to attack local Christians in a town in what is now Switzerland (today, St. Maurice-en-Valais), offer sacrifices to the pagan gods and pay homage to the emperor. Maurice and his men refused, and at first the emperor killed every 10th man to pressure the soldiers to obey. But when they still refused to follow orders, he ordered them all to be killed. The bravery and martyrdom of Maurice became legendary, and Maurice later became a patron saint of the Holy Roman Emperors and the patron saint of soldiers, swordsmiths, armies and infantrymen. For centuries, Holy Roman Emperors were anointed at his altar in St. Peter's Basilica.

Constantine the Great

From there, the lance eventually ended up with Constantine the Great, the first Roman Emperor to embrace Christianity, in the early 4th century. Centuries later, Pope Leo III gave the lance as a gift to Charlemagne (742-814), also known as Charles the Great, whose empire united most of Europe for the first time since the Roman Empire. Charlemagne is said to have carried the spear through 47 battles and died when he accidentally dropped it.
The German king Henry I desperately wanted the lance as well and, after doing everything in his power to get it, ended up acquiring it for a high price. On 15 March 933, on the day of St. Longinus, he defeated the Hungarians at the Battle of Riade with the lance and later successfully set his son, Otto I, spear in hand, to continue his reign. In 962, still in possession of the lance, Otto I was crowned emperor in Rome and became "the first of the Germans to be called the emperor of Italy." In 996, Otto III was so convinced of the lance's sacred powers that during his march to Rome to reclaim the crown, he prominently displayed it at the front of his army.

The German Holy Roman Emperor Frederick I Barbarossa is also said to have possessed the lance, and when he dropped it in a creek, his downfall was sealed. He's the same king who is supposed to be sleeping in a cave in Bavaria somewhere, surrounded by his faithful knights, his red beard growing ever longer, waiting for the ravens to stop flying around the mountain so he can restore Germany to its former greatness.

Around 1350, Charles IV had a golden sleeve put over the silver one, inscribed with the words "Lancea et clavus Domini" (Lance and nail of the Lord). During the Napoleonic Wars, the lance was transferred from Nuremberg to Vienna to protect it from Napoleon Bonaparte, who supposedly tried to obtain it after the Battle of Austerlitz. During WWII, after the annexation of Austria, Hitler ordered the lance and the rest of the Hapsburg treasury moved to his spiritual headquarters in Nuremberg. US troops found the lance in an underground vault in the tunnels there, and under US General George S. Patton the lance was returned to Vienna. According to legend, they took possession of the lance on April 30, 1945, and less than two hours later Adolf Hitler killed himself in a bunker in Berlin.

The lance is made of iron, is 50.7 cm long and was stored in the royal cross to keep it safe. The staff of the lance was wooden but can no longer be found. Analyses by Montan University in Leoben, Austria, in 1914 and by Dr. Robert Feather, an English metallurgist, in 2003 both found that the holy lance probably dates back to the 7th century AD. Dr. Feather also confirms that the metal pin claimed to be the nail from the crucifixion is consistent in length and shape with a 1st-century AD Roman nail. In an article about the holy lance posted on the University of Vienna website, Mr. Mathias Mehofer of the Vienna Institute for Archaeological Science refers to the lance, "as part of the insignia of the Holy Roman Empire, [the object is] one of the most significant objects of the Imperial Treasury from a historical and cultural standpoint," and calls it a "stroke of luck for archeology since no other lance from the time period has survived in such good condition."

Legend or not, 12 centuries is a heck of a long time

And even if you don't believe all the legends surrounding the lance, you cannot deny a sense of awe gazing upon an object that has witnessed 1200 years of history. 1200 years! And you're looking at it. And maybe 1200 years from now, someone else, just like you, will be gazing upon the exact same object. Awe-inspiring.

Where to view the Holy Lance: You can view the holy lance at the Imperial Treasury in the Imperial Palace in Vienna (Hofburg, Schweizerhof); subway stop: Herrengasse (U3), or tram (1, 2, 46, 49, D at Dr. Karl Renner Ring).

More sources on the legend of the Holy Lance:

Bouchal, Robert, and Gabriele Lukacs.
Geheimnisvoller Da-Vinci-Code in Wien: Verborgene Zeichen & versteckte Botschaften. Wien: Pichler, 2009. Print.

Crowley, Cornelius Joseph. The Legend of the Wanderings of the Spear of Longinus. Heartland Book, 1972.

Dreger, Ronald. "Die 'Heilige Lanze' zwischen Wissenschaft und Legende." Weblog post. University of Vienna Online Newspaper. University of Vienna, 4 Apr. 2005. Web. 5 Oct. 2013: http://www.dieuniversitaet-online.at/beitraege/news/die-heilige-lanze-zwischen-wissenschaft-und-legende/543/neste/75.html

Kirchweger, Franz, ed. Die Heilige Lanze in Wien. Insignie – Reliquie – Schicksalsspeer [The Holy Lance in Vienna. Insignia – Relic – Spear of Destiny]. Vienna: Kunsthistorisches Museum, 2005.

Kirchweger, Franz. "Die Geschichte der Heiligen Lanze vom späteren Mittelalter bis zum Ende des Heiligen Römischen Reiches (1806) [The History of the Holy Lance from the Later Middle Ages to the End of the Holy Roman Empire (1806)]." Die Heilige Lanze in Wien. Insignie – Reliquie – Schicksalsspeer. Vienna: Kunsthistorisches Museum, 2005, 71–110.

MacLellan, Alec. The Secret of the Spear: The Mystery of the Spear of Longinus. Souvenir Press, 2005 (Reprint).

Ravenscroft, Trevor. The Spear of Destiny. [S.l.]: Wehman, 1969. Print.
<urn:uuid:95587d39-d3a3-427f-aabe-4dc97b92429f>
CC-MAIN-2022-33
https://www.kcblau.com/tag/holy-cross/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571056.58/warc/CC-MAIN-20220809155137-20220809185137-00404.warc.gz
en
0.935248
2,101
2.546875
3
Staphylococcus aureus is a common pathogen found in the community and in hospitals. It is a facultative anaerobic Gram-positive bacterium commonly found as part of the normal flora on the skin and nasal passages of humans (2). Previously, S. aureus infections could be effectively treated with antibiotics. However, in the past 2 decades, an increasing number of strains of S. aureus have become resistant to a variety of antibiotics. Methicillin-resistant S. aureus (MRSA) is one of the more dangerous antibiotic-resistant strains. MRSA strains are prevalent in hospitals and are fast becoming a common community-acquired infection (3, 4). For this reason, research into the development of immunotherapeutic approaches, either active or passive, has seen a resurgence in recent years (5). Several studies have investigated the many surface proteins and virulence factors of S. aureus. Vaccine or therapeutic antibody strategies have focused mainly on capsular polysaccharide (CPS), virulence factors, surface proteins, and iron-regulated proteins. The putative protective capsular polysaccharide antigen has been developed into potential anti-S. aureus vaccines. The leading candidate of this type of vaccine is StaphVAX, a bivalent polysaccharide and protein-conjugated vaccine (16, 17). Other strategies for developing vaccines have targeted virulence factors and surface proteins, including alpha-toxin (a nontoxic derivative, H35L) (7, 18), clumping factor A (ClfA) (19), fibronectin binding protein A or B (FnBPA or FnBPB) (12), Panton-Valentine leukocidin (PVL) (20), and protein A (11). Iron-regulated proteins, such as Merck V710, which is based on iron-regulated surface determinant B (IsdB) (6, 21), have also been investigated as possible vaccine targets, as have passive immunotherapies directed against virulence determinants, such as monoclonal alpha-toxin antibodies, polyclonal PVL antibodies, and anti-ClfA monoclonal antibodies (Aurexis). To date, most of the clinical trials for vaccines or passive immunization against S. aureus have failed (26). The authors concluded that the most important reason for the failure of these trials was that these vaccines are based on the production of antibodies against infection. Furthermore, the above-named vaccines derive from a single specific antigen or from proteins of one protein family. A highly effective vaccine may need several antigenic components (6), such as a series targeting multiple virulence factors. Recent research indicated that a T-helper 17 (Th17)-interleukin 17 (IL-17) axis may provide strategies for the development of an effective broad vaccine against S. aureus infections (26). Therefore, targets for vaccines could be expanded to include any antigen that induces an immune response against infection, for instance a Th1- and/or Th17-mediated immune response. S. aureus is known to secrete many virulence factors through two main secretion systems, Tat and Sec (27, 28). Two virulence factors of S. aureus produced by the 6-kDa early-secretion antigen (ESAT-6) secretion system, EsxA and -B (SaEsxA and SaEsxB) (29, 30), play important roles in establishing infections in the host (29). Furthermore, a new study found that SaEsxA modulated host cell apoptosis and that, when combined with SaEsxB, it could mediate the release of staphylococci from the host cell (31). SaEsxA and SaEsxB proteins are highly conserved in the genomes of different clinical S. aureus strains (31). ESAT-6-like proteins are also found in many other Gram-positive bacteria (32).
The ESAT-6 secretion system in S. aureus is similar to the Esx-1 protein secretion system in Mycobacterium tuberculosis. In this study, we expressed and purified recombinant SaEsxA (rSaEsxA) and rSaEsxB. We investigated whether these two recombinant ESAT-6-like proteins had immunogenic activities able to induce a host immune response against staphylococcal infection. We tested the immunoprotective effects of rSaEsxA and rSaEsxB, alone or combined (rSaEsxA+B), against invasive S. aureus infection in a murine model.

MATERIALS AND METHODS. Bacteria, plasmids, antibodies, and animals. The S. aureus ATCC 25923, ATCC 29213, Newman, and USA300 strains were stored at −80°C until use. Escherichia coli strain BL21(DE3) was used for protein expression. The recombinant expression vector pETH was obtained from K. Y. Yuen. Specific-pathogen-free BALB/c mice were supplied.

Background: The aim of the present study was to improve the topical delivery of hirsutenone (HST), a naturally occurring immunomodulator, employing Tat peptide-admixed elastic liposomes (EL/T). Atopic dermatitis (AD) was induced by application of diphenylcyclopropenone to NC/Nga mice. Therapeutic improvement of AD was evaluated by clinical skin severity scores. Immunological analyses of inducible nitric oxide synthase and cyclooxygenase-2 levels in the skin, and of interleukin (IL)-4, IL-13, immunoglobulin E and eosinophil levels in the blood, were also performed. Results: EL systems were superior to a conventional cream, showing better flux values in a permeation study. The addition of Tat peptide further increased the skin permeation of HST. In an efficacy study with AD-induced NC/Nga mice, an HST-containing EL/T formulation brought a significant improvement in both the skin severity score and immune-related responses, namely the levels of nitric oxide synthase, cyclooxygenase-2, IL-4, IL-13, immunoglobulin E and eosinophils. Conclusion: A novel EL/T formulation was developed for topical delivery of HST to treat AD effectively.

Deformability was expressed as D = J × (rv/rp)², where D is the deformability index of the vesicle membrane, J is the amount of vesicle suspension extruded in 5 minutes, rv is the size of the vesicle after extrusion, and rp is the pore size of the barrier (26).

In vitro skin permeation study. Solubilization of HST for sink conditions: To maintain sink conditions in the receptor compartment for HST, DENA was selected as a hydrotropic agent for solubilization of the hydrophobic drug (27). An excess quantity of HST was added to PBS containing various concentrations of DENA and vortexed. The mixture was shaken intermittently at ambient temperature for 24 hours to attain equilibrium. The supersaturated sample was centrifuged at 12,000 rpm for ten minutes to separate the undissolved HST. The supernatant was filtered through a 0.45 μm membrane filter (Whatman, Piscataway, NJ) and diluted with methanol for the HST assay by HPLC.

Skin permeation of HST: An in vitro permeation study was conducted with vertical Franz diffusion cells as previously described (18, 28). Skin tissues were excised from hairy ICR mice whose dorsal hair had been carefully removed using electric clippers, and rinsed with phosphate buffer. A round piece of dorsal skin was then carefully mounted onto the receptor compartment of the diffusion cells with the stratum corneum facing toward the donor compartment. The receptor compartment was filled with 10 mM PBS (pH 7.4) solution containing 1.5 M DENA and was maintained at 32°C.
Each formulation containing an equivalent quantity of HST (5.0 mg) was applied to the skin surface, which had an available diffusion area of 1.76 cm². Aliquots (0.5 mL) were withdrawn at predetermined time intervals and analyzed by HPLC. The cumulative amount of drug permeated per unit area was plotted as a function of time, and the steady-state permeation rate was determined from this plot. Differences were considered statistically significant at p < 0.05 unless indicated otherwise.

Results and discussion. Characteristics of topical formulations: EL formulations were characterized by vesicular size, zeta potential, loading efficiency and deformability index. Vesicular size was measured at typically 130-150 nm, which is recognized as an ideal size for skin delivery (33). Surface charges on the EL were measured at about −30 mV but were neutralized to about −10 mV by the addition of Tat peptide, because of electrostatic adhesion of the cationic peptide to the vesicular surface. EL formulations demonstrated a high loading efficiency of HST, over 70%. HST was efficiently encapsulated into the liposome because of its lipophilicity. By comparison, liposomal encapsulation of oregonin, a hydrophilic diarylheptanoid in glycoside form, has been shown to be markedly lower than that of HST (34). The deformability index of EL, an essential feature of elastic liposomes for skin penetration enhancement, was observed at about 60, which is a three- to fourfold higher value than that of conventional liposomes (14, 34). The incorporation of an edge activator, Tween 80, destabilized the lipid bilayers, thereby increasing the deformability of the vesicles (35). The stress-dependent adaptability of EL imparts a special ability to pass through the skin barrier easily, which in turn provides an adjuvant effect for HST permeation. Conventional oil-in-water cream formulations were prepared successfully and appeared as white, opaque, homogeneous semifluids with no bleeding or phase separation. HST was solubilized in the oil phase, and no drug crystals were observed, in the presence of the antioxidant BHT.

Follicular helper T (Tfh) cell help is crucial for activation of B cells, antibody class switching and germinal center (GC) formation. Tfh generation is positively regulated at particular steps by specific cytokine, surface receptor and transcription factor signaling (including signal transducer and activator of transcription 3) and the repressor miR-155. On the other hand, Tfh generation is negatively regulated at particular steps by specific cytokine (IL-2, IL-7), surface receptor (PD-1, CTLA-4) and transcription factor (B lymphocyte maturation protein 1, signal transducer and activator of transcription 5, T-bet, KLF-2) signaling and the repressor miR-146a. Interestingly, miR-17-92 and FOXO1 act as a positive and a negative regulator of Tfh differentiation, depending on the time of expression and disease specificity. Tfh cells may also be generated by the conversion of other effector T cells, as exemplified by Th1 cells converting into Tfh during viral infection. The mechanistic details of effector T cell conversion into Tfh are yet to be clear. To manipulate Tfh cells for therapeutic implications and/or for effective vaccination strategies, it is important to know the positive and negative regulators of Tfh generation. Hence, in this review we have highlighted and interlinked molecular signaling from cytokines, surface receptors, transcription factors, ubiquitin ligases and microRNAs as positive and negative regulators of Tfh differentiation.
(39, 40). In addition, activin A signaling is required for down-regulation of CCR7 and up-regulation of CXCR5 during Tfh differentiation from human naive CD4+ T cells (26). The down-regulation of CCR7 and up-regulation of CXCR5 leads to migration of early Tfh cells from the T:B cell border to the interior of the B cell follicle. This stage of Tfh generation is inhibited by IL-2 and CTLA-4 from early Tfh, Treg and Tfr cells (41, 42). Understanding how these early Tfh cells cross the barrier of intrinsic CTLA-4, Treg and Tfr regulation, and/or whether generation of Tfh cells is spatiotemporal, is yet to be discovered. Once this barrier is crossed, the late events in the GC involve stable interaction of T and B cells through signaling lymphocyte activation molecule-associated protein (SAP)/signaling lymphocyte activation molecule (SLAM) signaling, which further allows crosstalk between T and B cells. SAP/SLAM signaling also regulates ICOS and CD40 expression. At this juncture, ICOS/ICOSL signaling is critical, as blocking ICOS signaling leads to reversion of these cells to other effector T cells by downregulation of CXCR5 and upregulation of CCR7, resulting in migration of these cells off the B cell follicle (39). At this particular point, Tfh differentiation can also be negatively regulated through IL-2 and CTLA-4 from Tfh or Tfr cells. Thus cytokines, transcription factors, surface receptors, ubiquitin ligases and miRNAs act as positive and negative regulators of Tfh differentiation, with mechanistic details as follows.

Figure 1. Follicular helper T cell differentiation and inhibition is multi-step, multifactorial and spatiotemporal. The first step for naive CD4+ T cells to differentiate into Tfh involves antigen presentation by dendritic cells and CD28 co-stimulation, leading to expression …

Cytokines as Positive and Negative Regulators of Tfh Differentiation. Cytokine signaling is critical for cell survival, differentiation and proliferation, and also for undergoing programmed cell death. Along with antigen and costimulatory molecules, cytokine signaling plays a major role in driving naive CD4+ T cells to differentiate into specific effector T cell subsets. In studies with IL-21 and IL-6 knockout mice, it has been found that these cytokines are essential for Tfh differentiation. IL-21 acts cell-intrinsically on naive T cells to drive differentiation into Tfh through Vav1 (43), whereas IL-6 acts both intrinsically and extrinsically to enhance IL-21 production through c-Maf (44). Furthermore, IL-27, a heterodimeric cytokine, is crucial for the survival of activated cells as well as for the expression of Tfh markers. IL-27 enhances IL-21 production from naive CD4+ T cells and thus supports GC development and B cell functions (45). However, in humans, along with IL-21 and IL-6, other cytokines such as IL-12 and TGF-β are involved, either exclusively or in combination, in Tfh differentiation (25). The interferon cytokines are involved in clearance of intracellular infection and appear to have positive roles in Tfh differentiation. The type I interferons IFN-alpha/beta are involved in incomplete Tfh differentiation, because they can stimulate BCL-6, CXCR5 and PD-1 expression through STAT1 signaling without IL-21 production.
<urn:uuid:ca75a561-175d-4d9b-b2ab-316e9dba9cc6>
CC-MAIN-2022-33
https://rmrfotoarts.com/tag/ilk/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572077.62/warc/CC-MAIN-20220814204141-20220814234141-00605.warc.gz
en
0.933165
3,122
3.3125
3
The Elements of the Universe You have likely heard the words "Elements" or "Tattwas" thrown around in esoteric circles. However, what are they exactly? The Foundations of the Four Elements With careful contemplation, one comes to the insight that every "happening", occurs within space — the actual "material" space, but more importantly the space of consciousness in which the material space is also contained, in our experience. These happenings include the "objective" phenomena outside, in the material world and "subjective" phenomena like thoughts, feelings and emotions. Experience, and existence itself is consciousness and it's contents. In the Tantric lore, this is referred to as Shiva (consciousness) and Shakti (energy and ripples of energy or dance of the "goddess") within consciousness (or space, which can be gleaned to be the material analog for consciousness itself, if one contemplates deep enough). In the various Indian art forms of Bharatanatyam, Kathakali and such, the dance of the contents of the consciousness is expressed through metaphors and myths. Indeed, most older forms of art could be said to have had esoteric significance. The contents of consciousness (Shakti, the mother goddess/Gaia) , or the "clockwork" of the universe — whether it be objective or subjective — can be said to perform expansion, contraction, mediation between expansion and contraction and "materialization". These are the elements that make up the clockwork and indeed, it is the fundamental nature of the universe. Every other "happening" or performance can be said to be the permutations and dances of these basic elements — whether it be objective phenomena or subjective thoughts and feelings. On Earth, or the world that we experience where our consciousness is "tuned into", the expansion can be said to be encapsulated best by "fire". The contraction is encapsulated by "water", the mediation between contraction and expansion happens in "air" and the solidification of all these into physical experience is the "earth". To summarize the function of the elements: - The first element is the fire (Tejas) principle, and it represents the “spark” or light. It is also the principle responsible for expansion, extension and transformation (including from one state of matter itself into another). It is the Yang, and masculine principle of the universe. - The second element is the water (Jal) principle and it represents contraction and also darkness. Interestingly, it is also life giving and nourishing (just like how the universe itself seems to have emerged out of darkness, as life emerged out of water!). It is the Yin and feminine principle of the universe. - The third element is the air (Vayu) principle, and it represents the mediation between the fire and water element, making life itself, possible. Indeed, without the atmosphere manifestation here on earth, life becomes impossible. - The fourth element is the earth (Bhumi) principle and it represents the “solidity”, and “material stuff”. It is what is responsible for distance, weight, measure etc. And is also the representation of all three other elements in action, on our world, including the formation of our own physical bodies. It can be said that our life itself is identified most with this element of labor, the realm of the Muladhara Chakra or Malkuth. The fifth element, Akasha or "space" is the element out of which all other elements emerge out of. It is consciousness itself! 
(Everything material exists in space, whereas everything you experience exists inside consciousness) Download this FREE as book - The Five elements: Clockwork of The Universe You will also receive articles guides and offers on Tantra, Magick, Esotericism and related topics to your email. Of course, you get to unsubscribe anytime! When in occult circles you hear talks about "traversing the space of Akasha", it is simply the traversing of the "space of consciousness". The world that you are experiencing now can be said to be one among many "windows" or "channels" within the vast, infinite "space" of consciousness or Akasha. Indeed, Jung's exploration and insights from traversing this space is quite insightful, as can be found in his works like Synchronicity, Man and his symbols, the Aeon etc. Modern psychology however, seems to have restricted this exploration to reliving memories in the subconscious (which seems to be connected to the collective unconscious, which in itself seems to be connected to the Akasha space) for "therapeutic" purposes, instead of also using it as a means for exploring the nature of the mind and therefore the universe itself, which can be said to be made up of "mind stuff". The Elements, Your Psyche (Mind) and Your Nature Just like every "happening" within space, the clockwork of the "happenings" of your mind (thoughts, feelings emotions etc.) can also be said to be made up fundamentally of the same elements — you have expansive thoughts, feelings and emotions; and you have contracting emotions, feelings and thoughts. You also have the mediation between these and the tendency to want to materialize your thoughts, feelings and emotions! All of these, happens in the "space" of your consciousness or Akasha itself. Your psychic structure itself hence, is made up fundamentally of the same elements — fire, water, air and earth. Our language also expresses this. When we think about someone ambitious and driven (expansive nature), we often associate them with "fiery" characteristics. When we think about someone calm and withdrawn, we often associate them with the word "cool". When someone displays elusive characteristics, we often associate them with "airiness". Someone "down to earth" is usually someone who is accepting of their own materializations with no pretentiousness. The law of analogy and how the physical world can affect your mental world, described in terms of elements The separation between the "objective" and "subjective" can be gleaned to be largely illusory upon careful contemplation and meditation practice. What you think about the "self" is also a happening in consciousness. "Selfing" is a more accurate description, and it is also happening within the space of consciousness, just like every other happening. In reality, there is only consciousness and its contents. This is not even a grandiose occult claim, but something that is self evident, and an axiom — one need only carefully contemplate. In fractal cosmology, the universe can be said to be fractal in nature across really large scales. Therefore, when there is "fire" or expansiveness in the objective to where the subject is tuned into, it could be said that there is also "expansiveness" in the subjective. The nature of the contents in the subjective (microcosm) can be said to be fractally aligned with anything that happens largely in the objective (macrocosm). 
This might explain how someone who worships the sun or fire manifests in themselves the characteristics of the sun or fire — the fiery and expansive nature of the consciousness over time. There is a fractal alignment of the nature of the psyche with whatever it is exposed to, in the objective! Astrology, Temporal Elements and the Fundamental elements Just as there are chemical elements of material that exists in physical space that make up physical configurations of matter in space (hydrogen, helium atoms etc), it can be said there are temporal elements that make up the configuration of the time you are in. Indeed, according to Einstein's theory of relativity, time and space are but one — time-space, so it would be no surprise that the characteristics of how they emanate and express themselves would be similar. This can be said to be the mechanism behind the I-ching, Norse runes, the Indian art of Vastu or any sort of Geomancy. By picking up a particular configuration of the I-ching hexagram for example (in the microcosm), you get the larger configuration of that particular time (in the macrocosm). The mechanism behind astrology can be said to be the same — with each slice of time, there is a configuration. The configuration of the universe at any given time, can be said to be the permutation of the four elements of Fire,Water, Air and Earth at that given time. When you are born, there can be said to be a particular permutation of the elements within the Akasha or consciousness. This can be likened to the eternal dance of "Shakti" whereby, if you take the snapshot of each given slice of time, you will be able to catch her in a particular "stance" of her dance, where certain elements are dominant and others are dormant. The snapshot of your astrological profile of the macrocosm captures this, and thereby fractally, the nature of your own psyche at that given time. Your body's function as a combination of the fundamental elements Just as your psyche is configured of the combination of elements, it can be gleaned upon careful contemplation, that the state of the body too is configured of the same four elements — expansive aggressive (fire), cooled down or submissive (water), a dissonance between aggression and cool or the intellect that is trying to make a decision between the dissonance (air) or laborious/gluttonous (earth) . Your psyche and emotions have a direct effect on the state of your body and vice versa. They are not mutually exclusive, and can be said to be the parcel of the same fractal pattern. Therefore upon extrapolation, it is easy to see that whatever happens largely in the macrocosm of the universe, manifests itself in the psyche, and therefore the body's function. If a fiery masculine stance of the dance of Shakti then, is what perhaps will manifest in various ways in the universe — forest fires that happen largely in the world coupled with a right wing alignment of politics, coupled with aggressive tendencies and a social transformation, for example. Diet, Ayurveda and Chinese (Taoist) medicine Diet influences the manner in which your body is composed. "You are what you eat", after all. This idea not only affects your body, but also your psyche structure. Indeed, modern science confirms how much gut microbia can affect the state of the mind. This idea can also be applied inversely, if you take into account the fractal nature of the universe that we experience, as we discussed. 
From this tangent of thought, we can extrapolate the mechanism behind how Ayurveda and Chinese (Taoist) medicine functions. They both rely on prescriptions based on the astrological profile of the person at hand. Therefore, it can perhaps be said that if one is able to capture the instance of the stance of the dance of "Shakti" or the contents of consciousness or Akasha (in both time and space) macrocosmically when one is born and contrast it with the pattern at the present instance), one is able to capture the nature of the psyche microcosmically, and therefore fractally, the arrangement in the physical body itself — in terms of the elements (as opposed to gene code, which would be the material science’s way of explaining) — in the present. Ayurvedic and Taoist remedies are then prescribed based on the macrocosmic profile of the fractal pattern of the contents of consciousness/universe/Shakti for the individual, microcosmically for the individual. This is also often accompanied by various Mantras to "balance" the nature of the elements to reach closer to the state of "tattva purification". "Tattva purification" can be said to be a means by which the psyche, and therefore the body is more aligned to equilibrium, or the void, or the space/Akasha/consciousness. Keep in mind that the Akasha or consciousness (Shiva) itself is patternless and contains all the patterns and therefore the elements within it. Along with remedies, one is also restricted from consuming certain types of foods based on the pattern that is deciphered to accelerate this process. Anything that you consume can be said to have a specific "pattern" of the fractal, which may either aggravate more patterns or neutralize existing patterns (towards the void or consciousness or space or equilibrium). It can perhaps be likened to the game of tetris, where the objective is to neutralize the patterns toward the state of the empty space! Hexing, Blessing and the Elements The idea behind Hexing/curses and blessings can also be seen as related to the equilibrium of elements back into the Akasha (or consciousness). Hexing/curses, can be essentially seen as the aggravation of patterns of the elements, and therefore the frequency of their vibrations as opposed to their neutralization toward equilibrium. Blessings then, are the neutralization of the elements back into the Akasha/space/consciousness. When one contemplates about curses that afflict a certain area or family, it is important to again, consider the fractal nature of the universe — your psyche is fractally (in terms of the elements, but also genetically) aligned with the pattern in a way that your area/family functions as a membrane of the larger macrocosm pattern and "you" as an extension of that membrane. Same can be said for blessings! Most curses or blessings involve the manipulation of the earth element of the individual or family, thereby affecting body and the psychic pattern of the individual or the collective gene. It is easy to see how this is directly correlated to the pattern or dance of Shakti of the macrocosm at large and how various divination tools like astrology and geomancy can be applied here, to understand the permutation of the elements. 
With this knowledge at hand, various neutralization and enhancing techniques can be applied for curses and blessings — by the use of Mantras, Ayurvedic remedies or alignment of one's psyche with a particular deity that embodies an aspect of macrocosmic pattern to either aggravate the pattern of “Shakti” more into a maddening dance or toward are more graceful, or ultimately even toward her going toward the complete relaxation or Shavasana! The Pentagram and “God” "God" cannot be defined or explained because it cannot be contained within thoughts and the mechanism of the mind. It is essentially the void, or the as the buddha called it, "Shunyata". It is important to remember that when contemplating about the idea about the void, one must not mistake it with "blackness" or "emptiness", for they too are ideas contained within the mind. If "God" is said to be omniscient, omnipotent, omnipresent omniscient, omnibenevolent, all of these ideas itself must contain within it, including the idea of "it" -- If one contemplates carefully, these (and everything else) are but ideas in consciousness. The pentagram encapsulates this idea in terms of elements perfectly, and also in terms of the ideas discussed earlier — consciousness void is at the center, and the 5 elements at various points of the pentagram. The two types of pentagrams differ only in the pointing of the Akasha principle. The upward pointing pentagram can be signified as a "going outward" to space whereas the downward pointing pentagram can be signified as "withdrawing inward" to inward space. Various configurations and patterns of the elements in the pentagram means moving away from the center and "moving towards God" is the idea of moving toward the center of the pentagram! Physically, the universe itself is in a quest for equilibrium, towards the void where there is not even a ripple of it's contents. By law of analogy, this is perhaps the quest in our subjective experience also, where the elements merge in back into the Akasha.
<urn:uuid:b7090277-d64f-45f1-8d87-875da0a39ce4>
CC-MAIN-2022-33
https://tantricpagans.com/fundamental-elements/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00205.warc.gz
en
0.939442
3,346
2.703125
3
This article covers the ClO⁻ Lewis structure and its related properties, such as the bond angle, Lewis dot structure, acidic character, and polarity or non-polarity. The ClO⁻ ion is commonly known as the hypochlorite ion, also called a chlorine oxoanion; it is a monovalent inorganic anion with the chemical formula ClO⁻. It can combine with a variety of cations to produce hypochlorite salts. Because these salts are mostly unstable in their pure form, they are usually found in aqueous solution.

How to draw the ClO⁻ Lewis structure?

A Lewis structure is a graphical depiction of a molecule in which valence electrons are assigned around the atoms. By learning how to draw the Lewis structure, we can work out what type of bonding occurs and how many bonds are formed in the molecule. ClO⁻ is a very simple case because it contains only two atoms and a single negative charge.

Steps in drawing the ClO⁻ Lewis structure:

Step 1: Calculate the total number of valence electrons in the system. In the ClO⁻ Lewis structure, chlorine has seven valence electrons and oxygen has six valence electrons in its outermost shell. In addition, the negative charge on the ion adds one more valence electron. Therefore, there are a total of 14 valence electrons in the ClO⁻ ion.

Step 2: Consider the central atom. As a diatomic species, ClO⁻ has no central atom, and there is no need for one, because only one bond forms. We can compare the electronegativity values of chlorine and oxygen to predict the Lewis structure; since chlorine and oxygen have similar electronegativity values, we cannot single out a central atom, so we simply place the two atoms adjacent to each other.

Step 3: Finish the octet. The ClO⁻ Lewis structure consists of two atoms, one oxygen and one chlorine, so it is a diatomic ion, and both atoms need eight valence electrons to complete their octets. This shows that one ClO⁻ Lewis structure requires 14 valence electrons to complete the octets.

Step 4: Make the structure. In the ClO⁻ ion, chlorine (Cl) and oxygen (O) are arranged adjacent to one another. Out of the 14 valence electrons, 2 electrons form a single bond between Cl and O.

Step 5: Assign the valence electrons to the atoms. Assign the remaining valence electrons to each atom, complete the octets, and the result is a stable Lewis structure.

ClO⁻ Lewis structure resonance

When a molecule or ion has more than one valid Lewis structure, resonance occurs. The resonance hybrid is the weighted average of these resonance structures and determines the overall electronic structure of the molecule or ion. The hypochlorite ion (ClO⁻) on its own does not show resonance.

ClO⁻ Lewis structure shape

The ClO⁻ Lewis structure is formed by two atoms, making it a diatomic species. A diatomic species has a linear shape and a simple spatial arrangement: because there are only two atoms, both lie on a straight line. The ClO⁻ Lewis structure contains a single bond between Cl and O and three lone pairs on each atom, so the lone pair-lone pair and lone pair-bond pair repulsions are balanced on both sides. The shape of the ClO⁻ Lewis structure is therefore linear.

ClO⁻ Lewis structure formal charge

A formal charge arises from how the electrons are apportioned between the two atoms that share a bond. Formal charge = number
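As a quick check on Step 1, here is a small Python sketch that totals the valence electrons for the hypochlorite ion from the group valence of each atom plus the extra electron contributed by the 1− charge. The valence-electron table is limited to the atoms needed here and is an illustrative assumption, not a general-purpose periodic table.

```python
# Count valence electrons for an ion from its atoms and net charge.
VALENCE = {"Cl": 7, "O": 6}  # outer-shell electrons for the atoms used here

def total_valence_electrons(atoms, charge=0):
    # A negative charge adds electrons; a positive charge removes them.
    return sum(VALENCE[a] for a in atoms) - charge

print(total_valence_electrons(["Cl", "O"], charge=-1))  # -> 14 for ClO⁻
```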
of valence electrons − number of lone pair (non-bonding) electrons − ½ × number of bonding electrons.

The formal charge on the chlorine atom: number of valence electrons of chlorine = 7, number of lone pair electrons on chlorine = 6, number of bond pair electrons around chlorine = 2, so F.C. on Cl = 7 − 6 − 2/2 = 0. Similarly, the formal charge on the oxygen atom: number of valence electrons of oxygen = 6, number of lone pair electrons on oxygen = 6, number of bond pair electrons around oxygen = 2, so F.C. on O = 6 − 6 − 2/2 = −1. Thus the overall formal charge on the ClO⁻ ion is −1, and this is the most stable arrangement.

ClO⁻ Lewis structure angle

In the ClO⁻ ion, the molecular geometry is linear and the electron geometry is tetrahedral. The two atoms lie on a single axis, so the geometry is often quoted as a 180° angle, although strictly speaking a diatomic species has no bond angle. The ClO⁻ Lewis structure is stable, with sp3 hybridization and a formal charge of −1.

ClO⁻ Lewis structure octet rule

A molecule is said to follow the octet rule when each atom becomes stable by having eight electrons in its outermost shell. The hypochlorite ion (ClO⁻) consists of a chlorine atom and an oxygen atom with seven and six valence electrons, respectively. Chlorine is less electronegative than oxygen, and after bonding and counting the extra electron, both chlorine and oxygen end up with eight electrons around them. Hence we can say that the ClO⁻ ion obeys the octet rule.

ClO⁻ Lewis structure lone pairs

Lone pair electrons, also called non-bonding electrons, do not participate in any chemical bond, whereas bond pair electrons participate in the bonding of the molecule. In the ClO⁻ Lewis structure there are 12 lone pair electrons, three pairs on the chlorine atom and three pairs on the oxygen atom, plus a single bond between Cl and O (containing 2 electrons).

ClO⁻ valence electrons

Electrons located in the outermost shell of an atom are called valence electrons. In the ClO⁻ ion, the chlorine atom lies in the 17th group of the periodic table and oxygen in the 16th group. So the number of valence electrons in chlorine is 7 and in oxygen is 6. The total number of valence electrons in the ClO⁻ ion is 7 + 6 + 1 = 14.

ClO⁻ hybridization

When atomic orbitals of similar energy mix to generate new degenerate hybrid orbitals, the process is called hybridization. To find the hybridization of a molecule we need its steric number: steric number = number of bonded atoms + number of lone pairs on the central atom. In the ClO⁻ ion either atom could be taken as the central atom, so we consider chlorine as the central atom. Therefore, the steric number of ClO⁻ = bonded atoms attached to chlorine + lone pairs on chlorine = 1 (bonded to the oxygen atom) + 3 (lone pairs) = 4. So the hybridization of chlorine in the ClO⁻ ion is sp3, with a linear shape and tetrahedral electron geometry.

ClO⁻ solubility

The hypochlorite ion (ClO⁻) is soluble in water. It slowly decomposes in water, producing chlorine. Its compounds are mostly found as salts in aqueous solution only; most of these hypochlorites are unstable, and many exist only in water.

Is ClO⁻ ionic?

Yes, hypochlorite (ClO⁻) is an ionic species consisting of a chlorine atom and an oxygen atom, with the chemical formula ClO⁻. Its ionic nature is due to the presence of the negative charge on the ion. Its decomposition can be written as: 2 ClO⁻ → 2 Cl⁻ + O₂

Is ClO⁻ acidic or basic?

A substance that accepts protons in aqueous solution is called a base. ClO⁻ is basic due to the presence of its lone pairs; it accepts a proton when dissolved in water, forming hypochlorous acid.
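The formal-charge arithmetic above is easy to automate. The sketch below (Python, purely illustrative) applies FC = V − N − B/2 to each atom of ClO⁻ using the electron counts quoted in the text.

```python
# Formal charge: FC = valence electrons - non-bonding electrons - bonding electrons / 2
def formal_charge(valence, lone_electrons, bonding_electrons):
    return valence - lone_electrons - bonding_electrons // 2

atoms = {
    "Cl": (7, 6, 2),  # 7 valence e-, 6 lone-pair e- (3 pairs), 2 e- in the Cl-O bond
    "O":  (6, 6, 2),  # 6 valence e-, 6 lone-pair e- (3 pairs), 2 e- in the Cl-O bond
}
charges = {a: formal_charge(*v) for a, v in atoms.items()}
print(charges)                               # {'Cl': 0, 'O': -1}
print("net charge:", sum(charges.values()))  # -1, matching the overall ion charge
```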
The chemical reaction is: ClO⁻ + H2O → HClO + OH⁻. Thus, ClO⁻ is the conjugate base of HClO.

Is ClO⁻ polar or nonpolar?

A molecule is said to be polar when the distribution of charge over its atoms is unequal and it has a net dipole moment, whereas nonpolar molecules distribute charge equally over their atoms and have zero dipole moment. The ClO⁻ ion is polar because the charge is distributed unequally over the two atoms present in it; in particular, the negative charge resides mainly on one atom, which makes the ion polar. The arrangement of the chlorine and oxygen atoms is unsymmetric, so the bond dipole is not cancelled and a net dipole moment is generated. Since the net dipole moment is not zero, the ClO⁻ ion is polar.

Is ClO⁻ tetrahedral?

Yes, the hypochlorite ion (ClO⁻) has tetrahedral electron geometry. Around the Cl atom, the total number of electron domains (lone pairs plus bonding pairs) is four, and around the oxygen atom it is also four. The lone pairs are arranged in such a way that lone pair-lone pair repulsion is minimized. This leads to sp3 hybridization and tetrahedral electron geometry.

Is ClO⁻ linear?

Yes, ClO⁻ (hypochlorite) is linear, because it contains only two atoms and these are arranged adjacent to each other on a straight line. The chlorine atom carries three lone pairs, the oxygen atom also carries three lone pairs, and a single bond exists between the two. Hypochlorite is a rather unstable species with many of the characteristics of a covalent molecule. The Lewis structure explains the presence of a dipole on the ion, which makes it more available to cations. It has a linear structure as a result of the arrangement of valence electrons on both the chlorine and oxygen atoms.
<urn:uuid:6e2eb345-d67a-42ec-918e-f4e93a9c5517>
CC-MAIN-2022-33
https://lambdageeks.com/clo-lewis-structure/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00204.warc.gz
en
0.901383
2,139
4.125
4
#What is CCPA? The California Consumer Privacy Act, or CCPA, is a data regulation enacted in 2020 by the US state of California to safeguard the rights of California residents. According to NPR, the law... ... applies to any company that meets any one of three thresholds annually: It has at least $25 million in revenue, makes at least half its money by selling data or gathers information on at least 50,000 consumers. According to CNN, California residents may now... ... demand those companies disclose what data they have collected on them, and the law requires companies delete that data when users ask them to. Companies must disclose how their customers can contact them to request their data be forgotten. [Companies may], for example, list an email address to use specifically for privacy issues. More information about CCPA can also be found at California's Office of the Attorney General here. The CCPA governs the collection, storage, transfer or use of "personal information", where "personal information" is defined very broadly to include any information relating to an identified or identifiable Californian individual, or to an identified or identifiable California household. The CCPA gives Californians greater rights and control over personal information, by regulating how businesses obtain, handle, store and transfer the personal information they collect. Of particular interest under CCPA to publishers and analytics companies (like Parse.ly) are the regulations that deal with collection; sharing; use; and, deletion of personal information. Further, the regulations recognizing users' rights to access; correct or amend; delete; restrict processing of; and, port their personal information. #Key changes under the CCPA Beginning in 2017, Parse.ly underwent a number of changes to its data privacy processes in order to comply with GDPR. We'll briefly describe those changes, since they are relevant to our general handling of personal information. We'll then describe the further actions we took as part of our CCPA compliance. To comply with GDPR, Parse.ly... - Expanded rights for individuals: the right to be forgotten (aka "right of erasure") and the right to request a copy of any personal data stored in their regard (aka "data portability"). - Implemented and upgraded additional data handling protocols: ensured there was a process in place for reviewing and handling data requests, and reviewed the data security architecture across the company. - Improved recordkeeping and other compliance obligations: established process to keep detailed records on data activities and enter into written agreements with vendors that require vendors to commit to the same compliance obligations as the contracting organizations. - Implemented a data breach notification process: established process to report data breaches to data protection authorities within 72 hours of discovery, and in serious cases to the affected individuals. #Is Parse.ly a "first-party" analytics vendor? Yes. At Parse.ly, we've always taken consumer data privacy and data security seriously since we started operating a large-scale analytics service in 2010. Although Parse.ly does not rely on the EU-U.S. Privacy Shield Framework and the Swiss-U.S. 
Privacy Shield Framework as a legal basis for transfers of personal data in light of the judgment of the European Court of Justice in Case C-311/18, nonetheless we are self-certified under the Privacy Shield, which concerns transfer of data between the EU (and Iceland, Liechtenstein, Norway and Switzerland) and the US and, for as long as we are self-certified to the Privacy Shield, we will process personal data in compliance with the Privacy Shield Principles. We have also worked with several companies in Europe on the privacy requirements to be their first-party analytics vendor of choice. We avoid storing extraneous data on visitors, only instrumenting sites with collection mechanisms that enhance our first-party reporting capabilities. #What changed at Parse.ly under CCPA? Parse.ly's Information Security (Infosec) Team performed a full internal audit for compliance with CCPA. Our Information Security Team evaluated our systems and data storage to ensure CCPA readiness, following up on the exercise we had done in years prior for GDPR compliance. Whether it comes to our own internal data, data prepared and processed for use by our customers, or data collected by visitors to those websites, we now ensure that it meets the appropriate privacy standards set by CCPA. - We catalog any Personal Information: We reviewed our systems, products and services to catalog and document the sources, uses, storage and disposal of - Enhanced data integrity and security: We adopted security practices that are broadly recognized as industry standard. - Consent requirements: We audited for compliance with consent rules for any new data we capture, to ensure we continue to lawfully process personal information that is sent to us by clients, or that we collect ourselves from our own sites and services. - Providing visibility and transparency: As a "service provider" to our customers under CCPA, we must provide our customers (the "businesses") with access to effectively manage and protect their data. We are also exploring product enhancements to provide better transparency, in order to also provide all reasonable assistance to our customers to comply with their own transparency and data rights access obligations. - Revised process for data subject access requests: Under CCPA, California residents have the right to reach out to Parse.ly for information on the personal information we collect. We have revised our processes to support these rights. #Security and Privacy At Parse.ly, we've always focused on a privacy-minded implementation of analytics. GDPR and CCPA are a welcome codification of practices our engineering teams already follow. But, we used the GDPR and CCPA to ensure all the details are covered. One key aspect of this is system security. We made sure that the data we hold is kept in safe and secure hands, and that our security policies and software are up to date with industry standard best practices. As for privacy, we've always been a privacy-first company; we've long had additional privacy measures, such as limiting IP Address collection and Third-Party Cookies on customer request, even before it was mandated by any privacy agency. We allow customers to control the data they send to us: a customer's development team can send along in our tracking pixel only the minimum information necessary to do analytics properly, which makes us an attractive option already for security and privacy-conscious publishers and clients. 
Our public stance on analytics and privacy can be found in a piece of writing by our Chief Technology Officer, entitled "Analytics and Privacy Without Compromise.". #Data Protection Team To comply with GDPR, we created a Data Protection Team which is focused on engineering improvements to our systems, processes and our products to comply with the standards required. This team's mandate has been expanded to now include the CCPA. This team focused on organizational changes for handling data protection issues, including compliance with consent and other requirements for how to lawfully collect personal data; improvements to systems and processes to comply with rights of individuals to access, review, correct or delete any personal data that is processed in our systems; ensuring that our own data collection privacy disclosures and data processing agreements are revised, as necessary; and, improving disaster response procedures and notification processes for responding to potential data breaches. #Customer guidance related to CCPA and Parse.ly services All organizations processing personal information of California residents have their own separate compliance obligations. This is true for our customers as much as it for us, and our customers must look to their own advisers to guide them through these processes. Nonetheless, in relation to our customers' use of our systems and services, there are several important things our customers should be doing to meet their own CCPA compliance obligations: - Update terms of service and privacy policies: On your websites or apps, these should be updated to communicate to your own customers and other users how you are using our systems (and any other similar services). These disclosure obligations are more important than ever under the CCPA, including the important obligation to be transparent about the third parties (including us) with whom you are sharing personal information of your users, even if only in the service provider relationship. - Confirm consent requirements: As a business collecting personal information of California residents, our customers have ultimate control over the data we store and process for our customer's monitored domains and apps. Customers need to manage their visitor/user experience to make sure they have robust privacy notices and, where necessary, implement compliant consent and opt-out experiences. - Formalize data "service provider" relationship: Our customer contracts contain appropriate provisions for the personal information we store, and balance the risks and responsibilities between our customers (the "businesses") and us (the "service provider"). If you have an older offline contract with Parse.ly, we ask that you sign or update a contract with us incorporating terms to clearly establish our respective data processing roles, in compliance with the CCPA and other generally applicable privacy laws. This reflects our role as a "service provider" under CCPA (and a "data processor" under GDPR, if applicable), processing data on your behalf as the data "business" under CCPA (or "data controller" under GDPR, if applicable). This can typically be done with an addendum to our 2018 standard product terms. #Note on GDPR We have information available here about Parse.ly's compliance with the EU's General Data Proection Regulation (GDPR), the regulation related to data privacy for the European Union. 
It is often convenient to achieve legal compliance with CCPA and GDPR at the same time, since both regulations concern personal information/data of internet visitors, including disclosures, access rights, and so on. #Reach out for help Parse.ly considers it a core operational responsibility to ensure first-party analytics is used responsibly and within the guidelines set by CCPA and other privacy frameworks. We ask that customers reach out to their account representative if they need the direct help of our Infosec Team or our Data Protection Team.
<urn:uuid:24062581-3d09-45f3-a31c-3c9b15cf2b8e>
CC-MAIN-2022-33
https://www.parse.ly/help/integration/ccpa/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00405.warc.gz
en
0.930978
2,449
2.890625
3
What Is Bolt and Its Types?

A bolt is a mechanical fastener with a threaded shaft. Bolts are closely related to screws, which are also mechanical fasteners with threaded shafts. This type of fastener is usually inserted through two parts which have aligned holes. By some definitions, whether a thing is a bolt or a screw depends on how it is used. A bolt is inserted through parts that all have unthreaded holes, and a nut is then screwed onto the bolt to provide a clamping force and prevent axial motion. A screw may first pass through the first part via a clearance hole, but its threads engage matching threads cut into one of the parts.

Bolts are classified according to their strength using two numbers separated by a point. This grade is often stamped on the head. The point is not a decimal but acts as a separator. The first number is the ultimate tensile strength (UTS) in MPa divided by 100, and the second number is the ratio of yield strength to UTS, expressed in tenths. Common classes are 5.8, 8.8, and 10.9. For example, a grade 8.8 bolt has a UTS, at which it fails, of 800 MPa, and it will yield at 80% (640 MPa) of this value.

A bolt is a type of fastener that is used to connect two parts together. Bolts join the parts non-permanently, i.e., the parts can be separated from each other using a suitable tool. Nuts are also used on the bolt to make the fastening process more effective.

Parts of a Bolt:
The head is the topmost part of a bolt. It serves as a gripping surface for the tool. To tighten or loosen the bolt, a tool with the appropriate bit must hold the head. Most bolts have a wrench-type head. In other words, a wrenching bit is placed around the head of the bolt. However, some bolts use screw-type heads where the bit is placed in the center of the head. Whether wrench- or screw-type, all bolt heads provide a gripping surface so that the bolt can be tightened or loosened with a tool. Under the head is the shank.

The shank is the smooth part of a bolt that is devoid of threading. It is designed to prevent radial movement of the joined workpieces. Without the shank, the bolt is more likely to come loose. Some workpieces produce vibrations, while others are exposed to vibrations in the environment around them. When the joined workpiece vibrates, the bolt may loosen if it does not feature a shank. Lack of a shank means that the bolt is threaded along its full length from top to bottom (with the exception of the head). Therefore, the vibration may push the bolt backward out of the joined workpiece.

All bolts have threading. Threading is what allows a bolt to move in or out of the workpiece. However, most bolts do not have complete threading. They have a head, followed by a smooth shank, and finally threading. Depending on the type of bolt, the shank may cover a larger surface area than the threading. However, all bolts have threading. The threading of a bolt works together with the internal threading of the workpiece to join them. Bolts have external threading located on their lower portion. This allows the externally threaded bolt to be moved in or out of the internal threading of the workpiece.

Types of Bolts:
Bolts are one of the most versatile structural fasteners, available in a vast array of configurations to suit different materials and strength requirements. These fasteners differ mainly in thread specification, length, and head size, with different combinations of these characteristics resulting in bolts with different functions.
They usually require a drilled hole and a complementary nut, or mating parts tapped with matching threads, for installation, and, unlike screws, they are generally not tapered. The terminology that distinguishes between bolt types is often inconsistent and incorrectly applied to screws, so it is best to select your bolt based on your project specifications rather than the exact name indicated on the supplier's website.
- Carriage Bolt:- Carriage bolts feature a domed or countersunk head with a square underside that prevents the bolt from turning after installation. They are often used with wood and masonry.
- Flange Bolt:- The flange bolt is a special type of hex head bolt that has an integrated flange that acts as a washer component to distribute the load more evenly.
- Hull Bolts:- Similar to carriage bolts, some hull bolts have a square-shaped countersunk top. Others have a domed design. These bolts are ideal for heavy-duty applications such as industrial machinery.
- Hex Head Bolt / Hex Head Cap Screw:- Characterized by their hexagonal head shape, hex head bolts are a wide range of bolts requiring installation with a wrench. They are available in many lengths and threading varieties. The hex head cap screw has a tight tolerance and is the most common hex bolt variety.
- Square Head Bolts:- Like hex head bolts, square head bolts are defined by their head shape — that is, square. This head design facilitates tool gripping while allowing easy installation.
- Socket Head Cap Screw / Allen Bolt:- The socket head cap screw has a flat chamfered top surface with smooth or knurled cylindrical sides. Forged, heat-treated examples made from special alloy steels are high-strength fasteners intended for the most demanding mechanical applications.
- Additional bolt types:- Other bolt types that we can custom manufacture include anchor, bent (e.g., eye, hook, J, and U), countersunk, lag, and T-handle bolts.

Types of Nuts:
Nuts are available in a variety of shapes, sizes, materials, and thread patterns. Although your nut selection is somewhat constrained by your choice of bolt – especially in terms of size and threading – you should still choose the size and material of nut that best suits your application.
- Coupling Nut:- A coupling nut is a long, cylindrical nut that connects two male threads. These components can be used to add length to an installation.
- Flange Nuts:- Similar to flange bolts, flange nuts have a round flange that acts as an external washer and allows for greater load distribution.
- Hex Nuts:- Hex nuts are hexagon-shaped. This nut is extremely versatile but requires a wrench for installation. The types of hex nut we supply include finished hex, semi-finished hex, hex flange, hex jam, hex, and slotted hex.
- Locknuts:- Locknuts are available in a range of sizes and are used to secure other nuts and prevent them from loosening. Types of locknuts include all-metal locknuts with top or side locking features, serrated hex flange, and nylon-insert locknuts.
- Slotted Nuts:- Slotted nuts are designed and manufactured such that they can form a locking mechanism with a cotter pin or a safety wire.
- Square nuts:- Square nuts are characterized by their square shape. This head shape increases the surface area of the fastener so that it experiences a greater amount of friction, reducing the risk of loosening.
- Wheel nuts:- Wheel nuts are a wide range of nuts used in automotive wheel applications.
- Additional nut types:- Other nut types that we can custom manufacture include hat, castle, conical, cap, thumb, and wing nuts.

Types of Washers:
Washers are disc-shaped components that provide increased control of locking and friction when used with other fasteners. These fasteners may include teeth, indentations, and other unique structural mechanisms for use in more specific applications. In general, they perform a variety of tasks, including preventing loosening of the fastener assembly, protecting the surface under a fastener, and evenly distributing pressure during installation and use. Compared to nuts and bolts, far fewer washer types are available. However, among the washer types there are significant differences. Like nuts, washers should complement your selected bolt and suit your unique fastening application.
- Beveled washers:- Beveled washers are made with a slightly angled surface, allowing them to join materials that are not parallel to each other.
- Flat washers:- The flat washer is the most common type of washer. Flat washers provide a larger surface area for better load distribution. Different thicknesses are available for different hold strengths.
- Lock washers:- Lock washers come in many forms, such as split, toothed-ring, conical, or spring washers, each designed to prevent slippage of fasteners in demanding applications. They are commonly used in environments that experience a high degree of vibration.
- Structural washers:- Structural washers are one of the most heavy-duty washer options available. This thick fastener is designed to withstand the high load pressures of construction.

Frequently Asked Questions (FAQ)

Different Types of Bolts
Basic bolts consist of a head and a cylindrical body that has part of its length (or shaft) threaded (if it's completely threaded, it's a screw). The bolt screws into a nut that has internal threads to match the bolt's.

Types of Bolts
- Anchor Bolts
- Blind Bolts
- Carriage Bolts
- Double End Bolts
- Eye Bolts
- Flange Bolts
- Hex Bolts
- Machine Bolts and Machine Screws
- Penta-Head Bolts
- Round Head Bolts
- Shoulder Bolts
- Socket Head Bolts
- Square Head Bolts
- T-Head Bolts

Bolt Head Types
- Flat Bolt Head: A countersunk head with a flat top.
- Oval Bolt Head: A countersunk head with a rounded top.
- Pan Bolt Head: A slightly rounded head with short vertical sides.
- Truss Bolt Head: An extra-wide head with a rounded top.
- Round Bolt Head: A domed head.
- Hex Bolt Head: A hexagonal head.
- Hex Washer Bolt Head: A hexagonal head with a round washer at the bottom.
- Slotted Hex Washer Bolt Head: A hexagonal head with a built-in washer and slot.
- Socket Cap Bolt Head: A small cylindrical head driven with a socket driver.
- Button Bolt Head: A low-profile round head driven with a socket driver.
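As a side note on the strength grades described at the top of this article (e.g., 5.8, 8.8, 10.9), the arithmetic behind the notation can be written out in a few lines of Python. This is only an illustrative sketch of the grade notation, not a substitute for consulting fastener standards; the function name and parsing are assumptions made for the example.

```python
def bolt_strength_from_grade(grade: str):
    """Decode a metric bolt grade such as '8.8' or '10.9'.

    The first number times 100 is the ultimate tensile strength (UTS) in MPa;
    the second number gives the yield strength as tenths of the UTS.
    """
    first, second = grade.split(".")
    uts_mpa = int(first) * 100
    yield_mpa = uts_mpa * int(second) / 10
    return uts_mpa, yield_mpa

print(bolt_strength_from_grade("8.8"))    # (800, 640.0) -> fails near 800 MPa, yields near 640 MPa
print(bolt_strength_from_grade("10.9"))   # (1000, 900.0)
print(bolt_strength_from_grade("5.8"))    # (500, 400.0)
```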
<urn:uuid:91c0a0bc-e252-44df-9265-2fee12c64496>
CC-MAIN-2022-33
https://mechanicaljungle.com/what-is-bolt-and-its-types/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00399.warc.gz
en
0.926593
2,221
3.921875
4
The lapse rate is the rate at which an atmospheric variable, normally temperature in Earth's atmosphere, falls with altitude. Lapse rate arises from the word lapse, in the sense of a gradual fall. In dry air, the adiabatic lapse rate is 9.8 °C/km (5.4 °F per 1,000 ft). It corresponds to the vertical component of the spatial gradient of temperature. Although this concept is most often applied to the Earth's troposphere, it can be extended to any gravitationally supported parcel of gas. A formal definition from the Glossary of Meteorology is:
- The decrease of an atmospheric variable with height, the variable being temperature unless otherwise specified.
Typically, the lapse rate is the negative of the rate of temperature change with altitude change:
$$\Gamma = -\frac{dT}{dz}$$

Convection and adiabatic expansion

The temperature profile of the atmosphere is a result of an interaction between thermal conduction, thermal radiation, and natural convection. Sunlight hits the surface of the earth (land and sea) and heats them. They then heat the air above the surface. If radiation were the only way to transfer energy from the ground to space, the greenhouse effect of gases in the atmosphere would keep the ground at roughly 333 K (60 °C; 140 °F). However, when air is hot, it tends to expand, which lowers its density. Thus, hot air tends to rise and carry internal energy upward. This is the process of convection. Vertical convective motion stops when a parcel of air at a given altitude has the same density as the other air at the same elevation. When a parcel of air expands, it pushes on the air around it, doing thermodynamic work. An expansion or contraction of an air parcel without inward or outward heat transfer is an adiabatic process. Air has low thermal conductivity, and the bodies of air involved are very large, so transfer of heat by conduction is negligibly small. Also, in such expansion and contraction, intra-atmospheric radiative heat transfer is relatively slow and so negligible. Since the upward-moving and expanding parcel does work but gains no heat, it loses internal energy so that its temperature decreases. The adiabatic process for air has a characteristic temperature-pressure curve, so the process determines the lapse rate. When the air contains little water, this lapse rate is known as the dry adiabatic lapse rate: the rate of temperature decrease is 9.8 °C/km (5.4 °F per 1,000 ft) (3.0 °C/1,000 ft). The reverse occurs for a sinking parcel of air. When the lapse rate is less than the adiabatic lapse rate the atmosphere is stable and convection will not occur. Only the troposphere (up to approximately 12 kilometres (39,000 ft) of altitude) in the Earth's atmosphere undergoes convection: the stratosphere does not generally convect. However, some exceptionally energetic convection processes, such as volcanic eruption columns and overshooting tops associated with severe supercell thunderstorms, may locally and temporarily inject convection through the tropopause and into the stratosphere. Energy transport in the atmosphere is more complex than the interaction between radiation and convection. Thermal conduction, evaporation, condensation, precipitation all influence the temperature profile, as described below.

Mathematics of the adiabatic lapse rate

These calculations use a very simple model of an atmosphere, either dry or moist, within a still vertical column at equilibrium.
Dry adiabatic lapse rate

Thermodynamics defines an adiabatic process as one in which no heat enters or leaves the parcel:
$$dQ = 0,$$
so the first law of thermodynamics can be written as
$$c_p\,dT - \frac{1}{\rho}\,dp = 0.$$
Also, since the density obeys the ideal gas law, $p = \rho R_{sd} T$, and hydrostatic balance gives $dp = -\rho g\,dz$, we can show that:
$$\Gamma_d = -\frac{dT}{dz} = \frac{g}{c_p} \approx 9.8\ \mathrm{{}^\circ C/km},$$
where $c_p$ is the specific heat at constant pressure.

Moist adiabatic lapse rate

The presence of water within the atmosphere (usually the troposphere) complicates the process of convection. Water vapor contains latent heat of vaporization. As a parcel of air rises and cools, it eventually becomes saturated; that is, the vapor pressure of water in equilibrium with liquid water has decreased (as temperature has decreased) to the point where it is equal to the actual vapor pressure of water. With further decrease in temperature the water vapor in excess of the equilibrium amount condenses, forming cloud, and releasing heat (latent heat of condensation). Before saturation, the rising air follows the dry adiabatic lapse rate. After saturation, the rising air follows the moist adiabatic lapse rate. The release of latent heat is an important source of energy in the development of thunderstorms.

While the dry adiabatic lapse rate is a constant 9.8 °C/km (5.4 °F per 1,000 ft, 3 °C/1,000 ft), the moist adiabatic lapse rate varies strongly with temperature. A typical value is around 5 °C/km (9 °F/km, 2.7 °F/1,000 ft, 1.5 °C/1,000 ft). The formula for the moist adiabatic lapse rate is given by:
$$\Gamma_w = g\,\frac{1 + \dfrac{H_v\,r}{R_{sd}\,T}}{c_{pd} + \dfrac{H_v^2\,r\,\epsilon}{R_{sd}\,T^2}}$$
where:
- $\Gamma_w$ = wet adiabatic lapse rate, K/m
- $g$ = Earth's gravitational acceleration = 9.8076 m/s²
- $H_v$ = heat of vaporization of water = 2501000 J/kg
- $R_{sd}$ = specific gas constant of dry air = 287 J/kg·K
- $R_{sw}$ = specific gas constant of water vapour = 461.5 J/kg·K
- $\epsilon = R_{sd}/R_{sw}$ = the dimensionless ratio of the specific gas constant of dry air to the specific gas constant for water vapour = 0.622
- $e$ = the water vapour pressure of the saturated air
- $r = \epsilon e/(p - e)$ = the mixing ratio of the mass of water vapour to the mass of dry air
- $p$ = the pressure of the saturated air
- $T$ = temperature of the saturated air, K
- $c_{pd}$ = the specific heat of dry air at constant pressure = 1003.5 J/kg·K

Environmental lapse rate

The environmental lapse rate (ELR) is the rate of decrease of temperature with altitude in the stationary atmosphere at a given time and location. As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of 6.50 °C/km (3.56 °F or 1.98 °C/1,000 ft) from sea level to 11 km (36,090 ft or 6.8 mi). From 11 km up to 20 km (65,620 ft or 12.4 mi), the constant temperature is −56.5 °C (−69.7 °F), which is the lowest assumed temperature in the ISA. The standard atmosphere contains no moisture. Unlike the idealized ISA, the temperature of the actual atmosphere does not always fall at a uniform rate with height. For example, there can be an inversion layer in which the temperature increases with altitude.

Effect on weather

The varying environmental lapse rates throughout the Earth's atmosphere are of critical importance in meteorology, particularly within the troposphere. They are used to determine if the parcel of rising air will rise high enough for its water to condense to form clouds, and, having formed clouds, whether the air will continue to rise and form bigger shower clouds, and whether these clouds will get even bigger and form cumulonimbus clouds (thunder clouds). As unsaturated air rises, its temperature drops at the dry adiabatic rate. The dew point also drops (as a result of decreasing air pressure) but much more slowly, typically about −2 °C per 1,000 m.
If unsaturated air rises far enough, eventually its temperature will reach its dew point, and condensation will begin to form. This altitude is known as the lifting condensation level (LCL) when mechanical lift is present and the convective condensation level (CCL) when mechanical lift is absent, in which case, the parcel must be heated from below to its convective temperature. The cloud base will be somewhere within the layer bounded by these parameters. The difference between the dry adiabatic lapse rate and the rate at which the dew point drops is around 8 °C per 1,000 m. Given a difference in temperature and dew point readings on the ground, one can easily find the LCL by multiplying the difference by 125 m/°C. If the environmental lapse rate is less than the moist adiabatic lapse rate, the air is absolutely stable — rising air will cool faster than the surrounding air and lose buoyancy. This often happens in the early morning, when the air near the ground has cooled overnight. Cloud formation in stable air is unlikely. If the environmental lapse rate is between the moist and dry adiabatic lapse rates, the air is conditionally unstable — an unsaturated parcel of air does not have sufficient buoyancy to rise to the LCL or CCL, and it is stable to weak vertical displacements in either direction. If the parcel is saturated it is unstable and will rise to the LCL or CCL, and either be halted due to an inversion layer of convective inhibition, or if lifting continues, deep, moist convection (DMC) may ensue, as a parcel rises to the level of free convection (LFC), after which it enters the free convective layer (FCL) and usually rises to the equilibrium level (EL). If the environmental lapse rate is larger than the dry adiabatic lapse rate, it has a superadiabatic lapse rate, the air is absolutely unstable — a parcel of air will gain buoyancy as it rises both below and above the lifting condensation level or convective condensation level. This often happens in the afternoon mainly over land masses. In these conditions, the likelihood of cumulus clouds, showers or even thunderstorms is increased. Meteorologists use radiosondes to measure the environmental lapse rate and compare it to the predicted adiabatic lapse rate to forecast the likelihood that air will rise. Charts of the environmental lapse rate are known as thermodynamic diagrams, examples of which include Skew-T log-P diagrams and tephigrams. (See also Thermals). The difference in moist adiabatic lapse rate and the dry rate is the cause of foehn wind phenomenon (also known as "Chinook winds" in parts of North America). The phenomenon exists because warm moist air rises through orographic lifting up and over the top of a mountain range or large mountain. The temperature decreases with the dry adiabatic lapse rate, until it hits the dew point, where water vapor in the air begins to condense. Above that altitude, the adiabatic lapse rate decreases to the moist adiabatic lapse rate as the air continues to rise. Condensation is also commonly followed by precipitation on the top and windward sides of the mountain. As the air descends on the leeward side, it is warmed by adiabatic compression at the dry adiabatic lapse rate. Thus, the foehn wind at a certain altitude is warmer than the corresponding altitude on the windward side of the mountain range. In addition, because the air has lost much of its original water vapor content, the descending air creates an arid region on the leeward side of the mountain. 
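The quantities above lend themselves to a quick numerical check. The following Python sketch (an illustration added here, not part of the article) evaluates the dry adiabatic lapse rate g/c_p, the moist adiabatic lapse rate formula given earlier, and the roughly 125 m per °C cloud-base rule of thumb. The Tetens approximation used for saturation vapour pressure is an added assumption, not something specified in the text.

```python
import math

G = 9.8076       # gravitational acceleration, m/s^2
H_V = 2.501e6    # heat of vaporization of water, J/kg
R_SD = 287.0     # specific gas constant of dry air, J/(kg*K)
R_SW = 461.5     # specific gas constant of water vapour, J/(kg*K)
EPS = R_SD / R_SW  # dimensionless ratio, ~0.622
C_PD = 1003.5    # specific heat of dry air at constant pressure, J/(kg*K)

def saturation_vapour_pressure(T):
    """Saturation vapour pressure in Pa for temperature T in K (Tetens approximation; assumed)."""
    t_c = T - 273.15
    return 610.78 * math.exp(17.27 * t_c / (t_c + 237.3))

def dry_lapse_rate():
    """Dry adiabatic lapse rate in K/km (= g / c_p)."""
    return G / C_PD * 1000.0

def moist_lapse_rate(T, p):
    """Moist (saturated) adiabatic lapse rate in K/km at temperature T (K) and pressure p (Pa)."""
    e = saturation_vapour_pressure(T)
    r = EPS * e / (p - e)  # mixing ratio of water vapour to dry air
    numerator = 1.0 + (H_V * r) / (R_SD * T)
    denominator = C_PD + (H_V ** 2 * r * EPS) / (R_SD * T ** 2)
    return G * numerator / denominator * 1000.0

def lcl_height(temperature_c, dew_point_c):
    """Approximate lifting condensation level in metres, using the ~125 m per degree C rule."""
    return max(temperature_c - dew_point_c, 0.0) * 125.0

print(dry_lapse_rate())                # ~9.8 K/km
print(moist_lapse_rate(293.15, 1e5))   # ~4.2 K/km for saturated air at 20 degC and 1000 hPa
print(lcl_height(30.0, 18.0))          # 1500 m: cloud base for a 12 degC temperature/dew-point spread
```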
- Adiabatic process - Atmospheric thermodynamics - Fluid dynamics - Foehn wind - Lapse rate climate feedback - Scale height - Jacobson, Mark Zachary (2005). Fundamentals of Atmospheric Modeling (2nd ed.). Cambridge University Press. ISBN 978-0-521-83970-9. - Ahrens, C. Donald (2006). Meteorology Today (8th ed.). Brooks/Cole Publishing. ISBN 978-0-495-01162-0. - Todd S. Glickman (June 2000). Glossary of Meteorology (2nd ed.). American Meteorological Society, Boston. ISBN 978-1-878220-34-9. (Glossary of Meteorology) - Salomons, Erik M. (2001). Computational Atmospheric Acoustics (1st ed.). Kluwer Academic Publishers. ISBN 978-1-4020-0390-5. - Stull, Roland B. (2001). An Introduction to Boundary Layer Meteorology (1st ed.). Kluwer Academic Publishers. ISBN 978-90-277-2769-5. - Richard M. Goody; James C.G. Walker (1972). "Atmospheric Temperatures" (PDF). Atmospheres. Prentice-Hall. p. 60. Archived from the original (PDF) on 2016-06-03. - Danielson, Levin, and Abrams, Meteorology, McGraw Hill, 2003 - Richard M. Goody; James C.G. Walker (1972). "Atmospheric Temperatures" (PDF). Atmospheres. Prentice-Hall. p. 63. Archived from the original (PDF) on 2016-06-03. - "The stratosphere: overview". UCAR. Retrieved 2016-05-02. - Landau and Lifshitz, Fluid Mechanics, Pergamon, 1979 - Kittel; Kroemer (1980). "6". Thermal Physics. W. H. Freeman. p. 179. ISBN 978-0-7167-1088-2. problem 11 - "Dry Adiabatic Lapse Rate". tpub.com. Archived from the original on 2016-06-03. Retrieved 2016-05-02. - Minder, JR; Mote, PW; Lundquist, JD (2010). "Surface temperature lapse rates over complex terrain: Lessons from the Cascade Mountains". J. Geophys. Res. 115 (D14): D14122. Bibcode:2010JGRD..11514122M. doi:10.1029/2009JD013493. - "Saturation adiabatic lapse rate". Glossary. American Meteorological Society. - "Mixing ratio". Glossary. American Meteorological Society. - Manual of the ICAO Standard Atmosphere (extended to 80 kilometres (262 500 feet)) (Third ed.). International Civil Aviation Organization. 1993. ISBN 978-92-9194-004-2. Doc 7488-CD. - Whiteman, C. David (2000). Mountain Meteorology: Fundamentals and Applications. Oxford University Press. ISBN 978-0-19-513271-7. - Beychok, Milton R. (2005). Fundamentals Of Stack Gas Dispersion (4th ed.). author-published. ISBN 978-0-9644588-0-2. www.air-dispersion.com - R. R. Rogers and M. K. Yau (1989). Short Course in Cloud Physics (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3215-7.
<urn:uuid:74fd1daf-c9da-403b-88ef-a2064182661c>
CC-MAIN-2022-33
https://en.wikipedia.org/wiki/Lapse_rate
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573630.12/warc/CC-MAIN-20220819070211-20220819100211-00602.warc.gz
en
0.836442
3,232
3.921875
4
In today's society, more and more inappropriate material is becoming acceptable. Children are becoming more comfortable with bad language, corrupt movies, and offensive books as they are exposed to this material more frequently. The age at which they begin to learn about violence, drugs, and sex is lower than ever before. Today's generation seems to be more experienced and knowledgeable about these shockingly crude things than most adults! Parents cannot stop this maturing altogether; however, they can slow it down by monitoring their children. It is a parent's right to know what their child is learning, in case it goes against their family's views. A parent should be completely comfortable with what their child is being taught in school. I Know Why the Caged Bird Sings is a book that most parents do not feel comfortable with. Due to many inappropriate scenes and language, this book has been frequently challenged by parents and authorities, being the third most challenged book of the 1980s and 1990s (Baldassarro). These shocking passages could offend and hurt some children; however, the book does have some redeeming qualities – if the reader is mature enough to appreciate and understand them. Therefore, I Know Why the Caged Bird Sings should be allowed in libraries for those few who can really appreciate its merit, but should be restricted because it is not appropriate for all children. I Know Why the Caged Bird Sings has a legal history as it is frequently challenged and therefore should be restricted. The book's graphic depiction of childhood rape, racism, and sexuality has caused it to be challenged or banned in many schools and libraries. I Know Why the Caged Bird Sings has had thirty-nine public challenges or bans since 1983 (Baldassarro). For example, in Kansas parents were uncomfortable with the book and attempted to ban it based on the "vulgar language, sexual explicitness, and violent imagery that is gratuitously employed" (Baldassarro). It was challenged for being on a Maryland high school reading list in 2001 because of its sexual content and foul language; it was banned for language and being too explicit in the description of rape and other sexual abuse in 2002; it was challenged in 2003 as required reading in Montana due to sexual exploration by teenagers, rape, and homosexuality; and finally, it was challenged in Virginia school libraries by the group Parents Against Bad Books in Schools for "profanity and descriptions of drug abuse, sexually explicit conduct and torture." 2005 resulted in a banning due to racism, homosexuality, sexual content, offensive language, and being unsuitable for the age group (Baldassarro). With so many challenges and bans, one can see how controversial I Know Why the Caged Bird Sings is. This book has been challenged for years, always offending parents as they find the material unacceptable. People usually only take the time to challenge a book if they feel strongly about its content. Going to court takes a lot of time and effort, and obviously these parents are very uncomfortable with their children being exposed to this material. Therefore, due to its history in court, I Know Why the Caged Bird Sings is not appropriate for children and should be restricted in schools. I Know Why the Caged Bird Sings should be restricted because it contains many inappropriate scenes and a lot of crude language.
For example, the main character, Maya, is referred to as a “pretentious little bitch” in one of the opening scenes. There are many other shocking words and phrases used throughout the book such as “nigger”, “shit”, “sex”, “titties”, “pubes”, “whore”, “hell”, “pervert”, “queer”, and “vagina” (PABBIS). At times, these words are unnecessary and take away from the overall merit of the book. Even if children in high school hear these things everyday from their peers, it is inappropriate for the students to hear them in a classroom setting. These words and phrases can make some kids uncomfortable and distract from their learning (Boudreau). In addition to the language, there are shocking scenes in this book as well. For example, the main character gets molested by her stepfather at eight years old, and vividly describes the experience: I awoke to a pressure, a strange feeling on my left leg… it was his ‘thing’ on my leg. He said, ‘Just stay right there, I ain’t gonna hurt you.’ I wasn’t afraid I knew that people did ‘it’ and they used their ‘things’ to accomplish the deed… Mr. F. put his hand between my legs. He threw back the blankets and his ‘thing’ stood up like a brown ear of corn. He took my hand and said ‘Feel it.’ It was mushy and squirmy like the inside of a freshly killed chicken. He slowly dragged me on top of his chest. His right hand began moving so fast and his heart was beating so hard that I was afraid he would die. Finally he was quiet, and then came the nice part. He held me softly.. Then he rolled over, leaving me in a wet place and stood up…he said, ‘do you love your [brother]?… If you ever tell anyone what we did, I’ll have to kill [him].’ (Angelou 72) This scene is absolutely inappropriate. It goes into too much detail and can even make adults uncomfortable. And worse, there are more scenes like this, including another rape, a murder, and prostitution. Children should not be exposed to this type of behavior unless they are mature enough to handle it and most children are not (Boudreau). With such explicit material in I Know Why the Caged Bird Sings, it is not appropriate to be read in a school setting. Students should not be forced to read this book in a classroom because it could offend and hurt some people with already low self esteem. Low self esteem is a very serious issue facing the majority of today’s teenagers.There are problems with depression, anorexia, and low self-esteem as teenagers desperately hope to look like someone else, or have what others have (Brothers). In I Know Why the Caged Bird Sings, the main character, Maya, always hates herself as well. For example, she “longed for whiteness: white skin, blonde hair, decent clothes, and simple recognition” (Fox-Genovese 37). Maya always hoped for what she could not have, never being content with what she was given in life. This is not a good example for teenagers in this day-and-age to be looking up to or reading about. This shows kids that not accepting themselves is okay. Also, although this book is written to show the racism of the time period, it offends people of different colors. They feel bad about themselves as Maya always believed she had “the wrong hair, and the wrong legs, but also the wrong face. She was the wrong color” (Smith 51). This phrase has a very negative connotation, using the word “wrong” to describe a skin color. This hurts kids of color who already struggle with their race and their own self-image. Also, throughout I Know Why the Caged Bird Sings the word “nigger” is used very often. 
Nigger was an informal slang word used by slave owners in reference to blacks. It derived from the word "negro." Slave owners used the word to refer to their slaves so that they did not have to dignify them with a real name. It is considered insulting to black people because it is a symbol of the way they used to be treated and it can "signify that they are undeserving of a birth-given name, simply because their skin is dark" (Barns). The frequent use of this word in I Know Why the Caged Bird Sings can still seriously offend someone of color. This book can definitely hurt or offend people with already low self-esteem and therefore should not be read in schools. Despite this bad material, I Know Why the Caged Bird Sings has some redeeming qualities for the mature reader. Because this book is an autobiography, it is more relatable: the events actually happened. Angelou wrote this book to "probe her identity, to stop lying to herself to cover her fear. She turns to her pen to atone for past falsities and to acknowledge the truth about herself" (Fox-Genovese 37). Angelou was brave enough to share her own story with the world, and a mature reader could recognize and appreciate this. The fact that it is an autobiography creates a stronger effect, as the reader can picture the story actually happening in real life (Didion 34). Since Angelou lived through it, this book is a rare piece of social history of the time and a personal look into the lives of all African Americans when "they were forced to face the continuation of slave mentality and racism" (Bloom 16). But, as it adds to the story for those mature readers, it also can make the book less fit for reading. If the reader is already uncomfortable with the storyline, the fact that it actually happened can unsettle the reader even more, but if the reader can handle it, the fact that it is an autobiography adds merit. The way that Angelou "introduces herself as Maya, a 'tender-hearted' child, allows her story to range in an extraordinary fashion along the field of human emotion," allowing the mature reader to connect with the characters more easily (Kelly 24). Ernece B. Kelly recognizes that this book may not have excellent syntax, but that it makes up for the lack thereof with "the insight she offers into the effects of the social conditioning on the lifestyle and self-concept of a black child" growing up in the South in the 1930s (24). Despite its inappropriate content, I Know Why the Caged Bird Sings definitely has some literary merit, giving the reader hands-on knowledge of what truly happened during that time period. But the reader would have to be mature enough to look past the inappropriate material to truly appreciate the novel. For a sophisticated reader, I Know Why the Caged Bird Sings is full of redeeming qualities. I Know Why the Caged Bird Sings can offer some insight and knowledge for steady readers, but can offend and hurt others who are not ready for it. Therefore, it should be allowed in libraries, for the few who will understand and appreciate its input, but it should not be on a required or suggested reading list. This society attempts to "turn a blind eye to actual events which it deems too troubling to admit to, let alone deal with" (Baldassarro). This book is about real situations that actually affected real people and real lives. By banning this book altogether, schools would be covering up the truth and pretending it never happened.
Therefore, the book must be available in the library to any student who is interested in reading it on their own time. Schools, however, cannot require I Know Why the Caged Bird Sings as a class reading assignment. Despite the literary merit, there are too many students who are not ready to overlook its shocking language and detailed scenes. It should be up to the individual student and their parents whether or not they are ready to read and understand this book. A teacher can never assume that a student can handle such a book, and by assigning this book, a teacher is assuming that all their students are prepared for the inappropriate material, which is usually not the case. Schools must find middle ground, being careful not to offend anyone. Therefore, to make the book available to those who will appreciate it, I Know Why the Caged Bird Sings will be in the library, but to protect those who are not ready, the book will not be assigned in a classroom. I Know Why the Caged Bird Sings should be available in libraries for those few who can appreciate its merit, but should not be assigned because it is very inappropriate. This book has been challenged almost forty times by passionate parents. It contains crude language and horrid scenes that are not appropriate for children to be reading. This book is a bad example for teenagers with an already bad self-image, as the main character struggles with self-esteem as well. Despite these drawbacks, I Know Why the Caged Bird Sings does have some redeeming qualities. Because it is an autobiography, the reader has a better insight into her life during this time period. Because this book can teach some mature readers who are willing to look past the shocking material, I Know Why the Caged Bird Sings should be available in libraries. But because of the offensive, inappropriate material, this book cannot be read in classrooms. This compromise will make parents more comfortable as they can control what their child is learning. This control can be important in today's society as children are becoming more and more accepting of inappropriate material.
<urn:uuid:b2c11711-7067-4c83-8f4b-9e59b462ad47>
CC-MAIN-2022-33
https://studymoose.com/i-know-why-the-caged-bird-sings-inappropriate-tool-for-school-essay
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571536.89/warc/CC-MAIN-20220811224716-20220812014716-00005.warc.gz
en
0.967021
2,827
2.796875
3
|What is HTML?|
HTML stands for hypertext mark-up language. Hyper is the opposite of linear. Old-fashioned computer programs were necessarily linear - that is, they had a specific order. But with a hyper language such as HTML, the user can go anywhere on the web page at any time. Text is just what you're looking at now - English characters used to make up ordinary words. Mark-up is what is done to the text to change its appearance. For instance, marking up your text with <b> before it and </b> after it will put that text in bold. Language is just that. HTML is the language that computers read in order to understand web pages. HTML stands for Hyper Text Markup Language. HTML is not a programming language; it is a markup language.
|Why do we place all external CSS files within the head tag?|
Moving style sheets to the document HEAD makes pages appear to load faster. This is because putting style sheets in the HEAD allows the page to render progressively.
|What is the simplest HTML page?|
<HTML> <HEAD> <TITLE>Welcome!</TITLE> </HEAD> <BODY> Hello world! </BODY> </HTML>
|What is an HTML tag?|
HTML markup tags are usually called HTML tags.
|What is the file extension of HTML?|
The HTML file extension is .htm or .html
|What is a hypertext link and how do you create it?|
A hypertext link is a special tag that links one page to another page or resource. If you click the link, the browser jumps to the link's destination. The anchor element is used to define the start and/or destination of a hypertext link. A hyperlink (or link) is a word, group of words, or image that you can click on to jump to a new document or a new section within the current document. Example: <a href="http://www.webiwip.net/">webiwip</a>
|How can I include comments in HTML?|
An HTML comment begins with <!-- and ends with -->. For example: <!-- This is a comment -->
|Can I nest tables within tables?|
Yes, a table can be embedded inside a cell in another table. Here's a simple example: <table border='1'> <tr> <td>this is the first cell of the outer table</td> <td>this is the second cell of the outer table, with the inner table embedded in it <table border='1'> <tr> <td>this is the first cell of the inner table</td> <td>this is the second cell of the inner table</td> </tr> </table> </td> </tr> </table>
|What is the HTML meta element?|
The <meta> tag provides metadata about the HTML document. Metadata will not be displayed on the page.
|What are HTML lists and the HTML code for list items?|
HTML lists may be ordered, unordered, or definition lists: <ul> <li>Coffee</li> <li>Milk</li> </ul> <ol> <li>Coffee</li> <li>Milk</li> </ol> <dl> <dt>Coffee</dt> <dd>- black hot drink</dd> <dt>Milk</dt> <dd>- white cold drink</dd> </dl>
|What is an HTML form?|
HTML forms are used to pass data to a server. A form can contain input elements like text fields, checkboxes, radio buttons, submit buttons and more. Example: <form action="./#" method="get" name="input">Username: <input name="user" type="text" /><br /> <input type="submit" value="Submit" /></form>
|Can I have two or more actions in the same form?|
No. A form must have exactly one action. However, the server-side program that processes your form submissions can perform any number of tasks.
|What is a DOCTYPE?|
According to HTML standards, each HTML document begins with a DOCTYPE declaration that specifies which version of HTML the document uses. Today, many browsers use the document's DOCTYPE declaration to determine whether to use a stricter, more standards-oriented layout mode.
|What is the difference between the HTML form methods GET and POST?|
In terms of functionality, GET and POST are similar. The difference is that with GET the submitted information is visible to the user, because it is appended to the URL, while with POST the information is not shown to the user. A GET request is also limited in length (to roughly 256 characters), whereas POST is not restricted in this way. GET can carry only text, since the data is sent as a string appended to the URL, but POST can carry text or binary data. GET is the default method for any form; if you need to use the POST method, you have to change the value of the form's method attribute to "post".
|Why does an HTML page look good on one browser, but not on another?|
There are slight differences between browsers, such as Netscape Navigator and Microsoft Internet Explorer, in areas such as page margins. The only real answer is to use standard HTML tags whenever possible.
|How can I display an image on my page?|
Use an IMG element. The SRC attribute specifies the location of the image. The ALT attribute provides alternate text for those not loading images. For example: <img src='logo.gif' alt='ACME Products' />
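To make the GET/POST distinction above concrete, here is a small Python sketch (an illustration added to this page, not part of the original Q&A). It builds the same form data once as a GET request, where the parameters end up in the URL, and once as a POST request, where they travel in the request body. The example.com URL is a placeholder.

```python
from urllib.parse import urlencode
from urllib.request import Request

params = urlencode({"user": "alice", "q": "html"})

# GET: the parameters are appended to the URL and therefore visible to the user.
get_request = Request("http://example.com/search?" + params, method="GET")
print(get_request.full_url)   # http://example.com/search?user=alice&q=html

# POST: the same parameters are carried in the request body instead of the URL.
post_request = Request("http://example.com/search", data=params.encode(), method="POST")
print(post_request.full_url)  # http://example.com/search  (no query string)
print(post_request.data)      # b'user=alice&q=html'
```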
<urn:uuid:ebbe52fa-c5c9-4ab5-bcd0-c877808d6395>
CC-MAIN-2022-33
http://webiwip.com/interview-questions/hypertext-markup-language-interview-questions/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00004.warc.gz
en
0.735196
2,760
3.828125
4
Yom Kippur marks the end of an 11 week process of working through a tumultuous relationship with God. We mark this time by reading haftarot each Shabbat focusing on the time of year instead of the parashah. The first three weeks pre-empt the destruction of the Temple that we commemorate on Tishah Be-Av, the next seven console us following its destruction, and the final ones address the themes of Yom Kippur. This cycle of haftarot marks out a period of the Jewish year that we rarely think of as a cohesive unit. However, more careful analysis of the progression in these haftarot will bring to light an important facet in the relationship between God and the Jewish people in exile. The haftarot of the three weeks starting after the 17th of Tammuz and ending with Tishah Be-Av are characterized by harsh rebuke (Jeremiah 1:1-2:3; Jeremiah 2:4-28; Isaiah 1:1-27 “Hazon”). They serve to remind us that the Temple was destroyed due to our own sins and that we have only ourselves to blame. However, Tishah Be-Av itself follows these haftarot, a day of despair on which we feel that the punishment was too severe and that we were abandoned by God. Because Tishah Be-Av is a day of intense religious pain and distance from God, it is followed by seven weeks of haftarot reassuring us through the words of Isaiah that the relationship between God and the Jewish people is not over (Isaiah 40:1-26 “Nahamu”; 49:14-51:3; 54:11-55:5; 51:12-52:12; 54:1-10; 60:1-22; 61:10-63:9). Finally, the cycle concludes with two haftarot of teshuvah: “Dirshu” on Tzom Gedalyah (Isaiah 55:6-56:8) and “Shuvah Yisrael” on “Shabbat Shuvah,” the Shabbat before Yom Kippur (Hosea 14:2-10, Joel 2:11-27). This custom of thematic haftarot is recorded in the Pesikta Rabbati and is cited by Rabbeinu Tam (Tosafot Megillah 31b s.v. Rosh Hodesh) and Rambam in his Mishneh Torah (Hilkhot Tefillah 13:19). But why do the thematic haftarot have these specific calendrical boundaries? What is significant about the period beginning the week after the 17th of Tammuz and ending the week before Yom Kippur? And how did the haftarot of repentance get grouped together with the other haftarot about the destruction of the Temple? These boundaries evoke a memory for the Jewish people that can shine an important light on our own relationship with God. The Mishnah (Ta’anit 4:6) teaches that on the 17th of Tammuz the first set of tablets were destroyed by Moses in reaction to the sin of the Golden Calf. Later in the chapter (Ta’anit 4:8), the Sages reveal that it was on Yom Kippur that the second tablets were given to us, signaling full forgiveness from God. The Children of Israel abandoned God and His mitzvot when they worshipped the golden calf. God wanted to destroy them completely. Moses acted as an intermediary, initiating a process of reconciliation. He persuaded God that the covenant still stood and that the relationship was not over. God agreed not to destroy the Children of Israel entirely but was still angry with them. Moses then urged the people to repent, and he himself went to beg God for forgiveness on Mount Sinai. God called Moses to the mountain to receive the second set of tablets, also giving him the 13 middot ha-rahamim, or attributes of mercy, as a formula to ensure future forgiveness. The culmination of this process of forgiveness, reconciliation, and redemption took place on Yom Kippur, when Moses descended with the second tablets. This day was so joyful that it was called “the day of His wedding” (ibid.). 
Biblically, according to the Mishnah’s timeline, the period between the 17th of Tammuz and Yom Kippur is a period when the Jewish people healed their relationship with God and God forgave them. Thus, at a basic level, the cycle of haftarot delineated by these same calendrical boundaries reminds us to continue to take part in that same process of healing and reconciliation from Tishah Be-Av to Yom Kippur. But by looking at the texts of these haftarot, we can uncover a different perspective and a message more apropos for our current relationship with God in exile. Let us consider how Abudarham, a fourteenth century commentator on the synagogue liturgy, frames the seven haftarot of consolation. Abudarham explains that these haftarot are a dialogue between the Jewish people and God, initially mediated by Isaiah. In this framework, the first line of each haftarah represents its theme: - God tells Isaiah, nahamu nahamu ami – bring comfort to My people. - But His people reject the comfort offered by Isaiah – va-tomer tziyyon azavani Hashem – Zion says God has abandoned me. - Isaiah returns to God and says aniyah so’arah lo nuhamah – the poor and persecuted nation refuses to be comforted by me. - God responds, Anokhi Anokhi Hu menahemkhem – it is I, it is I who consoles them! - God adds, rani akarah – sing out barren nation, because I am your Husband and will not abandon My wife. - God then finishes with, kumi ori! – Rise and shine, since all the nations who have persecuted you will flock to honor you. - Finally, at long last, the Jewish people respond to God’s efforts, sos asis ba-Hashem – I will greatly rejoice in God, for I am His bride. Note that the Jewish people in this analysis are very picky about the type of comfort that they will accept. They reject Isaiah as an intermediary to give comfort and heal the relationship; they will hear only from God Himself (haftarah #2). They also reject certain formulations of the relationship between God and His people. They do not respond when God speaks only in terms of a nation and its God (#1 and #4). Rather, it’s only when God depicts Himself as a Hatan and the Jewish people as a kallah (#5) that they rejoice in the image of a joyful and powerful nation crowned as the bride of God (#6 and #7). Examining the words of the haftarot more closely, one can see this process unfold. In the haftarah of rani akarah, God tells His people, “For God has called you, ‘as a wife forsaken and grieved in spirit. And a wife of youth, can she be rejected?’ says your God. For a small moment have I forsaken you, but with great compassion will I gather you.” (Isaiah 54:6-7). Then, in the haftarah of kumi ori, God paints a glorious picture of Israel as the honored nation to which all others will bow down to. Finally, In sos asis, the final of the seven haftarot of consolation, Israel responds directly to these combined ideas: I will greatly rejoice in the Lord, my soul shall be joyful in my God. For He has clothed me with the garments of salvation, He has covered me with the robe of victory. As a bridegroom puts on a priestly crown, and as a bride adorns herself with her jewels. For as the earth brings forth her growth, and as the garden causes the things that are sown in it to spring forth, so the Lord God will cause victory and glory to spring forth before all the nations (Isaiah 61:10-11). Further on in sos asis, God replies to Israel’s joy in similar terms: “For as a young man marries a young woman, so shall your sons marry you. 
And as the bridegroom rejoices over the bride, so shall your God rejoice over you." (Isaiah 62:5). Read this way, these haftarot identify the terms on which Israel finds comfort in God following the apparent abandonment of Tishah Be-Av. They need to hear that the relationship is intimate and loving, like that of a husband and wife. They need the assurance that their reconciliation will not be muted, but will be as joyful as a wedding day. Abudarham's model provides a post-Temple perspective on the process of abandonment and reconciliation that begins with the 17th of Tammuz and concludes with Divine forgiveness on Yom Kippur. In the Torah, following the sin of the Golden Calf, Moses acts as the intermediary between the people and God, putting forward arguments for reconciliation on behalf of the Jewish people. God is difficult to persuade, but in the end forgives His people. However, our own process of reconciliation following Tishah Be-Av is quite different. As explained in Abudarham's analysis of the haftarot, we now reject any intermediary. We insist on intimacy; we must have a direct conversation with God. Moreover, this time around, it is God putting forward arguments for reconciliation. We are the ones who need persuading. What is the difference between these two stories of sin and reconciliation? Why has the relationship shifted? Perhaps in a world absent obvious connection with God, without miracles like the Red Sea and without the constant of the Divine Presence—the shekhinah—in our midst, we need an explicit promise to remind us that God is still close. In our world darkened by hester panim, by a shadowing of God's presence, it feels like the burden of proof has shifted. It is God who must prove that He is still with us, and that He is present with the closeness of a bridegroom to His bride. Leading up to Yom Kippur, we sometimes feel the weight of our relationship with God resting entirely on our shoulders. It is our job to repent, to draw close to God. That is true, but these haftarot remind us that the onus of the relationship is not entirely on us. For the last few weeks God has been actively comforting us, assuring us of His forgiveness and promising us redemption. God is closer to us now, more intimate than ever. The model of our relationship is that of husband and wife, a more equal partnership, where both sides bear responsibility for the relationship. Rashi (Deuteronomy 30:3) explains that the shekhinah dwells with us in exile, and that it is together, hand in Hand, that we will achieve redemption. Recall that the two haftarot of teshuvah (Dirshu and Shuvah) are appended to the end of the seven haftarot of consolation. Following a break in our relationship with God, raising the existential questions of exile, punishment, and abandonment, we take seven weeks to heal the relationship. It is only after we understand that God still cares for us and we have forgiven Him for how He has punished us that we can return to Him in teshuvah. Yom Kippur is the culmination of a process of mutual forgiveness. We cannot beg God to forgive us when we still need to forgive God. Once we have understood that God has done His part in comforting us and promising us forgiveness, then we will realize that we must do our part too. We enter Yom Kippur with the knowledge that God has brought Himself close to us. It is now up to us to bring ourselves close to Him and receive His forgiveness. The haftarah of Dirshu is read on all fast days, not just Tzom Gedalyah.
However, the Pesikta Rabbati includes it in the cycle of haftarot connected to Tishah Be-Av. The Pesikta Rabbati is a late midrashic collection on the parashah and haftarot. The part of the Pesikta addressing the thematic haftarot is quoted in Tur Orah Hayyim 428:8. Sefer Abudarham, Seder ha-Parshiot v-haHaftarot.
<urn:uuid:93885f7b-867a-40da-829d-3af785b0f1bd>
CC-MAIN-2022-33
https://thelehrhaus.com/scholarship/when-god-appeases-man-yom-kippur-in-a-time-of-exile/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572870.85/warc/CC-MAIN-20220817062258-20220817092258-00404.warc.gz
en
0.949849
2,730
3.3125
3
Developing new treatments for people suffering from sleeping sickness (HAT) or visceral leishmaniasis (VL) is a long and difficult process, especially when clinical trials are carried out in rural areas in Africa where health infrastructure is limited or non-existent. However, it is of critical importance to conduct clinical trials in such areas so as to ensure that the treatments being tested will actually meet the real-life needs of the patients. Areas with a high incidence of sleeping sickness and VL are mostly in remote settings where patients have limited access to health care and skilled staff, and in some cases, are exposed to a general political instability. In very practical terms, patients may walk for 5 days to reach a hospital in order to receive treatment. This means that by the time patients reach help they may be very sick. Patients often suffer from malnourishment and co-infections such as malaria, pneumonia, or HIV. Patients may or may not receive a correct diagnosis due to lack of adequate, field-adapted tools and trained health professionals. Thus, sleeping sickness might be misidentified as VL, or VL might be misdiagnosed as malaria. In respect to clinical trials, several major challenges can be identified as study sponsors endeavour to abide by international Good Clinical Practice (GCP) standards: - In some countries the approval process to start a clinical trial may take over a year (from first submission to approval for recruitment of the first patient) as administrative and regulatory processes must be properly followed in order to receive approval to conduct clinical trials. - Conducting clinical trials depends on the participation and consent of several hundreds of HAT or VL patients who are located in remote areas with poor resources. - In order for patients to participate in a clinical trial, the diagnosis must be confirmed through invasive procedures such as lumbar punctures or splenic aspirates necessary to detect the parasite as more readily field-adapted diagnostics are not yet available. - Ensuring patient compliance and follow-up can also be difficult. For both diseases, treatment is long and must be given at a health centre; the inconvenience of trial periods lasting up to 30 days can have a tangible impact on the family. As in the case of VL, half the patients diagnosed with this disease are children. A family member will usually stay in the vicinity of the treatment centre for the duration, meaning someone has to care and provide for the siblings back at home. The difficulty will be even further compounded due to the possible loss of wages. In the case of HAT, follow-up visits are required up to 18 months post-treatment to ensure that the treatment has effectively killed the parasite. If patients forget their appointments or are unable to come, health care workers are obliged to seek these people out, often necessitating a trip to the patient's village far away from the treatment centre. - There is often limited experience in ethical review of clinical trials in some of the endemic countries where trials are being conducted. - Armed conflict and political instability are also important elements to take into consideration. Apart from the obvious impact on the local populations, such situations add further complexities and could jeopardise the outcome of the clinical trial. 
Take for instance the recent political instability and subsequent violence in Kenya – this resulted in an approximately 3-month delay for the 2 clinical trials DNDi was conducting in Western Kenya.
The Partner advantage
(Photo: Kwamouth Ward in the DRC.)
To address these challenges, DNDi has facilitated the formation of the LEAP and HAT Platforms, which bring together scientists, academic institutions and representatives from Ministries of Health of disease-endemic countries. Acting together along with MSF operational efforts is the best way to develop clinical trial capacity and to ensure centres of excellence for both the trials and the treatment of patients. Some key activities of these platforms address the issues described above: developing appropriate clinical trial methodologies, overcoming system challenges, strengthening clinical trial capacity both in terms of infrastructure and human resources, sharing information, and facilitating communication among all involved. A good illustration of these collaborations is the LEAP-conducted clinical trial investigating paromomycin, an aminoglycoside antibiotic identified in the 1960s as having antileishmanial activity and recently approved to treat VL in India. Paromomycin could represent an improved treatment at a lower cost. With the aim of registering paromomycin as a new treatment in the East African region and having it adopted in national treatment guidelines, the study has so far treated approximately 900 patients as part of the clinical trial and a similar number of patients outside the clinical trial at 5 sites in East Africa. When patients do not qualify for enrolment in a clinical trial, they are provided treatment by the trial teams according to guidelines.
(Photo: Arba Minch, patients waiting to be attended – May 2006, Ethiopia.)
A notable step forward in multi-centred collaborative partnerships is the nifurtimox-eflornithine co-administration trial (NECT), a multi-centre clinical study to test a simplified combination of nifurtimox and eflornithine for stage 2 HAT (see Newsletter No. 14). The study objective is to demonstrate that the co-administration is as safe and efficacious as standard eflornithine monotherapy, but easier to use, as the number of slow, intravenous infusions of eflornithine is reduced from 54 to 16. DNDi's activities have mainly focused on the implementation of this study at three sites in DRC and have included sustainable capacity strengthening for clinical research in terms of rehabilitation and equipment. All of the 280 patients have been enrolled, the 18-month follow-up is soon to be concluded, and the full efficacy and safety analysis will be completed by the end of this year. The LEAP and HAT Platforms are a forum comprising partners with different but complementary backgrounds: physicians, parasitologists, ethicists, and pharmacologists. These partners share one common objective: to come together and operate in a coherent manner to ultimately bring these clinical trials to term in a timely and professional way. They should all agree on a common protocol and the pertinent strategies to follow in conducting clinical trials. DNDi and its partners in the LEAP and HAT Platforms aim to perform high-quality clinical trials whilst strictly observing ethical principles. The ethics committee's primary role is to protect the rights of patients or volunteers participating in a clinical trial.
They do this by carefully assessing the trial and by providing patients with unbiased and necessary information in an easily digested manner. Along with other players such as TDR and the Swiss Tropical Institute, the platforms organise training on ethics in clinical trials for ethics committee members and investigators alike. If a treatment is found to be safe and efficacious in clinical trial(s), the new treatment must then be registered in order for it to be incorporated into national treatment guidelines and ultimately become available to patients. The regional research platforms aim to facilitate the registration process by engaging local and national health authorities early on in the process, so as to ensure that needs-adapted treatments are made available to patients in as timely a manner as possible.
Building an environment conducive to efficient clinical trials
Because infrastructure is very limited in many endemic areas, DNDi is in the process of upgrading, or has already upgraded, ward and laboratory facilities at all of the clinical trial sites to enable them to meet GCP standards. For example, in Gondar, Ethiopia, a leishmaniasis Research and Treatment Centre has recently been inaugurated (see box; photo). Previously in Gondar there was insufficient room in the hospital; the VL ward functioned under a tent. The communication challenges presented by the remote locations of many of the sites are evolving. With the increased use of mobile phones and internet access, communicating with the clinical trial sites is becoming easier. For example, site investigators from Sudan no longer have to drive to the nearest town to make a phone call or access the internet. However, some HAT sites still suffer from a lack of electricity and are not covered by mobile phone networks. In addition to physical infrastructure, trained staff are essential in order to carry out clinical trials according to GCP. Training is not just important at the start of a trial, but is a continuous process to update existing staff and to train new members. Frequent training topics include GCP and trial-specific procedures. From external consultants to experienced trial site staff, the sharing of best-practice principles can also help motivate teams working in difficult field conditions. It is vital to ensure that all sites have standard operating procedures (SOPs), to endeavour to standardise them across the different sites (local regulations permitting), and to ensure that everyone understands the need to follow SOPs and meet GCP standards through a comprehensive monitoring plan. Independent clinical monitors visit trial sites on a regular basis to ensure that they are complying with good clinical and laboratory practices and SOPs. They also address adverse reaction detection, reporting mechanisms, and ethical issues during the planning and conduct of the trial. In addition, an independent external GCP auditor ensures that all aspects of regulatory and ethical documentation in the clinical trial comply with GCP standards. This monitoring and auditing further educate staff and reinforce the importance of carrying out clinical trials to international standards, as a positive outcome for the clinical trial will hopefully result in a new treatment.
Taking steps together in the right direction
At a symposium held during the RSTMH Centenary Meeting in London in September 2007, Prof. Ahmed El-Hassan, a Sudanese leishmaniasis expert, raised the issue of just what a change in infrastructure can accomplish.
"The project has brought African scientists in the region together to tackle a disease that knows no political boundaries: an example par excellence of South-South collaboration, about which we talk a lot and do very little... In leaps and bounds, DNDi has moved forward with activities in the region. When we started, we held our clinics under the shade of trees in protection from a very hot sun. Now, we have modern facilities that allow us to serve the unprivileged and marginalized communities with state-of-the-art medicines." This is precisely why DNDi-sponsored LEAP and HAT Platforms will carry on with these important endeavours. Capacity Building in Gondar, Ethiopia Currently in Ethiopia, 40 VL / kala-azar foci have been reported on. As Dr. Sisay Yifru, Gondar's Principle Site Investigator, puts it: "the estimate of the annual burden in the country ranges from 4,000 to 5,000 cases and is distributed throughout the Metema and Humera lowlands in the West of the country as well as the Segen Valley and its surrounding area in Konso, in the South West. Furthermore, new foci like Libokemekem in South Gondar are also becoming more and more exposed to kala-azar. DNDi, together with its regional partners (Gondar and Addis Ababa University), is conducting clinical trials in Ethiopia. One of the trial sites initially identified was based in Gondar. However, the infrastructure to support quality clinical studies was missing to begin with. So DNDi also provided the necessary support to the Ministry of Health in strengthening the capacity of local health infrastructure by establishing a well functioning leishmaniasis Research and Treatment Centre at the University of Gondar. The team inaugurated the new treatment centre on 11 May 2008 and it is now fully operational with 24 beds. It will provide treatment to up to 300 patients per year, be they adults or children suffering from VL. Funding for the construction came from Medicor Foundation and the Canton of Geneva with on-going funding from DFID; MSF; Canton de Genève; Region of Tuscany, Italy; MAEE France; Medicor Foundation; and Kings College for the Paromomycin trial and the LEAP Platform.
<urn:uuid:a60fa1df-4ad2-4c16-a6f8-b229d3cde0e4>
CC-MAIN-2022-33
https://dndi.org/newsletters/n16/4_1.php
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573172.64/warc/CC-MAIN-20220818063910-20220818093910-00005.warc.gz
en
0.95386
2,484
3.34375
3
The Sept. 10, 2019 PBS article accompanying the Frontline documentary “Deadly Water” was topped by a provocative headline: “The EPA Says Flint’s Water is Safe — Scientists Aren’t So Sure.” The PBS story relied on a study of adverse health outcomes for people given point-of-use (POU) water filters during the Flint Federal Emergency. We were astonished. Several of us worked closely with residents to first expose the problems with lead and Legionella that defined the Flint Water Crisis. We were supportive of later humanitarian efforts to provide Flint residents with the free POU lead filters, since they effectively remove lead from water used for drinking and cooking. These off-the-shelf water filters are routinely used in about a third of U.S. homes, so we were mystified as to how they could have wrought such devastation when deployed in Flint. We turned to the scientific study that PBS had cited, published in the International Journal of Infectious Diseases (IJID) in early 2019. The study, by Gina Maki and colleagues at Henry Ford in Detroit, claimed that the human test subjects in Flint had a 70% rate of severe pneumonia, a 10% rate of sepsis, and a 10% rate of folliculitis, and that half died from “bacterial infections” – implied to be Legionnaires’ Disease – as a result of using the POU filters. Such results – if they were accurate – would indeed be shocking. The PBS Frontline article and the associated IJID study called into question the competence of the government agencies who supplied the filters. We critically reviewed the article, which consisted of just 350 words and one table. There was little explanation of methods, but we did our best to dissect the logic supporting the conclusions. The timeline was especially perplexing. Specifically, the 2014-2015 Flint Legionnaires’ Disease outbreak that killed at least a dozen people from pneumonia ended around September 2015. But the high lead in water was not widely recognized until early September 2015, and the POU filters were not widely distributed until later. How was it possible that the POU filters distributed to Flint residents after September 2015 could be responsible for a 2-year Legionella outbreak that had already ended? The Frontline article also noted that PBS reporters were responsible for collecting the water samples from residents that were used in the Maki et al. study. This also struck us as highly unusual. We decided to seek assurance that the POU filters were installed in homes of Flint residents before the infections described in the paper occurred. We also wondered how a human subjects study with PBS reporters collecting water samples, involving a total of 10 homes using filters in Flint and one control home using a filter in Detroit, could be approved by an Institutional Review Board (IRB). Such a review should have required assurance that the human subjects study design was sound and supported the generalizable conclusions. We corresponded with IJID editors, determined a mutually agreeable course of action, and decided to write a letter to the IRB at the Henry Ford Medical Center (Henry Ford IRB) to inquire about specifics of the study and the approval process. To our surprise, the day after we sent our inquiry to the Henry Ford IRB, two of us (Edwards and Pruden) were accused of human subjects misconduct – including a failure to obtain IRB approval for our own early work in Flint that exposed the problems with lead and Legionella.
It turned out that Henry Ford’s Marcus Zervos, the senior author of the IJID study, had somehow learned about our query to the Henry Ford IRB. He immediately wrote a letter alleging scientific misconduct to the Department Chair of Civil and Environmental Engineering at Virginia Tech. Additional allegations about our misconduct were then made to the Virginia Tech IRB. Edwards and Pruden only learned about the misconduct allegations because Zervos made accusations to other individuals at VT not involved with the VT-IRB. The VT-IRB maintained strict confidentiality about the concerns raised, as required by IRB protocol. After a thorough investigation conducted over the next 5 weeks into the allegation made by Zervos against Edwards and Pruden, it was determined that they had obtained appropriate human subjects approval for the research. There was no evidence of scientific misconduct. It turned out that was the easy part. Getting answers to simple questions about the IJID study has taken two and a half years – and counting. Following our questions about the involvement of PBS Frontline reporters in collecting water samples for the IJID study, the PBS article was quietly corrected. The original text read: “In 2018, a team including Zervos tested water filters from 10 Flint residents’ homes that they suspected were infected, using samples collected by FRONTLINE.” The text “using samples collected by FRONTLINE” was suddenly deleted, and the only indication of this change now appears discreetly at the end of the article: [Correction: This story has been updated to accurately describe the 2018 study on water filters.] In response to our questions submitted to the Henry Ford IRB, IJID editor Eskild Petersen wrote us that “the corresponding author of the conference abstract, Marcus Zervos, has promised me to discuss your questions directly with you.” That promise was never fulfilled. None of the technical questions we submitted in writing were ever answered. The Henry Ford IRB only replied with a simple statement that all necessary human subjects approvals for the study were obtained. They did not provide us with any documentation. After our questions were ignored for a month, we asked Petersen to intervene. Petersen repeated that “the authors of the abstract in IJID you refer to promised to be in contact with you,” and added: “If that does not happen I have no possibility to enforce that. The COPE [Committee on Publication Ethics] guidelines do not empower an editor to obtain the information you ask for from an author.” Petersen did offer us an opportunity to publish a letter to the editor expressing our concerns. But we did not see the point in writing such a letter until we had answers about the timing of the filter installation that allegedly resulted in the deaths of five Flint residents. We discovered that at least two other communications had been sent to the IJID authors posing questions similar to ours about the research. Hernan Gomez and colleagues wrote a letter to the editor of IJID noting “an astounding 50% mortality rate in their study sample.” They feared there had been a “selection bias” in selecting homes for the study and directly questioned the conclusion “that the use of the filters caused the deaths.” Gomez and his co-authors also expressed concerns about the possibility that the IJID article conclusion would reduce “the use of POU filters, thereby unnecessarily increasing the risks of lead exposure for the residents of Flint.” The published response from the authors was evasive.
They wrote that “only patients with suspected infections from waterborne pathogens were invited to participate,” which seemingly confirms the selection bias. But they did not provide assurance that the allegedly deadly filters were installed before the infections occurred. Instead, the authors cryptically explained that they did not “draw conclusions about deaths; rather, this information only characterized the status of participants.” Well, in terms of the “status” of the participants, we were never confused about the fact that they were dead. But the relevant scientific question was whether their deaths resulted from using the filters, as stated in the IJID article. Gomez and colleagues also expressed alarm “that the ethical standards for proper scientific inquiry have not been met by the [IJID article] authors.” To which Maki and colleagues claimed that it would have been “irresponsible to NOT share microbial results” about the allegedly dangerous filters with Frontline and others. Another researcher engaged in Flint, pediatrician Mona Hanna-Attisha, also e-mailed Zervos about the paper. “This paper is scary,” she wrote. “5/10 people died. Is there more to it?... How was this paper published – a sample size of 10 with such limited (and scary) info? I worry about the unintended consequences of this paper.” Zervos responded that “the 10 homes in Flint were from people that consulted with me about their illness,” so it is not “representative of Flint.” Zervos said he had “hopes of doing an evaluation of this that is representative of Flint” and that could provide a “more detailed look at reasons for pneumonia deaths.” By that point, the authors of the IJID paper had had at least three opportunities to clarify the timing of the filter installation relative to the deadly infections, or to disavow their scientific conclusion that the filters caused the deaths of Flint residents. They did not do so. The journal editor claimed there was nothing further that could be done. So we brought the case to the Committee on Publication Ethics (COPE). COPE eventually determined that it was indeed “the editor’s duty to follow up on concerns about published content… [including the] time at which the infections took place relative to the water filter usage, cause of death of the patients…” The IJID editor was then directed by COPE to resolve our concerns or face sanctions, which could include being kicked out of COPE. The IJID editor eventually responded to COPE, saying they repeatedly “advised Dr. Edwards to talk to Dr. Zervos and his institution.” He did not mention that we had previously submitted written questions about the timing of the deaths to the Henry Ford IRB and received no response. IJID further stated that they had adequately responded by publishing the letter from Gomez and colleagues, and that beyond that “we refuse to be used in a dispute that we do not understand.” You read that right: the IJID editors claimed not to understand that if the POU filters were installed after the infections occurred, then the published IJID conclusion was invalid. At that point, the IJID response was reviewed by three COPE trustee board members “who determined the case did not merit a formal sanction.” They did recommend “feedback to the journal on the importance of alerting readers about concerns relating to the validity of published conclusions.” The case was closed. We asked to see the feedback sent from COPE to the IJID, but we were told that this was private.
While it seems that we may never know if the infections (and related deaths) occurred before or after the filters were installed in the homes of the test subjects, this case is not over. The PBS Frontline story and the Flint filter fears are a central issue in upcoming criminal trials in which State of Michigan employees are charged with willful neglect and other felonies. The troubles all started when many of the IJID article authors formed a large research team called the Flint Area Community Health and Environmental Partnership (FACHEP) that received millions of dollars in research funding from the State of Michigan in 2016. On the basis of dubious sampling, members of FACHEP started rumors that the POU filters were causing a shigella outbreak in Flint. The researchers later found out that their rumors were without basis, and the U.S. Centers for Disease Control and Prevention determined that the Flint shigella outbreak was likely spreading by conventional person-to-person contact, and not from using water filters. But many Flint residents still believe the FACHEP rumors that “we have shigella because we washed our hands” and that the POU filters grew dangerous bacteria. In late 2016, the team then tried to get a peer-reviewed paper published warning about the alleged dangers of the POU filters in Flint. One of the authors of this blog, Dr. Susan Masten, was a senior member of FACHEP and co-author of that paper, and she later contacted her administration with ethical concerns about possible contractual violations and co-authorship issues. This 2016 paper was rejected following peer review. The fear-mongering that the shigella was coming from potable water, failures to promptly address human subjects issues, and other concerns caused a lot of friction between the FACHEP team and the State of Michigan, which was funding their $3.4 million grant. The filter friction eventually led to the allegations of “obstruction of justice” by the Michigan Department of Health and Human Services’ Eden Wells and Nicholas Lyon, who were overseeing the FACHEP grant. An “extortion” felony charge against Rich Baird, one of Governor Rick Snyder’s advisors, was added for “threat[ening] to cause harm to the reputation and/or employment of [the FACHEP] leader.” After the felony charges were filed, we began to publicly document numerous scientific concerns about FACHEP’s Flint filter research, up to the publication of the PBS Frontline article in late 2019, through a series of blogs. As part of the standard COPE investigation agreement, we had to agree to stop blogging on the issue while it was under review. The felony obstruction of justice charges against Wells and Lyon were later dropped, but the “extortion” charge for “threatening the reputation” of one of the IJID article authors still stands. The punishment for this offense can range up to 20 years imprisonment and a fine of $20,000. All of this raises the possibility that more might be learned about the PBS Frontline and IJID filter death story before the Flint criminal trials finally come to a close. To be clear, we want justice for the wrongdoing that brought on the water crisis in Flint as much as anyone else. However, confusion about the science may only result in more injustice, and perpetuate the sort of unethical behavior that created the Flint Water Crisis in the first place. Marc Edwards and Amy Pruden are University Distinguished Professors at Virginia Tech. Sid Roy is a Research Scientist at Virginia Tech.
Kasey Faust is an Assistant Professor at the University of Texas at Austin. Susan Masten is a Professor at Michigan State University.
<urn:uuid:6fbe4ee0-35ca-4dea-bae2-cfc35cc08be6>
CC-MAIN-2022-33
https://familywnews.com/health-news/accused-of-scientific-misconduct-and-that-was-just-the-beginning/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00205.warc.gz
en
0.975839
2,970
2.8125
3
province of New York had been under English rule one hundred and twelve years, and many of these years had been filled with contentions between the royal government and the assemblies chosen by the people. The people claimed, and finally gained, the right to have the sole power of the appropriation of money, and consequently of taxation, without dictation or amendment on the part of the royal governor and his council. If they did not like a royal governor or judge they would not pay his salary. Practically, the people of the colony of New York had nearly as free institutions in 1776 as they have to-day. They were thoroughly alarmed by the declaration of the English Parliament that the king with its consent “had the right to bind the colonies in all cases whatsoever." It was on the 9th day of July, 1776, that the Declaration of Independence was read and ratified by the “Provincial Congress of the colony of New York.” This was not the assembly of the colony, but a sort of rebel congress convened at the request of an executive council appointed by the people. This council was assembled “to deliberate upon, and from time to time to direct, such measures as may be expedient for our common safety;” it was in fact the government of the people in displacement of the royal government. On the 10th of July this Congress changed its title to the “Convention of the representatives of the State of New York.” The people in New York were divided into parties. There were parties of peace, of action, and of union, but the parties of action and union became one, with large accessions from the party of peace. This state convention, it is interesting to notice, moved about considerably, the delegates probably consulting their personal safety. At one time we read of them at White Plains, then at Harlem, at Fishkill, and finally at Kingston, where on the 20th day of April, 1777, the first Constitution of the State of New York was adopted. John Jay, afterwards Chief Justice of the United States, was the principal draftsman of the instrument, and it is not too much to say that it was a good piece of work. We find a curious record of the convention at Fishkill. It met in the Episcopal church, which, says the record, “being foul with the dung of doves and fowls, without any benches, seats, or conveniences of any kind, the convention adjourned to the Dutch church.” The palatial apartments of the representatives of the people in the capital at Albany contrast strikingly with this hencoop at Fishkill, and the contrast illustrates the growth of the state. The states of Pennsylvania, Delaware, Maryland, and North Carolina adopted their constitutions in 1776, Georgia in 1777, and Massachusetts in 1780. These constitutions were very much alike. They were copied largely from their colonial charters, except that election by the people was substituted for appointment by the king or his governor. The executive, legislative, and judicial departments were continued. These departments existed in Great Britain, and in the several colonies, and there was no reason why they should be less serviceable under popular than under monarchical governments. Of course, there was some modification which experience had suggested. There was usually a full bill of rights, founded in great part upon Magna Charta, and the Bill of Rights of English subjects as declared upon the accession of William and Mary in 1688, with additions suggested by the Declaration of Independence.
The colonists had in vain contended that an act of Parliament against Magna Charta was void, and they therefore were explicit in defining the rights of the people which their own governments must not invade. Valuable as these constitutions were, they were quickly and easily written. They were adaptations, not inventions. It is a mistake to suppose that our fathers took up arms against actual oppression. It was oppression threatened and feared, rather than executed and felt, which they rose to resist. They met it at the threshold and strangled it there. An examination of the array of alleged “facts submitted to a candid world,” in the imposing rhetoric of the Declaration of Independence, will disclose the truth to be, that it is the threatened assumption of power by the king that forms the chief burden of the formidable indictment against him. Our fathers were striving to retain their liberties, not to resume them. Instead of throwing off the yoke of King George, they refused to put it on.
A NATIONAL GOVERNMENT. — THE ARTICLES OF CONFEDERATION. — THE CONSTITUTIONAL CONVENTION.
The Convention.
We have seen that it was comparatively easy for the colonies to change their colonial into state governments. But there was to be wrought out under the necessity and pressure of the circumstances of their war with the mother country, and the burdens and duties which the war would entail, a common government for the common defence and the general good of all the states. This was the new problem which the American people were destined to solve. The states themselves must be protected against the common enemy, and possibly against each other. It is this elaboration of the general government which resulted in 1787 in framing, and in 1788 in adopting, the Constitution of the United States, that forms the most interesting and instructive portion of our constitutional history. It took the twelve years from 1776 to 1788 to bring it all about. The first step was the meeting of the Continental Congress. Practically, this accomplished the union of the colonies for the purpose of carrying on the war. The second step was the Declaration of Independence. This affirmed the union of the colonies in their renunciation of allegiance to Great Britain. The third step was in the efforts of Congress to provide efficient measures, in which all the states should take part, to prosecute the war, and resulted in the Articles of Confederation. The fourth step was the adoption of the Constitution. The Articles of Confederation were of themselves the first written Constitution of the United States. Their importance will justify our attention to their history and character.
ARTICLES OF CONFEDERATION.
The necessity of an organized union of the colonies into one common power, adequate to command the resources of the whole in the conflict with Great Britain, was obvious from the first. But it was not obvious that the creation of one state out of all the people, and commanding them all, of its own right and power, was the best method. It was plain enough, however, to a few. Thomas Paine, in “Common Sense,” in January, 1776, said: “Let a continental conference be held to frame a continental charter.” Many wise friends of the cause repeated, and from time to time renewed, the suggestion. But a continental charter or constitution for one continental state or nation was to await the teachings of experience and the pressure of calamities.
An association or confederation of the states, in which each state should pledge itself to comply with the request of the committee or congress of the whole, was thought to be either a sufficient or the only practicable expedient. In June, 1776, a committee was appointed by the Continental Congress to prepare and digest the form of confederacy to be entered into between the colonies. This was before the Declaration of Independence was adopted. The committee in July did report a plan, and Congress debated, and considered, and waited, until a year from the then next November, before it actually agreed upon the plan, in the form of Articles of Confederation, to be submitted to the several states for adoption. The method of adoption proposed was that each state should instruct its delegates in Congress to subscribe the same in behalf of the state. Congress sent out a circular letter to each state. That letter probably tells the truth about the difficulties in the way, as clearly as they can be stated. It recites that “To form a permanent union, accommodated to the opinions and wishes of the delegates of so many states, differing in habits, produce, commerce, and internal police, was found to be a work which nothing but time and reflection, conspiring with a disposition to conciliate, could mature and accomplish. Hardly is it to be expected that any plan, in the variety of provisions essential to our union, should exactly correspond with the maxims and political views of every particular state. Let it be remarked that after the most careful inquiry, and the fullest information, this is proposed as the best which could be adapted to the circumstances of all, and as that alone which affords any tolerable prospect of general ratification. Permit us then earnestly to recommend these articles to the immediate and dispassionate attention of the legislatures of the respective states. . . . Let them be examined with a liberality becoming brethren and fellow-citizens, surrounded by the same imminent dangers, contending for the same illustrious prize, and deeply interested in being forever bound and connected together, by ties the most intimate and indissoluble. And finally let them be adjusted with the temper and magnanimity of wise and patriotic legislators, who, while they are concerned for the prosperity of their own immediate circle, are capable of rising superior to local attachments, when they are incompatible with the safety, happiness, and glory of the general confederacy." When the Articles of Confederation were submitted for adoption, many objections were stated by the different states, and many amendments proposed. “It is observable,” says Mr. Madison in the 38th number of “The Federalist,” “that among the numerous objections and amendments suggested by the several states, not one is found which alludes to the great and radical error which on actual trial has discovered itself.” That error was, the confederacy did not itself execute its resolves, but requested the states to execute them. But Congress did not deem it wise to accept any of the modifications suggested. The states were intensely jealous of any central power or headship over themselves, and, had not the pressure and danger of the war been upon them, they would not have adopted these articles. All the states, except Delaware and Maryland, ratified them in 1778; Delaware in 1779, and Maryland not until March, 1781.
One of the causes of delay was a controversy between the states in regard to the public lands which the crown had held, and the states now claimed. The states which had the least land, or whose boundary claims were doubtful, felt that the whole ought to be devoted to the United States to provide a fund to pay the expense of the war. Five of the seven years of the war had passed before this Constitution was adopted. What authority had Congress in the mean time? None whatever, except what was implied from the consent of the states or of the people. The Congress was in fact the only central government that existed, and its
<urn:uuid:9852a514-c589-466e-8af2-7eb663990323>
CC-MAIN-2022-33
https://books.google.com.br/books?id=aW5DAAAAIAAJ&pg=PA46&vq=%22regarded+as+beings+of+an+inferior+order,+and+altogether+unfit+to+associate+with+the+white+race,+either+in+social+or%22&dq=editions:NYPL33433057100806&lr=&hl=de&output=html_text
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00405.warc.gz
en
0.981356
2,260
2.859375
3
In unixoid operating systems - these include GNU/Linux, FreeBSD and OpenBSD - a program called “init” is always started as the first process. This is specified in the operating system's kernel. The init program in turn starts other programs, such as system services, and provides a login prompt at the end of the boot process. To put it simply, starting a Unix-like system goes like this:
Bootloader (e.g. Grub) → Kernel (e.g. Linux kernel) → init → login prompt
A few years ago a change of the init system took place in most Linux distributions. The previously used init system “SysVinit” has its origins in the early days of Unix. With SysVinit, services are started by shell scripts, which can become very complex. Because these scripts are processed sequentially, a system could take a very long time to start. This seemed out of date, so for some time there were attempts to replace SysVinit with something more modern. Canonical tried it in Ubuntu with the “Upstart” system. Long before that, IT professor Daniel J. Bernstein (also known as “djb”) tried to improve the init system with his “daemontools”. In 2010, Lennart Poettering - an employee at Red Hat - wrote the software “systemd”. Systemd was intended not only to serve as an init system, but to provide a complete framework for the administration of Linux systems. Systemd not only starts services, but also provides sockets. In addition, systemd ships its own utilities, which are intended to replace the traditional Unix programs. For example, there are systemd-networkd, systemd-logind, systemd-journald (as a replacement for syslog), systemd-resolved (name resolution), systemd-timesyncd, etc. And the number keeps growing. In the last decade, systemd has become the default for most distributions. In Debian, it has been the default init system since version 8. Systemd is not only an init system; it performs a variety of tasks. If you consider the tasks that a modern desktop operating system has to perform, then it certainly makes sense to combine these tasks in an integrated system. In general, users are not interested in the individual internal services that run on the computer. They want the computer to work and be fast. This is perfectly legitimate and systemd is fine for this purpose. In IT, we often talk about “use cases”, and the “desktop” use case is just one of many. In the area of system administration, it often comes down to keeping the system (server, router, etc.) stable and secure. Two basic Unix principles to achieve this are:
- Keep it small and simple
- Do one thing and do it well
This also has the purpose of avoiding too much complexity, because complexity is the enemy of security. One must acknowledge that systemd pays only little attention to these principles. It consists of over 1.2 million lines of code (as of 2019) and has - unsurprisingly - already made headlines due to spectacular security vulnerabilities. Software which has such a fundamental significance in an operating system should therefore - if security is of importance - be “small and simple” in order to avoid security problems as far as possible. As already said, there have been some attempts to modernize the init system during the last 20 years, and some of them gained quite some popularity. The system “OpenRC” could be seen as an evolutionary development of SysVinit; it is mainly used by Alpine Linux and Gentoo. Based on the above-mentioned daemontools by djb, further systems were developed, such as s6 or runit.
These daemontools-inspired init systems are similar in structure and use, but have different levels of complexity. Runit is primarily designed for simplicity and has a small code base. This in itself is a good prerequisite for building a secure system. It consists of several small programs and by default knows 3 “stages”:
- Stage 1 (/etc/runit/1): one-time system initialization tasks at boot
- Stage 2 (/etc/runit/2): normal operation; it starts runsvdir, which supervises the services, and does not end until shutdown
- Stage 3 (/etc/runit/3): tasks that are run at shutdown
The various programs are runit (the actual init process), runit-init, runsvdir, runsv, sv, svlogd and chpst. In general an init system consists of components for system initialization, for starting and supervising services, and for shutdown. Runit is kept very minimal and has no full-blown service manager. For starting and stopping services sv is used.

ln -s /etc/runit/sv/service_name /run/runit/service
sv down service_name      # or: sv stop service_name
sv up service_name        # or: sv start service_name
sv restart service_name
sv status service_name

I assume here a minimal system installation of Debian 11, done with a “netinst” ISO image. How to perform such a minimal installation of Debian is not part of this article. The only important point is that everything should be deselected in the software selection. After logging in to the system as root, the runit packages are installed first:

apt install runit runit-init

Since this replaces the init system, a confirmation prompt is issued, where Yes, do as I say! must be entered. Then the system is rebooted (e.g. with reboot) and you log in again as root. Runit should already be running, but some cleanup is needed. First uninstall systemd:

apt --purge remove systemd

A login manager is required in most cases:

apt install libpam-elogind

Finally, APT preferences are used to ensure that systemd does not sneak in again through the back door (through any dependencies):

cat << EOF > /etc/apt/preferences.d/00systemd
Package: systemd
Pin: origin ""
Pin-Priority: -1
EOF

Now runit is running, but other than serving as init and starting and monitoring getty, it doesn't do much yet. Just like SysVinit it also starts services via scripts in /etc/init.d, but you could also have that with SysVinit. To take advantage of runit with supervision of services, these services need to be started “runit style”. Fortunately, this is very simple. Runit services, unlike SysVinit, usually only need a very short startup script. The services run in the foreground and must not fork into the background (i.e. no “daemonizing”). Things like “start-stop-daemon” are also no longer needed with runit. First, the service rsyslogd is “converted” to a runit service:

# create runit service directory
mkdir /etc/sv/rsyslogd
# create "run" file
cat << EOF > /etc/sv/rsyslogd/run
#!/bin/sh
exec /usr/sbin/rsyslogd -n
EOF
# make executable
chmod a+x /etc/sv/rsyslogd/run
# stop SysV rsyslogd
/etc/init.d/rsyslog stop
# disable SysV service
update-rc.d -f rsyslog remove
# enable runit service
ln -s /etc/sv/rsyslogd /etc/runit/runsvdir/default/

As you can see, creating a symlink from the service directory to the runsvdir default directory ensures that the service is set as “active” and also started immediately. Dbus needs, in addition to “run”, another file called “check”.

mkdir /etc/sv/dbus
cat << EOF > /etc/sv/dbus/check
#!/bin/sh
exec dbus-send --system / org.freedesktop.DBus.Peer.Ping >/dev/null 2>&1
EOF
chmod a+x /etc/sv/dbus/check
cat << EOF > /etc/sv/dbus/run
#!/bin/sh
dbus-uuidgen --ensure=/etc/machine-id
[ ! -d /run/dbus ] && install -m755 -g 81 -o 81 -d /run/dbus
exec dbus-daemon --system --nofork --nopidfile
EOF
chmod a+x /etc/sv/dbus/run
/etc/init.d/dbus stop
update-rc.d -f dbus remove
# An error message follows, don't let it irritate you:
# insserv: FATAL: service dbus has to be enabled to use service elogind
ln -s /etc/sv/dbus /etc/runit/runsvdir/default/

mkdir /etc/sv/elogind
cat << EOF > /etc/sv/elogind/run
#!/bin/sh
sv check dbus >/dev/null || exit 1
exec /usr/lib/elogind/elogind
EOF
chmod a+x /etc/sv/elogind/run
update-rc.d -f elogind remove
ln -s /etc/sv/elogind /etc/runit/runsvdir/default/

To “runit-fy” additional services should not be a problem. You can also check the Arch-based distro Artix, btw. This repo contains many examples of runit service scripts. Runit comes with svlogd, a logging daemon that supports automatic log rotation. For this purpose a directory log is created in the service directory. In this directory we create an executable script run, which starts svlogd. Here is an example:

mkdir -p /etc/sv/<service_name>/log
# the quoted 'EOF' prevents the shell from expanding $S while writing the file
cat << 'EOF' > /etc/sv/<service_name>/log/run
#!/bin/sh
S="service_name"
mkdir -p /var/log/runit/$S
chown _runit-log:adm /var/log/runit/$S
chmod 750 /var/log/runit/$S
exec chpst -u _runit-log svlogd -tt /var/log/runit/$S
EOF
chmod a+x /etc/sv/<service_name>/log/run

In order for svlogd to log, the service must output its messages to stdout. Some services need additional configuration for this. I have demonstrated how to replace systemd on Debian 11 with the init system runit. Since runit is a proven, very lightweight and secure system, it is also possible to configure Debian quite a bit more securely and reliably with it. This is true for desktops as well as (if not even more so) for servers. Supervision makes sure that services are monitored. I will not hide the fact that runit reaches its limits in more complex scenarios. For this, the similar system s6, which is also based on djb's daemontools, might be more suitable, but it has a much steeper learning curve.
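To round off the walkthrough, here is a hedged sketch of how one more common service, OpenSSH, might be converted following the same pattern as the rsyslogd example above. The binary path, the Debian init script name "ssh", and the sshd options are assumptions based on Debian defaults and are not taken from the original article; adjust them to your system.

mkdir /etc/sv/sshd
cat << EOF > /etc/sv/sshd/run
#!/bin/sh
# ssh-keygen -A generates any missing host keys; -D keeps sshd in the
# foreground for runsv, -e sends log output to stderr so svlogd can capture it
ssh-keygen -A
exec /usr/sbin/sshd -D -e
EOF
chmod a+x /etc/sv/sshd/run
# stop and disable the SysV service (Debian names it "ssh")
/etc/init.d/ssh stop
update-rc.d -f ssh remove
# enable and start the runit service, then check it
ln -s /etc/sv/sshd /etc/runit/runsvdir/default/
sv status sshd

A log subdirectory with a svlogd run script, as described above, can be added for sshd in the same way.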
<urn:uuid:20b80da4-176f-4f47-9d32-13da5383a531>
CC-MAIN-2022-33
https://simpletools.info/doku.php/osinstallation:debian11runit
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00201.warc.gz
en
0.899615
2,349
3.140625
3
The KWImF was conceived during Germany’s golden era of scientific development
The opening of the Kaiser Wilhelm Institute for Medical Research (KWImF) was the culmination of years of intensive planning and organizational efforts by Ludolf von Krehl. Krehl was one of the earliest promoters of the integration of discrete disciplines of the natural sciences under the rubric of medical research. Today, we may take such an approach for granted, but at the time the KWImF was founded, this was a remarkably novel idea. Perhaps the only comparable institute in the world was the Rockefeller Institute in New York. Certainly, there were none in Germany. Krehl deserves the lion’s share of credit for tirelessly pursuing this vision. Of course, his plans for the Institute were not formulated in an intellectual vacuum. Instead, they evolved during one of the most progressive and productive periods in the history of science – the first quarter of the 20th century. Major advances in physics, biology and chemistry in Germany created an aura of excitement that greatly shaped both Krehl’s plans and the approaches of the scientists who were to participate in the research of KWImF. During the latter part of the 19th century, the world center of science and technology had gradually shifted away from England and France to Germany. A primary factor in this transition was the unprecedented support by government and industry for the nation’s universities and technical institutes. As Germany’s academic reputation soared, many of the innovations its scientists generated were utilized in the young nation’s industrial and military development. And in turn, as Germany evolved into a world power, the practical advances in technology encouraged further government policy supporting scientific research.
Ludolf von Krehl. Photo: Courtesy of Max-Planck-Institut für Medizinische Forschung
Germany’s scientific rise was first marked by major achievements in the fields of inorganic and organic chemistry, which helped to usher in a new era for physiological chemistry. The influential school of Adolf von Baeyer, which included such luminaries as Paul Ehrlich, Emil Fischer, Richard Willstätter, Fritz Haber, Heinrich Wieland and Eduard Buchner, also took an early interest in the application of their knowledge of chemistry to biological materials. Their experimental methods, theories and results were essential forerunners to the discoveries that were later made in Heidelberg. Krehl and those who helped him shape the KWImF were closely connected to these scientists. During the first two decades of the 20th century, scientists began to forcefully formulate the chemical reactions that occur within cells. The mechanisms of oxidation, respiration and fermentation in cellular processes were particularly important research topics during this time. The structural mechanisms of chemicals involved in these processes and the role of enzymes in stimulating biochemical reactions, for example, were just beginning to be systematically explored. Critical methodologies that scientists now take for granted were developed. For example, in vitro studies – the chemical demonstration of biological processes outside of the living cell – were first used to analyze the processes of yeast fermentation and metabolic pathways. The first exploitation of radiation in biological experimentation followed during the next decade.
The application of the physical laws of thermodynamics and methods for the measurement of the transfer of biochemical energy in cells was also completely novel.
Ludolf von Krehl and his pioneering vision of multidisciplinary biological research
It was in this general context that medicine and the natural sciences began to interface for the first time. Physician-scientists Ludolf von Krehl, Gustav Embden and Carl Neuberg were early pioneers in this effort. Such men had a dual motivation: clearly they wished to apply as many basic scientific tools as possible in order to understand and potentially cure diseases, but they also believed that the study of pathological conditions might hold the key to understanding normal cellular functions. Ludolf von Krehl had come to Heidelberg in 1906 as the Director of the University’s Medical Clinic. Krehl had already published a landmark textbook on Pathological Physiology and actively encouraged the physicians in his hospital to keep abreast of the newest developments in the natural sciences. He also made it a practice to always hire a few research scientists at the hospital. Between 1909 and 1912, Otto Warburg, Julian Huxley and Otto Meyerhof worked in his clinical laboratory. Each of these men went on to stellar careers that profoundly influenced the development of modern biology. Despite highly interesting research results at his clinic, Krehl was aware of the limited professional opportunities that such a small laboratory offered and realized that high quality researchers could be expected to stay for only a very short time. After the departure of Warburg, Huxley and Meyerhof, Krehl began to make tentative plans for a full scale research institute that would be independent of his medical clinic. During the next decade, these plans evolved to include research departments in chemistry, physics, physiology, cell biology and pathology. Fortunately, Krehl was one of Germany’s most highly respected physicians. He was energetic, personable, and very wealthy. Finally, he was well connected to the elite of the German academic and scientific communities. Krehl played each of these cards astutely in his efforts to successfully promote his proposal for a new multidisciplinary institute.
Rallying support for Krehl’s proposal
Ludolf von Krehl approached his friend Adolf von Harnack, the President of the Kaiser Wilhelm Society (Kaiser Wilhelm Gesellschaft in German), about the possibility of the scientific organization supporting his proposed institute. With similar views about how science should be conducted, Harnack wholeheartedly supported the plan under the auspices of the KWG. Unfortunately, the First World War and the subsequent disarray in Germany delayed the plan until the mid-twenties, at which time details were finally approved. After 1926, Krehl and Harnack worked rapidly to shape the new institute, with Krehl reaching out to his many personal contacts in search of strong directors of the individual institutes who would enthusiastically participate in the collaborative environment he envisioned. In addition to Harnack, Max Planck, Richard Willstätter, Heinrich Wieland, Otto Warburg and Emil Fischer actively counseled Krehl. By 1927, Krehl was prepared to make offers to the key scientists for the planned institute. Krehl first called upon his former research assistant, Otto Meyerhof, to direct the KWImF’s Physiology Institute.
Meyerhof had gained an international reputation since leaving Krehl’s clinic for his creativity in combining physics and chemistry in ground-breaking studies of muscle metabolism. In 1922, he had won the Nobel Prize for Physiology. Unfortunately, due to politics and anti-Semitism, Meyerhof was unable to find a research position in Germany to match this stature. The KWG had come to his rescue in 1924, reuniting him with Otto Warburg in the latter’s KWI for Biology. But the position in Warburg’s KWI represented only a temporary solution for Meyerhof. Given his professional reputation and his personal familiarity with Krehl, Meyerhof was a welcome addition as KWImF Director of the Physiology Institute.
Photo: Courtesy of Walther Meyerhof, son of Otto Meyerhof
Otto Warburg strongly encouraged Krehl to offer the position in chemistry to a brilliant young organic chemist, Richard Kuhn, whose work on enzymes had recently begun to overlap with his own research. At the time, the choice was considered something of a gamble, because Kuhn was only 26 years old. On the other hand, he came highly recommended by Willstätter, still the most influential chemist in all of Germany, and Wieland. In fact, Kuhn already held a chair in General and Analytical Chemistry at the Federal Institute of Technology in Zurich and was rapidly gaining a reputation for his studies of the stereochemistry of enzymes and natural pigments. Kuhn signed a contract with the KWG in May of 1928. A decade later, he proved the wisdom of this trust when he was awarded the Nobel Prize for chemistry.
Photo: Archiv zur Geschichte der Max-Planck-Gesellschaft, Berlin-Dahlem
Karl W. Hausser was chosen to direct the Physics Institute. Hausser had first come to Heidelberg in 1913 as a graduate student of Philipp Lenard, an early Nobel Prize winner and the powerful head of the Physics Institute of the University of Heidelberg. After graduating, Hausser had gained broad experience in industrial research, working extensively with X-rays at Telefunken and then, after military service during the First World War, as a group leader in the Medical Physics Laboratory of Siemens and Halske in Berlin. It was there that Hausser first began to straddle the border between physics and medicine with research on the origin of erythema and the effects of sunlight upon skin pigmentation. His enthusiasm for biological topics and the close match to Kuhn’s own interest in the chemistry of natural pigments made him a perfect candidate for the new KWImF.
Karl W. Hausser. Photo: Courtesy of Max-Planck-Institut für Medizinische Forschung
Krehl planned to retire from his position at the University’s Medical Clinic as soon as the new Institute was built and preside over the overall KWImF operation during the start-up period (the task was to be rotated on an annual basis). He would also direct the Pathology Institute himself, which he envisioned as containing two sections: a small research group and a clinical ward of 20 beds which would provide a source of subject matter for his own research group and the other institutes. The early plans for a cell biology group were meanwhile dropped for lack of a suitable candidate. By the middle of 1928, the four appointed heads of the proposed institutes were meeting with KWG officials and each other, exchanging correspondence regarding plans for the library and possible collaborations, negotiating budgets and submitting lists of required equipment.
Among the most powerful connections that Ludolf von Krehl relied upon in establishing the KWImF was his relationship with Adolf von Harnack. Although he was a theological scholar by training, Harnack was one of Germany’s most influential figures for the promotion of the natural sciences. In 1910, Harnack had collaborated with Emil Fischer, Walther Nernst and August von Wassermann to establish a new organization dedicated to the service of German society through the advancement of scientific research. Their original proposal for this organization emphasized basic principles for the cooperation of science, industry and government and included construction of a series of elite research institutes. The proposal underscored the importance of complete research freedom for its members, structurally supported by the award of lifetime contracts to a few carefully chosen research directors. Harnack and his colleagues quickly garnered the personal backing of Kaiser Wilhelm II, who also provided a name for the Kaiser Wilhelm Society. They then successfully solicited the support of many of the nation’s leading industrialists and bankers. In fact, the KWG was formed with the then astronomical budget of 10 million marks. Harnack was elected President and the first of the KWG’s institutes opened in 1911 in Berlin. Known for his liberal views, Harnack made scientific ability his basic hiring principle. He recruited some of the country’s most important scientific figures for the KWG’s original institutes, including Richard Willstätter, Otto Hahn, Lise Meitner, Ernst Beckmann, Fritz Haber, August von Wassermann and Albert Einstein. By the 1920s, the KWG was recognized as one of the most prestigious scientific organizations in the world.
The foundation of the Kaiser Wilhelm Society. Photo: Courtesy of the Archiv zur Geschichte der Max-Planck-Gesellschaft, Berlin-Dahlem
Financing the KWImF during the chaos following WWI
Throughout this time, Krehl and Harnack were busy soliciting support to finance construction of a state-of-the-art research facility. Despite the KWG’s auspicious beginning, this was a major challenge in Germany during the 1920s. Stabilization of the German currency earlier in the decade had helped immensely by halting hyperinflation, but the country still suffered from massive debt for war reparations, as well as political and social instability. Moreover, the Germans were sensitive to the fact that France still occupied what was considered by many to be German territory west of the Rhine (this included Alsace, as well as areas that today belong to Germany). The southern state of Baden, which lies east of the Rhine and under which Heidelberg was administered, was particularly hard hit and remained underdeveloped in comparison to the northern sections of the country – especially Prussia. Despite the reputation of the University of Heidelberg, Baden had seen little scientific investment since the end of the war. Regional prejudices were also part of the problem. The state of Baden was considered something of a backwater by the northern sections of the country. Even the KWG had refrained from placing any of its many prestigious institutes south of Frankfurt and the Main River. Local and regional officials in Baden, who lobbied for decentralization of national resources, were keen on lending their political support to the development of a KWI in Heidelberg. Its establishment, they hoped, would seed further scientific and industrial development in the south.
The city of Heidelberg, therefore, donated a large plot of land for the project, while the state government of Baden agreed to give more than three-quarters of a million marks for construction of the building. This was a huge sum at the time and covered about half the necessary funds. Most of this money came from a special fund earmarked for development of “borderland” areas of the country. The remainder was provided by other national and regional organizations, including the forerunner of the German research ministry, the state of Prussia, ministries in Baden and the KWG itself. The KWImF rises on the banks of the Neckar River The construction site for the new KWImF was located at the western edge of Heidelberg on the north bank of the Neckar River. This area has since developed into the University of Heidelberg’s large new campus for science and medicine. At the time, however, it was an isolated semi-rural setting, surrounded on three sides by fruit orchards. In fact, at the time, the site was praised because the lack of traffic promised to keep down ground vibrations that might affect experimental conditions. The well known architect Hans Freese was chosen to design the KWImF buildings. Krehl encouraged the development of an aesthetic design and amenities for the KWImF staff, but Freese’s primary assignment was to maximize the scientific productivity of the laboratories. To this end, Kuhn, Meyerhof and Hausser worked closely with Freese to customize the plans. In fact, Kuhn became so involved in the design phase that he insisted on locating light switches in his wing of the institute. |The KWImF library in 1938 Photo: Courtesy of Irmgard Weiland Construction began in 1928 and, considering the still fragile economic situation in Germany, the building phase went smoothly. The four research institutes were arranged separately in wings that spread from a central core in the form of the letter H, with each of the laboratories fitted out with the latest scientific equipment. The KWImF library, common seminar rooms and administrative offices were located in a central cross bar of the H in order to facilitate interdepartmental interaction. The library, with its large, open three-story hall, was the crowning jewel of the building, and the four directors were given enough money to line it with a collection of some 70,000 books and journals. Freese also designed a guest house next to the institute to provide housing for guest scientists and a small number of the scientific staff who would provide assistance in case of an emergency at the laboratory. Finally, the plans included the construction of fine homes for each of the KWImF directors. These were located within easy walking distance of the institute. There were some glitches and compromises, such as the use of red brick rather than the expensive red sandstone that is so typical of Heidelberg architecture. Yet, Kuhn, Hausser and Meyerhof began work in magnificent new laboratories. This was partially due to Krehl’s successful efforts to solicit supplementary funds, including the support of the German chemical company IG Farbenindustrie and the Rockefeller Foundation in the United States. Nevertheless, cost overruns did have an impact on the overall project, most notably affecting the development of Krehl’s Pathology Institute. In a remarkably selfless gesture, Krehl cut his own budget, then postponed construction of his clinical ward so that facilities for Kuhn, Meyerhof and Hausser would be completed on time. 
Unfortunately, this choice prevented Krehl from immediately resigning his position at the University of Heidelberg Medical Clinic and turning his full attention to scientific development at the Pathology Institute. A view of the newly opened KWImF with Heidelberg in the background. Photo: Courtesy of the Archiv zur Geschichte der Max-Planck-Gesellschaft, Berlin-Dahlem. Kuhn, Meyerhof and Hausser moved into their new laboratories at the end of 1929, Kuhn arriving first at the beginning of October. Each director brought a number of assistants with him, enabling both continuity and a rapid start-up of research. Kuhn, Meyerhof and Hausser were also provided with generous budgets to immediately expand the size of their groups and the scope of their work. Krehl administered the overall institute from his office at the University’s Medical Clinic and moved a small pathology research group into the KWImF in early 1930. The opening of the institute was believed to signal a scientific renaissance for southwestern Germany. The official KWImF opening ceremony took place on May 27, 1930. It was a joyful occasion for Ludolf von Krehl, who stood proudly beside his three fellow directors and Adolf von Harnack (tragically, Harnack became ill during this visit to Heidelberg and died two weeks later). Speeches by the President of the state of Baden, the Mayor of Heidelberg and the Rector of the University of Freiburg were full of patriotic references to the fatherland and boldly predicted a ripple effect that would spread science and technology across southern Germany.
by Gerina Dunwich In contemporary Witchcraft, the cauldron is an important magical tool that symbolically combines influences of the ancient elements of air, fire, water, and earth. Its shape is representative of Mother Nature, and the three legs upon which it stands correspond to the three aspects of the Triple Goddess, the three lunar phases (waxing, full, and waning), and to three as a magical number. Additionally, the cauldron is a symbol of transformation (both physical and spiritual), enlightenment, wisdom, the womb of the Mother Goddess, and rebirth. Since early times, cauldrons have been used not only for boiling water and cooking food, but for heating magical brews, poisons, and healing potions. They have also been utilized by alchemists and by Witches as tools of divination, containers for sacred fires and incense, and holy vessels for offerings to the gods of old. If a large cauldron is needed in a ritual, it is generally placed next to the altar, on either side. Small cauldrons, such as ones used for the burning of incense, can be placed on top of the altar. In the Middle Ages, most of the population believed that all Witches possessed a large black cauldron in which poisonous brews and vile hell-broths were routinely concocted. These mixtures were said to have contained such ingredients as bat’s blood, serpent’s venom, headless toads, the eyes of newts, and a gruesome assortment of animal and human body parts, as well as deadly herbs and roots. In fourteenth-century Ireland, a Witch known as Lady Alice Kyteler was said to have used the enchanted skull of a beheaded thief as her cauldron. Also in the fourteenth century, a male Witch by the name of William Lord Soulis was convicted in Scotland of a number of sorcery-related offenses. His peculiar form of execution was death by being boiled alive in a huge cauldron. According to an old legend, if a sorceress dumped the vile contents of her cauldron into the sea, a great tempest would be stirred up. Ancient Irish folklore is rich with tales of wondrous cauldrons that never run out of food at a feast, while an old Gypsy legend told of a brave hero who was boiled in a cauldron filled with the milk of man-eating mares. It is said that bad luck will befall any Witch who brews a potion in a cauldron belonging to another. If the lid is accidentally left off the cauldron while a magical brew is prepared, this portends the arrival of a stranger, according to a superstitious belief from Victorian-era England. The cauldron and its powers are associated with many goddesses from pre-Christian faiths, including Hecate (the protectress of all Witches), Demeter/Persephone (in the Eleusinian mysteries), the Greek enchantresses Circe and Medea, Siris (the Babylonian goddess of fate and mother of the stars, whose cauldron was made of lapis lazuli), and the Celtic goddess Cerridwen, from whose cauldron bubbled forth the gifts of wisdom and inspiration. Although the cauldron has traditionally been a symbol of the divine feminine since the earliest of times, there exist a number of male deities from various Pagan pantheons who also have a connection to it. Among them are the Norse god Odin (who acquired his shape-shifting powers by drinking from the cauldron of wise blood), the Hindu sky god Indra (whose myth is similar to Odin’s), Bran the Blessed (the Welsh god of the sacred cauldron), and Cernunnos (the Celtic horned god who was dismembered and boiled in a cauldron to be reborn). Depicted on the famous Gundestrup cauldron (circa 100 B.C.) 
is the stag-horned Cernunnos in various scenes with different animals. Believed by many to be of Celtic origin, this large silver cauldron may have once been used in sacrificial rites. The use of sacrificial cauldrons can be traced to the ancient religious and magical practices of various European cultures, as well as to some shamanic traditions. Human and animal victims would first be beheaded over the cauldrons and then have their blood drained out into the cauldron, where it would be boiled to produce a mystical substance. Among the Celts, a potion of inspiration was said to have been brewed in such a manner by the priestess of the lunar goddess. The cauldron is linked to the Holy Grail – a chalice that is believed by Christians to have been used by Jesus Christ at the Last Supper. However, prior to its incorporation into Christian myth in the twelfth century, the Grail belonged to British paganism as a symbol of reincarnation and the divine womb of the Goddess. Many Witches pour a bit of ordinary surgical spirit (rubbing alcohol) into their cast iron cauldrons and light it by carefully dropping in a lit match. This is often done as part of healing rituals, invocations to the elemental spirit of fire, scrying divinations, sabbat fire festivals, and various working rituals. (Note: A quarter cup of alcohol will burn for approximately three minutes.) Be sure that the cauldron is resting securely on a fireproof stand and is not close to any flammable substances. Do not touch the cauldron while it is hot unless you cover your hands with protective oven mitts. If the fire must be extinguished before it burns itself out, smother it by covering the cauldron with a lid or by sprinkling salt or sand over the flames. Remember, whenever working with the element of fire, use caution and common sense, and respect the spirits of the flame. The sight of a cauldron blazing with fire can be very magical and mesmerizing, and when the alcohol has been steeped in aromatic herbs, a sweet but gentle incense-like fragrance is produced. To make an herbal cauldron spirit, put a small bunch of any or all of the following into a glass bottle: fresh lavender flowers and leaves, fresh mint leaves, fresh rosemary flowers and leaves, and fresh thyme flowers and leaves. Fill the bottle to the top with the alcohol, cap it tightly, and then give it a good shake. Keep it in a cool place for thirteen days, shaking it twice daily (every sunrise and moonrise). Strain it through a double thickness of muslin into a clear bottle. Cap it and store it away from heat and flame. Cauldron spirit will keep indefinitely. Using a cauldron, symbol of inspiration and rebirth, has brought new dimensions to both group and solitary work. A cauldron decorates the center of the Circle during Lesser Sabbats. An Air cauldron at a spring rite creates a misty, magical quality for the ceremony. In summer, the cauldron will flash and spark. A blue flame burns mysteriously within the Water cauldron during the autumn festival. Throughout Yule, the Earth cauldron burns steadfast and constant. During moon rites, when magick is done, we write the purpose of our working on flash papers and toss them into the burning cauldron while chanting. A working cauldron should be of cast iron, with a tight-fitting lid, three sturdy legs, and a strong handle. Season your cauldron before using it for the first time. Pour in a generous helping of salt and lighter fluid, slosh it up to the rim and wipe dry. 
For indoor use it MUST have a fireproof base or your workings will summon up yellow-coated salamander spirits from the fire department. Layer salt, wax shavings, three powdered or ground herbs, lighter fluid and ivy leaves in the cauldron while focusing and chanting. Use a candle to light it. When the smoke starts to roll, extinguish the cauldron by putting the lid on. Using tongs, put a chunk of dry ice in a small glass or ceramic bowl and place the bowl on a cloth in the bottom of the cauldron. Allow the cauldron to smoke as long as the ice lasts. The mists create excellent images for scrying. Cover the inside bottom with dirt or sand to dissipate heat. Light incense charcoal and add either salt-peter for flame and spark or flash powder for a different but spectacular effect. To assist in releasing or firing off peak energy, try using flash “bombs”. Make a small pocket in a piece of flash paper, fill with flash powder and tie with thread. The “bomb” should be about the size of your smallest fingernail. The results are spectacularly bright, so use the powder sparingly. Don’t look directly at the flash as you drop the “bomb” in the cauldron. At least seven days before the ritual, place equal quantities of three appropriate herbs in a pint glass jar. Fill the rest of the jar with Everclear (200 proof alcohol), cap tightly, and shake gently while concentrating on the purpose of the ritual. Add a chant if it feels right. Let the jar rest in a dark, warm spot and shake twice daily, charging with purpose. Before the ritual, place a fireproof ceramic or glass bowl in the cauldron. Pour in the herb mixture, being careful that none spills into the cauldron. Light with a candle to produce a beautiful blue flame. The cauldron, as the fifth elemental spirit, symbolizes inspiration, rebirth, illumination and rejuvenation. Use a Fire cauldron with salt-peter to cast a Circle. Use the mists of an Air cauldron for an initiation. Burn away hate, prejudice and negative self-images with a Water cauldron. The Earth cauldron is ideal for indoor Beltane rites. Remember to place a burning cauldron on a fireproof surface. Practice safety when using any volatile materials and you will enjoy your cauldron for many rites. The cauldron or pot symbolizes cyclical time and the lunar calendar. This is because the cauldron represents the womb of rebirth, the bowl of blood held by the Hindu Kali and other goddesses. This blood is the Wise Blood from the Cosmic Womb. It has been called soma by the Hindus, red claret by the Celts, and greal by the Welsh Bards. In Vedic myth, Indra stole the soma so that he could rule over all the gods, a reference to the stealing of importance and power from the Goddess for a patriarchal god. The Goddess and Her cauldron are the center of all feminine power and every female group. Spiritual transformation can only come through Her cauldron, or belly-womb. Ancient tradition says that only women can tap into the great power of the cauldron, for only women are made in the image of the Goddess with Her all-renewing womb of rebirth and transformation. This tradition remains in the figure of the witch and her cauldron. The cauldron is also the repository of inspiration and magick, as seen in Cerridwen’s cauldron which was sought by the Bards. The Goddess has long been considered to be the source of inspiration and the Mistress of Magick. When a true initiation takes place, the initiate willingly descends into the cauldron; when she returns to her present state, she is often filled with ecstatic emotions. 
She may sing, play music, dance, prophesy, see visions, or become creative in poetry and prose. In short, she is filled with Goddess spirit and inspiration, the type of power that only comes from the sacred cauldron. Such Bards as Taliesin stated that they regularly “drank” from the cauldron to promote their creativity and divine inspiration. Magickal Meaning: development of psychic gifts; creative talents being used; coming to terms with physical death, either through the death of someone close to you, or a very personal experience in dreams and/or meditation.
August 20th, 2021 by Fix Auto USA Everyone knows that driving on bald tires is unsafe. Even on dry, well-maintained roads, it’s an accident waiting to happen. Driving fast or on wet, slippery roads only increases the chances of an accident. The dangers of driving on worn-out tires aren’t just folklore – they’re a proven fact. The National Highway Traffic Safety Administration reports nearly 11,000 tire-related motor vehicle crashes each year. And nearly 200 people die in these crashes. Protecting yourself and your loved ones starts with understanding the role tire treads play in safe driving and knowing when it’s time to change your tires. What Do Tire Treads Do? Tire treads consist of carefully designed grooves, or channels, on the surface of the tire. They provide the traction necessary to grip the road in potentially dangerous weather conditions, such as rain, snow, ice, or mud. Without them, vehicles would be almost impossible to control on wet, icy, or slick roads. In fact, driving on bald tires in snow is one of the most dangerous situations for a driver. Thanks to tire treads, you can steer on slick roads because they force water away from the tire. This enables the tire to maintain a solid grip on the road even when the rain or snow is coming down hard. Treads also play an important role in making sure the car travels in the direction that we steer. Worn tires are unable to force water away, thereby making it difficult to control the car and steer in the right direction. Are Worn-Out Tires Dangerous? Bald tires are one of the leading causes of accidents, especially those that involve a single vehicle. What makes them so dangerous? Here’s a closer look. 1. Too Much Heat Buildup Driving creates friction between your tires and the road surface, and friction creates heat. Too much heat can cause a blowout, causing you to lose control of the car, especially at high speeds. Tire materials can withstand fairly high levels of heat. But once the surface temperature reaches a certain limit, the tires become unsafe. Treads help cool the tire by allowing air to flow in between the grooves. Bald tires don’t have the grooves provided by treads, so the heat can easily build up to unsafe levels. 2. Increased Risk Of Hydroplaning Hydroplaning occurs when a layer of water gets between the tire and the surface of the road. Modern tire tread patterns contain deep grooves that channel water away from the tire, allowing it to maintain a firm grip on the road in wet conditions. As the tread wears away over time, the grooves become shallower, making them less effective at directing water away from the tire. The shallower the grooves, the greater the risk of hydroplaning. 3. Difficult Handling In Snow And Ice Unless you have good snow tires, which have wider and deeper grooves than everyday tires, driving on a road covered with snow or ice can be risky. Many winter tires come with “sipes” – small, thin grooves or channels cut into the edges of the treads. These help improve traction by providing more surface area to grip the road. If your snow tires don’t have sipes, take them to your tire shop so they can add the extra edges for you. Keep in mind that as your tread wears away, so do the sipes. Having both in good condition will minimize the dangers of spinning out on icy roads. Bald tires in snow should be avoided at all costs. 4. Loss Of Air Pressure Another problem with bad tires is that they lose air faster than tires with good tread depth. 
Even if you check your tire pressure on a regular basis, low-tread and worn-out tires can lose their air sooner than you think. Once worn-out tires become under-inflated, they’re even more dangerous to drive. They can’t grip the road properly, even in dry conditions, which can make it harder to steer. They can cause the car to skid during sudden stops. They even put a dent in your bank account by reducing gas mileage. Under-inflation also causes the remaining tread to wear out quicker, which requires replacing your tires sooner than expected. 5. Sudden Blowouts Treads help reduce the chances of suffering a blowout while driving. Blowouts are dangerous at any speed; at high speeds, they can be fatal. Treads can’t prevent all punctures, but if you run over a nail or other sharp object, they stand a better chance of resisting a blowout than bad tires. 6. Stopping Distances Can Increase On Wet Roads There is a direct correlation between tire tread and stopping distance when you press down on the brakes. This is especially true if your tire treads are worn out and you’re driving in wet weather. You always want your tires to ensure you can quickly and securely stop your car when you step on the brake pedal. If the tires are worn out, an abrupt stop can turn into an accident. 7. Legal Violations Each state has its own guidelines relative to tire tread. In most instances, you’re required to have a tire replaced if the tread falls below 2/32 of an inch. Learn your state’s laws, so you can comply with guidelines relating to tire tread. You can also bring your car to a certified auto technician, who can inspect your tire tread and replace any defective tires. 8. Car Repair And Maintenance Costs Can Increase The longer you wait to replace worn-out tires, the more likely it becomes that they’ll contribute to an accident. It can be expensive to replace your tires, but the costs of doing so rises even further if you wait too long to do so. So, by being proactive in your efforts to replace defective tires, you may be able to avoid additional vehicle maintenance and repairs. How To Tell If Your Tires Are Bald A simple visual inspection is all it takes. If the treads are gone and the surface of the tires is smooth, you have bald tires. If you forget to look, your tires will let you know by losing traction on wet roads, skidding when you come to a sudden stop, or becoming harder to steer at high speeds. Your ride will also be less comfortable, as worn-out tires have nothing to cushion the impact of bumps, potholes, or hitting small objects in the road. Other signs that your tires are bald include: Tires can make a humming sound that changes with speed. This indicates there may be a chopped tread, due to lack of rotation or a failing suspension component. Meanwhile, a thumping sound is a sign that there may be a flat spot on a tire. The spot can be caused by a defect in the tire or locking up the brakes. Wobbling can occur when you’re driving at low speeds. It can cause you to feel like your car is bouncing up and down. You may see your steering wheel move on its own. Wobbling occurs due to separation of the internal belts. In this instance, pressurized air presses on a tire’s tread. This can lead to a large bubble on the tread, resulting in wobbling. There may be times when tire tread develops a defect that cannot be balanced out. This can be caused by a small separation of the steel and polyester bands inside a tire. At this point, it may feel like your tire is out of balance. 
Yet, no matter how many times you balance your tire, the problem will persist. You cannot repair your tire when this happens. Instead, you’ll need to have your defective tire replaced. Do You Need To Replace Bad Tires Right Away? How long can you drive on bad tires? That depends on several factors. If they’re completely bald, get to a tire store and change them right away. Otherwise, you’re putting your own life and those of any passengers at risk. Sometimes, only one or two tires may go bald in a set of four. This is a sign that your car may be out of alignment, especially if the baldness is just on the edges of the tire. In this situation, it’s a good idea to have your vehicle inspected for mechanical problems before replacing the tires. Otherwise, your new tires may lose their tread sooner than they should. If you only have one bad tire, it can be tempting to let it go until all tires need replacing. This can cause several problems that compromise safety. It will cause the other tires to wear unevenly, which can result in steering problems. It increases the chances of a blowout in the worn-out tire. It can also lead to skidding during sudden braking or on slick roads. It is never a good idea to drive on bald tires – even just one. What Counts As A Worn-Out Tire? Tires don’t have to be completely bald to be considered worn tires. Once the treads are worn down to a certain level, they become unsafe to drive. It’s easy to tell when your tire treads are wearing low. However, you need to inspect them on a regular basis, which can be easily done in just a few minutes. Tire professionals recommend checking them at least once a month. Here’s what to look for: 1. Not Enough Tread Depth Instead of “eyeballing” the tread depth, always use a tire tread gauge. It doesn’t cost much, is simple to use, and can be easily stored in your glove compartment. Another way to evaluate tread depth is to insert a penny into the tread with the “heads” side facing you. If you can see all of Lincoln’s head, your tread is too low and it’s time to get new tires. 2. Visible Indicator Bars Today’s tires are made with tread indicator bars. These flat rubber bars are built right into the tire, but you can’t see them when the tires have plenty of tread. As the tread wears down over time, the bars gradually become visible. When you can clearly see them, it means the tread has reached unsafe levels. 3. Sidewall Cracks Tire sidewalls tend to dry out as the miles pass. This can lead to cracks or cuts that compromise the structural integrity of the tire. Very small cracks are common on worn tires, and don’t pose much of a threat. Large cracks should not be ignored. Any time you spot one, head to your tire shop for a professional evaluation. Worn-out tires can also develop bulges and blisters that create weak spots on their surfaces. These can increase the chances of a sudden blowout and can also lead to skidding, hydroplaning, or losing control of your car by reducing the tire’s ability to grip the road. As with large cracks, bulges, and blisters should never be ignored. How To Avoid Worn-Out Tires Some of the best things you can do to avoid wearing out your tires include: - Avoid taking curves too fast and other unsafe driving behaviors that can otherwise compromise the integrity of your tires. - Monitor and maintain the appropriate tire pressure. - Get a wheel alignment. A proactive approach to tire care is key. 
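The tread-depth checks described above ultimately come down to comparing a gauge reading against a couple of thresholds. The short Python sketch below is only an illustration of that comparison: the 2/32-inch figure is the legal minimum mentioned in this article, while the 4/32-inch "replace soon" level is a commonly cited rule of thumb and not a figure from this post. Measure the actual depth with a tread gauge rather than estimating.

```python
# Simple tread-depth check. The 2/32 in value is the legal minimum cited above;
# the 4/32 in "replace soon" threshold is an assumed rule of thumb.
LEGAL_MINIMUM_IN = 2 / 32
REPLACE_SOON_IN = 4 / 32

def tread_status(depth_in: float) -> str:
    """Classify a tread depth (in inches) read from a tread gauge."""
    if depth_in <= LEGAL_MINIMUM_IN:
        return "worn out - replace immediately"
    if depth_in <= REPLACE_SOON_IN:
        return "low - plan on replacing soon"
    return "adequate tread remaining"

for depth in (0.05, 0.10, 0.25):   # example gauge readings in inches
    print(f"{depth:.2f} in: {tread_status(depth)}")
```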
If you do your part to guard against worn-out tires, you can keep your tires in top condition now and in the future. Be Tire-Safe: Rotate Good Tires And Replace Bad Ones Tire professionals recommend rotating your tires on a regular basis. This will extend the life of the tire and keep you safer by having the tread wear evenly on all four tires. As a general rule of thumb, plan to rotate your tires every six months or 7,500 miles. When purchasing new tires, follow the manufacturer’s recommendations for how often to rotate the tires. Your driving habits and local weather conditions may dictate more frequent rotation. Sooner or later, your worn-out tires will need replacing. For safety’s sake, sooner is always better than later. Never replace your tires based solely on the estimated lifetime mileage. Instead, learn how to measure tread depth and check for signs of tire damage on a monthly basis. Finally, if you’re unsure if you need or will be needing new tires, it’s best to consult your local tire shop immediately. Driving on bad tires puts you and everyone else on the road at risk, so be a good citizen and keep your tires in good condition at all times. This blog post was contributed by Fix Auto Yorba Linda, a leading industry expert and collision repair shop servicing the northeastern Orange County.
This post describes an experimental procedure for the general chemistry laboratory student that conveniently illustrates the electroplating process. Students plate a thin film of nickel on to a screen-printed carbon electrode. The plating process is accurately controlled using a traditional three-electrode electrochemical cell arrangement. Each screen-printed electrode pattern includes all three required electrodes (working, reference, and counter). Nickel is plated onto the working electrode from a conventional Watts nickel plating solution. The light weight electrode pattern is massed before and after deposition of the metal. By massing the patterned electrode before and after deposition, the student obtains the mass of nickel actually deposited on to the working electrode. This mass result used with Faraday’s Law allows the student to compute the efficiency of the electrodeposition process. Electroplating is an important branch of electrochemistry with many applications in modern technology. For example, the automobile industry relies upon nickel and chromium electroplating to protect steel from corrosion. Noble metal electroplating of the coinage metals (see Figure 1) is used for decorative purposes such as fabricating jewelry. Electroplating is also an essential tool for manufacturing state-of-the-art electronic devices. Electrodeposited copper metal interconnects within integrated circuits play an important role in the devices used to build the information superhighway. In this lab experiment, basic principles of electroplating are explored by plating thin films of nickel onto screen-printed carbon electrodes. Figure 1. Screen-Printed Carbon Electrodes with Nickel, Copper, Gold, and Silver (left to right) This laboratory experiment makes use of inexpensive screen-printed electrodes (SPEs) as a convenient (and disposable) alternative to larger-sized bulk metal electrodes. Each electrode consists of a carbon working electrode (WE), a carbon counter electrode (CE), and a silver/silver chloride reference electrode (REF). The student voltammetry cell (see Figure 2) is ideal for vertically-mounting the SPE, thereby making observation of the plating process easy to visualize. Figure 2. Compact Voltammetry Cell Components: (a) USB-Style Cell Cable; (b) Cell Grip Mount; (c) Screen-Printed Electrode (SPE); (d) Scintillation Vial; and (e) Cap For accurate control of the electrode potential used for electroplating, it is recommended that the working, counter, and reference electrodes be connected to a modern three-electrode potentiostat (such as the Pine Research WaveNow). Such potentiostats are capable of applying and maintaining a constant potential between the working and reference electrodes while at the same time accurately measuring the flow of charge (current) through the working electrode. As the potential of working electrode moves toward more negative values, electrons at the working electrode surface become more readily available to reduce ions in the solution. Nickel(II) cations in solution are reduced to nickel metal. The standard half-cell reaction for this process can be described as follows: The reduction potential for Ni(II) (see Equation 1) is E0 = -0.25 V vs. SHE. Under acidic plating conditions (pH = 3), the reduction of hydronium (H+) to form hydrogen gas is the main side reaction that consumes additional electrical charge (see Equation 2). In electrochemistry, the reduction potential of hydronium is arbitrarily set to zero; i.e., E0 = 0 V vs. SHE. 
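Because the potential applied in this experiment is specified against the on-board Ag/AgCl reference rather than against SHE, it can help to see how the tabulated standard potentials translate to the reference actually used. The Python sketch below is a rough illustration only: the +0.20 V offset assumed for the Ag/AgCl couple is a nominal value (the true offset depends on the chloride concentration in the reference electrode), so treat the converted numbers as approximate.

```python
# Rough conversion of standard reduction potentials from the SHE scale to an
# Ag/AgCl reference scale. The offset is an assumed nominal value (~+0.20 V);
# the exact number depends on the reference electrode's chloride activity.
E_AGCL_VS_SHE = 0.20  # V, assumed offset of Ag/AgCl vs SHE

standard_potentials_vs_she = {
    "Ni2+/Ni": -0.25,  # V vs SHE (value quoted in the text)
    "H+/H2":    0.00,  # V vs SHE (defined as zero)
}

for couple, e_she in standard_potentials_vs_she.items():
    e_agcl = e_she - E_AGCL_VS_SHE
    print(f"{couple}: {e_she:+.2f} V vs SHE  ->  {e_agcl:+.2f} V vs Ag/AgCl")

# An applied potential of -1.2 V vs Ag/AgCl is therefore well negative of the
# Ni2+/Ni couple, providing ample driving force (overpotential) for deposition.
```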
Faraday’s law states that the extent of chemical reaction (i.e., mass of electroplated metal, m) caused by the flow of current is proportional to the amount of electric charge (Q, in coulombs) passed through the electrochemical cell (see Equation 3): m = QM/(nF). In Equation 3, M is the molar mass of the deposited metal, n is the number of electrons transferred in the reaction (oxidation state of the metal ion), and F is Faraday’s constant (96,485 C/mol). It is common to simplify Equation 3 by introducing a new term, Z, called the electrochemical equivalent, which is a constant of proportionality (see Equation 4): m = ZQ, where Z = M/(nF). Therefore, the electrochemical equivalent, Z, for the case of electroplating of Ni from Ni2+ solution is 3.041 x 10^-4 g/C. This value means that consumption of one coulomb (C) of electric charge can electrodeposit a maximum of 3.041 x 10^-4 g of nickel onto the cathode. Side reactions often reduce the overall electroplating current efficiency below 100%. One can calculate the electrodeposition (plating) efficiency (W) by taking the ratio of the actual mass plated to the theoretical mass plated (see Equation 5): W = (actual mass / theoretical mass) x 100%. The actual plating efficiency is always less than 100% due to various unwanted side reactions that also occur, consuming additional electric charge. Prepare 1 L of standard Watts nickel plating solution as follows: add 290 g of nickel sulfate hexahydrate (NiSO4 • 6H2O), 30.0 g of boric acid (H3BO3), and 8 g of sodium chloride (NaCl) to a 1 L volumetric flask and dilute to the mark with distilled water. This nickel plating solution is rather stable. 4.2 Initial Mass Measurements Subsequent analytical calculations depend on accurate initial measurements. The screen-printed electrode must be massed prior to experimentation to determine the mass of nickel plated. Rinse the SPE with deionized water and allow it to dry. You can speed up the drying process by gently dabbing the SPE with a lab wipe or blowing a light stream of N2 over the surface of the SPE. After the electrode has dried, mass the entire SPE on a tared analytical balance. Record the mass to four significant digits (if available) in your notebook as the initial mass of the SPE. The cell (see Figure 2) should be filled with approximately 10 mL of Watts plating solution. Insert the SPE into the cell grip mount (the cell grip mount is two-sided, allowing SPE electrodes to be loaded in either direction). Use one or two spacers on the backside of the SPE to ensure the electrode is held tightly in place. Place the grip mount loaded with the SPE into the cell cap and connect it to the vial containing the Watts plating solution. Connect the electrode to the potentiostat. When viewing the SPE with the blue surface facing downwards, connect the mini-USB-style plug to the receptacle to the left of the SPE face. Create a “Bulk Electrolysis (BE)” experiment in AfterMath. Set the parameters to apply a negative potential (-1.2 V) to the carbon working electrode for 60 minutes (see Figure 3). The potential of the working electrode is held constant with respect to the Ag/AgCl reference electrode during the electrolysis experiment. This means the electrode potential is 1.2 V more negative than the potential of the silver/silver chloride reference electrode. This potential is sufficiently negative (cathodic) to cause nickel ions in the plating solution to be reduced to metallic nickel at the surface of the carbon working electrode; thus, nickel naturally forms a thin film on the electrode surface. Figure 3. AfterMath Bulk Electrolysis Parameters 
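Before running the electrolysis, it is worth scripting the arithmetic behind Equations 3 and 4 so the later efficiency calculation is just a substitution. The Python sketch below is not part of the original procedure; it computes the electrochemical equivalent Z for nickel and the theoretical deposited mass for an assumed example charge of 5 C. Substitute the charge you actually measure.

```python
# Faraday's law for electrodeposition: m = Q*M/(n*F) = Z*Q
M_NI = 58.69        # g/mol, molar mass of nickel
N_ELECTRONS = 2     # electrons transferred per Ni2+ ion reduced
F = 96485.0         # C/mol, Faraday's constant

Z = M_NI / (N_ELECTRONS * F)           # electrochemical equivalent, g/C
print(f"Z = {Z:.4e} g/C")              # ~3.041e-04 g/C, matching the text

charge_passed = 5.0                    # C, assumed example value
theoretical_mass = Z * charge_passed   # g of Ni if efficiency were 100%
print(f"Theoretical Ni mass for {charge_passed:.1f} C: {theoretical_mass:.2e} g")
```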
Observe the working electrode surface at regular intervals during the electrolysis. During the early minutes of the plating process, you should observe white nickel gradually covering the black carbon working electrode surface. Record observations in your notebook and include the electrolysis duration time that has elapsed when making these observations. AfterMath will monitor the current at the working electrode throughout the plating procedure. The current is a measure of how fast electrons flow from the working electrode and reduce chemical species in the solution, including, but not limited to, the nickel cations. Initially, you will observe a larger current that decays fairly rapidly (exponentially). As more nickel ions are reduced at the electrode surface, the solution around the working electrode becomes depleted of additional nickel. Therefore, the current, a measure of electron transfer at the working electrode (reduction of nickel ion to nickel metal), will decay slowly over time. If left long enough, no additional nickel would plate onto the electrode unless the solution were stirred. 4.4 Post-Plating Mass Measurement After an hour of electrolysis, a visibly thick layer of nickel should have formed on the working electrode surface. Carefully remove the SPE from the compact voltammetry cell and dry the electrode as before, being gentle if using a lab wipe. The ideal method to dry the surface is with a gentle stream of N2 gas or compressed air. After the electrode is dried, obtain the mass of the SPE on a tared analytical balance. Record the mass to four significant digits (if available) in your notebook as the final mass of the SPE. As described, during a constant-potential bulk electrolysis, current is measured as a function of time. Your data will resemble a typical chronoamperogram (see Figure 4). Figure 4. Current vs. Time Plot (Chronoamperogram) for Bulk Electrolysis Experiment From Faraday’s law, you know that the total amount of charge passed during an electrolysis is proportional to the mass of metal reduced at the working electrode surface (see Equations 3-5). During analysis, you will determine the charge passed during the total electrolysis using AfterMath tools. Calculate the mass of nickel plated onto the electrode surface. Record this value in your notebook. Find the amount of charge passed during the electrolysis. This can be accomplished in AfterMath. The first step in analyzing your results is to obtain the charge data from the chronoamperogram. Charge is the area beneath the current vs. time plot. Current has the unit of amperes (A), where 1 A = 1 coulomb/second (C/s). If you were to multiply each current measurement by the elapsed time (C/s x s = C) and sum the charge across the entire experiment, you would determine the total charge passed during electrolysis. The AfterMath software provides an easy way to measure charge from a chronoamperogram, using the area tool (see Figure 5). Measure the charge (area) for your nickel plating electrolysis. Record the total charge in your notebook. It may take some time to become familiar with the tools in AfterMath. Consult with your instructor if you have any difficulties using the tools to determine charge. Figure 5. Determination of Charge from a Chronoamperogram in AfterMath 6.3 Calculate Electrodeposition Efficiency For the final calculation, you will have to combine all pieces of data collected thus far. Review Section 3 to gain familiarity with Faraday’s law. 
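If you would like to cross-check the charge reported by AfterMath's area tool, you can export the current-time data and integrate it yourself. The sketch below is a generic illustration, not a documented AfterMath export format: the time and current values are made-up placeholders, and the measured mass is an assumed example. It applies the trapezoidal rule to obtain the total charge, then uses Faraday's law to estimate the plating efficiency that Equation 5 asks for.

```python
# Numerical estimate of charge and plating efficiency from current-time data.
# All numeric values below are placeholders; replace them with your own
# exported chronoamperogram points and your measured mass gain.
times    = [0.0, 60.0, 120.0, 180.0, 240.0]                # s
currents = [-4.0e-3, -2.5e-3, -2.0e-3, -1.8e-3, -1.7e-3]   # A (cathodic)

# Trapezoidal rule: Q = sum of 0.5 * (i_k + i_{k+1}) * (t_{k+1} - t_k)
charge = sum(0.5 * (currents[k] + currents[k + 1]) * (times[k + 1] - times[k])
             for k in range(len(times) - 1))
charge = abs(charge)                       # magnitude of charge passed, C

Z = 58.69 / (2 * 96485.0)                  # g/C, electrochemical equivalent of Ni
theoretical_mass = Z * charge              # g of Ni expected at 100% efficiency

measured_mass = 1.2e-4                     # g, assumed mass gain of the SPE
efficiency = 100.0 * measured_mass / theoretical_mass
print(f"Q = {charge:.3f} C, theoretical mass = {theoretical_mass:.2e} g, "
      f"efficiency = {efficiency:.1f}%")
```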
Pay attention to units. Calculate the electrodeposition efficiency using Equation 5. - What is the color of nickel metal? When two electrons are removed from nickel, what color will result? - Did you observe anything on the surface of the electrode as the potential was applied and the deposition occurred? What did you see and what do you think it could be? (Hint: A molecule of water is composed of two atoms of hydrogen and one atom of oxygen) - Was your average plating efficiency less than or greater than one hundred percent? Why do you think that this was the case? Bard, A. J.; Faulkner, L. A.  Electrochemical Methods: Fundamentals and Applications, 2nd ed. Wiley-Interscience: New York, 2000. Chuan, Y.; Chyan, O.  Metal Electrodeposition on an Integrated, Screen-Printed Electrode Assembly.  J. Chem. Educ., 2008, 85(4), 565.
active zeal by being appointed first lord of the treasury and chancellor of the exchequer, in the cabinet ostensibly led by his brother-in-law and early friend, Viscount Townshend. A severe illness followed his elevation, and the prosecution of the rebels, a task in which he had laboriously aided. In the interval of his absence the septennial bill was introduced into parliament; an act which has justly been looked on as one of the measures of his government, from his assistance in its preparation previous to his illness, and which is certainly strikingly characteristic of an administration which turned all its measures not on general principles of policy, but on the means of fortifying their party. On the visit of the king to his native country, the earl of Sunderland, assisted by Sir William Wyndham, a tory, but the friend of Townshend and Walpole, began to rise in personal influence with the monarch, and the tories viewed with pleasure and expectation the balance almost equally held between two parties among their enemies. Townshend, when the power of his new opponents was fully established, quickly exchanged his premiership for the lord-lieutenancy of Ireland. Walpole, who might have remained ostensible head of the administration, preferred being powerful in opposition to being weak in the cabinet. On the 10th of March, 1717, he called on the king to deliver up the seals of office: his majesty, anxious to retain so useful a friend, is said to have thrown them into the minister's hat, and to have familiarly returned them ten times before he would finally accept the resignation. After his resignation, Walpole brought before the house, as 'a country gentleman,' a plan for reducing the national debt by means of a sinking fund, a measure which deserves notice as having affected latter ages. A sinking fund has lately been shown to be mere borrowing from one to pay to another, and therefore in principle fallacious; but the very ignorance of its real power gave it in the hands of Walpole two beneficial practical effects. First, the debts of government were calculated at an average to bear seven per cent. interest, while a sinking fund could be borrowed at four; and secondly, the promised advantages of the system raised the credit of government securities, and enabled the nation to dictate terms to creditors not anxious for immediate repayment. There is reason to believe that the acuteness of Walpole afterwards pointed out to him fallacies in the system which he did not think fit to acknowledge. In 1733, in despite of a powerful and watchful opposition, he took from the sinking fund half a million for the current services, an act which Coxe and others have looked upon as the chief blot in his administration. "On this occasion," says his biographer, "he advanced this remarkable position, that the situation of the country, and the case of the public creditors, was altered so much since the establishment of the sinking fund, that the competition among them was not who should be the first, but who should be the last to be paid; an assertion which none of the opposition ventured to contradict, and therefore may be considered as true." The minister may have hesitated to add, that since promulgating the scheme, he had found reason to doubt the supposed omnipotence of compound interest, on which it was founded. Walpole, on resigning, made a candid declaration that he • Swift's Works, vol. xvi. 302. 
would not impede the measures of a whig government; but either his passions or his interest forbade him to preserve his resolution, and he counteracted their measures in the purest spirit of an opposition ;' but among other such acts, it must be recorded to his honour, that he opposed the bill, patronised by the king from a jealousy to his son, for limiting the number of peers and making Britain an aristocracy. When it was proposed to sell the irredeemable annuities to the South sea society, Walpole was one of those few members who had presence of mind sufficient to maintain that offers should be accepted from the other trading companies before the dazzling measure was adopted, and he finally objected to treating with the South sea company in preference to the bank, from the former body being unlimited in the price of their stock. In the meantime, finding either that his foresight and opposition were dangerous enemies to their measures, or that he might be a useful aid, the ministry, on the 6th of May, 1720, restored him to his old post of paymaster of the forces. On the sudden fall of the price of stock, and the consequent dread of a national bankruptcy, Walpole was appealed to by the nation and the monarch as the only man capable of restoring confidence; and on his announcing a plan for the adjustment of the claims, stock rose to a price somewhat beyond its natural value, though far beneath that at which the insane avarice of the nation had previously ranked it. An attempt, without the sanction of legislative authority, to retrieve the credit of the company, by the bank agreeing to circulate a specified amount of the company's bonds for one year, having failed, (the bank resiling from the contract on the ground that the minute was deficient in legal formalities,) Walpole secured the adoption of his proposals by a legislative act, which sanctioned an agreement unwillingly entered into by the bank and the East India company, to ingraft with their own a portion of the stock of the South sea company. The suggestion of this plan was owing to Jacombe, under-secretary at war, and in the excitement which the house of commons suffered on the subject, it required all the tact and influence of Walpole to put it in practice. The projectors of the scheme, and the ministers who fostered it, were the opponents of Walpole, and he displayed the moderation or the foresight of his disposition in shielding them from the popular rage which doomed them to destruction. With some temporary sacrifice of popularity, he obtained the acquittal of Sunderland, on whose ruin he afterwards rose ; and he was presently replaced, with his brother-in-law, at the head of the cabinet. On the discovery of the machinations of the Jacobites in 1722, he had an opportunity of showing his moderation, when a leader of the councils, by merely giving additional protection to the Hanoverian dynasty, and driving from the country the factious priest who had lent the aid of his great talents to the conspiracy. Of the opposition over which Walpole had triumphed at the fall of the South sea scheme a remnant remained, from which arose a powerful and vigilant body of opponents who never permitted him to perform a ministerial act uncanvassed, and after the most protracted and bitter warfare ever known in political history, finally drove him from the helm. 
Carteret, who considered himself as the successor to the fallen interest of Sunderland and Stanhope, divided the cabinet against Walpole and Townshend; but after a first unsuccessful attempt, through the influence of the mistresses of the king and the Hanoverian favourites, he sunk before their superior influence. Walpole, now in the height of his influence, having previously declined a peerage, which was bestowed on his son, was, iust after the termination of the parliament in 1724, created a knight of the order of the bath, and in 1726 he was installed a knight of the garter, an ornament which had before been only conferred on one com With some inconsistency, Walpole encouraged the return of Bolingbroke in 1725, and moved for the repeal of the bill of attainder which he had himself brought in in 1716. Whatever were his expectations from this measure he was disappointed; the brilliant Jacobite, chagrined at not being restored to the influence and rank of his lost peerage, became fretful and turbulent,—he joined in intrigues against the ministers, which they had power just sufficient to overcome,—and uniting the honesty he could assume, with that which was possessed by his coadjutor, Schippen, headed a party, which, without much prospect of overcoming without the aid of a rebellion, was still powerful enough to sting. In the meantime danger was threatened to Walpole from a more distant quarter, which he dexterously parried. A new coinage of halfpence was requisite for Ireland, and the necessities of the province were made the medium of conferring a favour on the friend of a royal mistress. William Wood, a miner and proprietor of iron-works, obtained a patent to coin halfpence and farthings to the extent of £100,000 sterling. There is no doubt that the patentee would have performed the contract with honesty ; but the national pride was roused at the kingly right over it as a conquered nation being put into the hands of a mechanic; and Swift, in the renowned · Drapier's Letters,' roused tlie nation against the insult by representing the halfpence as deficient in value, turning gradually, after he had thus roused the feelings of the common people, to the real cause of grievance, the putting into the hands of foreigners the exercise of every description of influence in Ireland. The underlings of the government threatened in the name of their leader; but Swift shows a disposition to be courteous to Walpole, and allows so powerful a man to avoid the consequences, by personally acquitting him of connection with the act. Walpole appears to have understood the hint, for he was not a man who would brave a nation for the defence of a dependant on his ministry. He approached the abolition of the patent by degrees, reducing the issue to £40,000, and finally contrived to send his rival Carteret, who had watched with pleasure the fomenting of disturbances, which might shake the stability of the minister, to settle the matter as lord-lieutenant of Ireland. The good opinion of Swift towards Walpole was of short continuance; he had an interview with him, of which he has left a full account, in which he endeavoured to lay before him the injustice and folly of treating Ireland in every respect as a conquered kingdom. The information was coldly and haughtily received,- ,-a circumstance which has been accounted for on the authority of Sir Edward Walpole, by the minis • Drapier's Letters, No. 4. • Scott's Life of Swift, p. 295. 10'See a Letter to the Earl of Peterborough, Works, chap. xvii. p. 67. 
ters having intercepted a letter of the dean to Dr Arbuthnot, mentioning the means he was to use for gaining his end, and observing that he knew no flattery was too gross for Walpole.” The treaty of Vienna, supposed to have been so dangerous to the peace of Britain, involved Townshend and Walpole in much odium from the opposition; but the burden chiefly fell on the former, who better understood, and generally managed the foreign department. But a greater danger threatened the stability of Walpole's ascendancy froin the death of George the First. As that monarch's prime minister, he was compelled to oppose the prince, and is said to have volunteered some expressions of contempt towards him, which were duly retailed and exaggerated. For several days in the opening of the new reign, he incurred the neglect of a discharged minister. But his powers in supporting a civil list were known to the king, and he had obtained a firm friend in the person of the queen, to whom, among his other means of recommending himself, it must not be forgot that he offered a jointure of £100,000 a-year, while his rival, Sir Spencer Compton, could not venture to offer more than £60,000. Sir Spencer yielded the post to the superior powers of his rival, and Walpole was once more at the head of the treasury. From the accession of George the Second, Walpole, from his personal influence at court, was virtually the sole prime minister, and the power of Townshend gradually decreasing, jealousies and contentions originated between the two brothers. An unministerial scene which took place during a dinner party at the house of Colonel Selwyn—in which a remark by Walpole, hinting a distrust of the sincerity of Townshend, roused that fiery nobleman to a threat of personal violence-finally terminated their intercourse. Townshend left the cabinet with an honour almost unsullied, and never condescended to indulge in opposition. From the period when Walpole ruled the cabinet to his resignation, his acts are so entirely the events of history, and so well known as leading features of the times, that a brief biographical notice can only glance at such as are most broadly shaded by his personal character, and the principles with which he governed. In 1733 he formed the celebrated plan of extending the method of collecting revenue by excise, to the duties on wine and tobacco. Sir William Wyndham, and Pulteney, who, by his vast wealth and his talents as a party-debater, now stood foremost and greatest in the opposition, became aware of his views, and sounded the trumpet of alarm through the land ; the various speakers of the opposition obscurely hinted at a plan devised, and about to be produced, for the secret destruction of British liberty, and Walpole was compelled to divulge his plan before he was prepared to attempt a legislative measure on its principles. The great leading causes for the alteration he maintained to be the partiality of the existent system, the opportunities of evasion, and the necessary venality of the public officers. The whole oratory of the opposition was thundered forth in denunciation of the scheme,—the clamours without were loud and ominous,—and it was finally dropped: the minister, for the purpose of keeping himself in office, making a practical admission of the great principle, that even a system which the propounders of it may consider unexceptionably excellent, must not be enforced " Letters and Miscellanevus Papers of Barre Charles Roberts, pp. 20, 21. against the general voice of a people. 
Along with the financial measure, one which can more unhesitatingly be pronounced salutary to the commercial interests of the country, was lost for a period—the system of bonding imported goods for payment of the duties; and in the full enjoyment of this great facility to commerce, the British public have at this day to thank Sir Robert Walpole for the best gift he has left to posterity. It was generally the object of the opposition to propose motions, the rejection of which would involve the minister in odium or unpopularity,--and in admitting or opposing them, the minister had to choose whichever side was most conducive to the government in being, and at the same time sure of a majority. “ It will be advisable,” says a memorandum by one who bitterly opposed the minister, “to propose easy whig points, to bring off honest well-meaning people,--and render others inexcusable, such as a reasonable place-bill to exclude those of lower ranks in the treasury and revenue, such as clerks, &c. from sitting in the house of commons. A bill to make the officers of the arny for life, or quamdiu se bene gesserint, or broke by a council of war. These patriotic principles were diligently pursued and opposed in a corresponding spirit. To have admitted either the place or the pensionbill to pass, would have struck a deadly blow at that system of influence which Walpole had so adroitly framed to succeed the arbitrary power of the crown. The pension-bill passed the commons in 1730, but was thrown out by the lords ; and the minister finding such a plan likely to save a share of his popularity, the place-bill, when introduced in a later period of his administration, “ was not opposed, because out of decency it is generally suffered to pass the commons, but is thrown out in the lords.” 1 The attempt to deprive government of the power of dismissing officers in the army he likewise resisted, for he had made use of the power, and had not hesitated to discharge those who opposed him. To the repeal of the test act-a measure attempted not only by the opposition whigs, but in the very purest spirit of party, and by the tories also—he appears to have had no other objection but the danger of offending the church, and is said to have been personally partial to the measure. He was in the habit of telling the dissenters, that whatever were his private inclinations on the matter, the attempt was improper, and the time was not yet arrived. “ You have so repeatedly returned this answer,” replied Dr Chandler, principal of a depu. tation of the dissenters, “ that I trust you will give me leave to ask you when the time will come ?" “ If you require a specific answer," said the minister, “ I will give it you in a word,-never." His inge. nuity enabled him, however, by the annual act of indemnity, to save the dissenters from oppression, and to preserve the church of England from a dangerous odium, while its supremacy was fully admitted. At length, after bathed efforts and repeated disappointments, the opposition began gradually to undermine the great power so long assailed in vain. The death of Queen Caroline, in 1737, struck the first sure blow at Walpole's influence, and the enmity of the prince regent served as a marked rallying point to his opponents. In 1738, when the alleged outrages of the Spaniards on British ships roused the popular feeling of " Memorandum in the handwriting of Alexander, Eail of Jlarchmont. Marchmont Papers, vol. ii. p. 14. 13 Horace Walpole to Horace Mann.
<urn:uuid:a640bc8a-e4bf-41ad-b52f-817f07f02e5b>
CC-MAIN-2022-33
https://books.google.am/books?pg=PA149&vq=%22a+middle-sized+spare+man,+about+forty+years+old,+of+a+brown+complexion,+and+dark+brown%22&dq=editions:NYPL33433000455547&lr=&id=cysAAAAAQAAJ&hl=hy&output=html_text
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00405.warc.gz
en
0.9832
3,758
2.9375
3
Each and every IELTS aspirant often uses the terms Citizen and Permanent Resident, but did you know that these two statuses have a big deal of difference between them? Let us help you clear this out. Go through this article to get a clear understanding of the differences and similarities between Permanent Resident and Citizen. What is Permanent Resident? Getting citizenship status lawfully gives you permission to reside in Canada for a defined period of time. That implies that you are no longer a visitor in the world, but rather a legal alien. The permanent resident status goes a step further, giving you the freedom to live permanently. For as long as they wish, a lawful PR may remain in Canada. That being said, Canada’s permanent resident has a range of disadvantages relative to Canadian residents. Firstly, they stay legal residents of their country of birth. They’re not going to get a Canada visa or voting rights. Spending a year off from Canada may include them in deportation proceedings and may face the risk of being removed from Canada Both permanent residents, after a certain amount of time (usually 5 years), are entitled to qualify for Canadian citizenship. Through their time as permanent citizens, they must display strong moral integrity and a thorough analysis of Canada’s history and governance in order to become a naturalised citizen of Canada. Permanent Resident Card – Rights and Benefits Legal permanent residents in Canada possess a variety of advantages over the visitor and refugee populations. To begin with, they will get the coveted Green Card, which also requires the right to work in Canada. They are often permitted to embrace jobs or to start a legal enterprise. They can also bring their immediate family members (spouse and younger kids) to live with them. Permanent residents can leave/enter Canada at will without the fear of being rejected by immigration authorities. Permanent residents have the right to request for government-sponsored financial support for academic purposes. They have exposure to security clearances and exclusion from export controls. They are also eligible for Social Security payments, extra security income and Healthcare benefits. Canada Permanent Resident – Options So if you’re curious about how and when to apply for Canada permanent resident visa from India, let us just start with the simple options for Indians to begin the process. The options for immigration are: - Quebec Immigration - Transition Immigration - Business Immigration - Federal Economic Class - Provincial Economic Class - Family Class Sponsorship Today, based on the needs and requirements for permanent residence, you need to pick the right visa scheme. Selecting the right visa programme would guarantee better odds and a far easier process for a Canada permanent resident visa. Be certain and make the correct choice, since it will be an exhausting procedure to change programmes, and you could be refused a visa if you pick the wrong one, so be sure you look at all the specifics about each programme and then consider what suits you best. Canada Permanent Resident – Process You need to pick your software first. There may be separate programmes in each programme. The Federal Economic Class Initiative comprises the Federal Skilled Worker Program, the Federal Skilled Trader Program and the Canada Experience Program. To apply for a visa through the Federal Economic Scheme, you would need to register via the Express Entry system. This method allows applicants to build and request a profile. 
Points are given to each profile in a variety of categories. People with the lowest and highest marks are invited to qualify for services in the Federal Economic Class. So if you want to use the federal skilled worker service, you would need to build a profile to start the registration process. How to Start the PR Process? You need to build an account or profile for your Express Entry scheme. You can obtain an invitation to apply once you submit your profile. With modern requirements, this invite is valid for a span of 90 days. You will need to start applying for an online form where you will need to request a list of documents that are needed to acquire a visa. You need to make sure that all the responses you fill-up the form are absolutely right and accurate, any contradictions will result in your submission being rejected. You need to fill in all the details carefully, and you can save the form if and when you want to, and you can start from where you dropped off. Canada PR Process Duration After you upload your form, the authentication process will begin. At this point, you might wonder how long the PR process is going to take in Canada? The turnaround time for Canada PR is also usually 6 months in the express entry method. For some forms of visas, there might be variations and modifications for provincial visa applicants. You’ll still need to offer your biometrics in this 6 month period. You’re going to be told what to do all the time. The organizations will be in contact with you through your account online. If required, you will need to include more documentation, and you will also need to give an interview if considered necessary. They’re going to give you updates to your application throughout. You will be provided 30 days to submit your biometrics after you have received a letter to provide your biometrics. It is proposed that the biometric payments are charged at the same time as the registration fees are paid such that there is no lag. Canada Permanent Resident: Eligibility Express Entry Eligibility - Candidates must be admitted into the Express Entry pool of IRCC and have an Express Entry identity number and a career search validation code. - Candidates must provide the requisite settlement money to support themselves and their families in Canada. - They must have a true, secure, full-time work offer from the Canadian employer Skilled Worker Eligibility - Applicants must have at least six months of work experience certificate in the nominated role. - Training and work experiences must fulfil all the criteria of the career. - Candidates must have the following minimum language skills in hearing, chatting, reading and writing at level 0 and A, CELPIP (English): level 7, IELTS (English): level 6, TEF (English): level 4 - Candidates must have the following minimum scores in language skills which are listening, speaking, reading and writing at level B, CELPIP (English): level 5, IELTS (English): level 5, TEF (English): level 3 Nominee Program Eligibility - Must have a rating of at least 65 points on the evaluation chart. - Should have a minimum level of high school graduation. If your company needs further education, you must also have evidence of this. - At least 3 years of practice in corporate administration certificate. - At least 5 years of business-relevant job experience certificate. - Never rejected immigration to Canada - Do not have successful submissions for any candidate service What is the Meaning of Citizen? 
An individual may become a citizen of Canada by birth or by naturalisation. You are a resident of Canada by birth whether you were born somewhere in Canada or its territory. In fact, you are eligible for “derived citizenship” if you were born overseas, but one of your parents was a legal resident of Canada at the time of birth. Then this is the definition of citizenship by naturalisation. It refers to people who were born in a different nation and then immigrated to Canada. They can apply for citizenship of Canada following the acquisition of permanent resident status. If they are licenced by the appropriate authority, they become naturalised citizens. Citizen Rights and Benefits Ask yourself what advantages Canada’s citizenship brings? Citizenship is the highest rank that a person can achieve under the Canada Immigration Act. Becoming a citizen of Canada offers many benefits to a person living in Canada. Many of the most important advantages synonymous with citizenship include: - You’re going to be the holder of a Canada passport. - The right to vote in favour meaning that you can vote in the city, state, and federal elections. - The right to run for an elected service seat. - Eligibility for payments to government workers. - The right to benefit from Canada’s tax laws. - You will not be liable to expulsion until, in the first place, you have committed fraud in gaining citizenship. - The opportunity to make family members visit you in Canada - To be willing to sponsor the family members to receive their Green Cards. Requirements for Canada Citizenship To become a citizen of Canadian, you must - be a permanent resident of Canada - have resided in Canada for 3 out of the last 5 years - have covered your taxes, if you have to - clear a test on your rights, responsibilities and information of Canada - prove your language skills Canada Citizenship Process Apply for an Application Package The application kit contains a guidance sheet and all the forms you need to fill in. Using the directions and the text guide to make sure you don’t forget something. Please make sure that you use the edition of the application form dated October 2017 or after that Pay the Application Fees Your payments depend on whether you are an adult (aged 18 and over) or a minor. Your payments can include the following: - The fee for production - The Right to Residency Tax If you send more than one form at the same time, you will pay all the fees collected. Submit Your Application You must be qualified for Canadian citizenship on the day before you submit the application form. Difference between Citizen and Permanent Resident People who carry Green Cards or are residents of Canada have certain liberties and privileges. It means a lot of fortune to wear these titles if you were born outside the world, and the distinctions between the two are mostly due to the enhanced privileges that people hold over Green Card holders. Citizens of Canada, for example, can vote, while citizens of permanent residency cannot. Citizens cannot be removed, but a legal permanent resident can be expelled for a variety of reasons. Citizens are not constrained by their family members’ immigration quotas, but they are permanent residents. As you can see, being a permanent resident is not as prestigious as becoming a citizen, but they normally need to become a recipient of a Green Card to become a citizen, because it’s nice to know what they’re going to earn! Permanent Resident vs Citizen Both possibilities are intriguing for foreign nationals. 
Although being a citizen typically succeeds in becoming a permanent resident, opting to stay as a recipient of a Green Card Visa is a decision that they might decide is the better alternative. Both phases are long, but both would have entry to the world’s largest economy. There’s no wrong choice! Canada is a land full of opportunities for all people wanting to live there. If you’re coming to research, visit, or work, there are a variety of ways to meet your immigration goals. In certain situations, many foreigners determine that they want to become a permanent resident and, ultimately, a Canadian citizen on the way. Citizen status is the highest individual category in the hierarchy of Canada. Legal permanent residency is commonly considered to be a required first step in the acquisition of Canadian citizenship. A permanent resident can live in Canada permanently, but he/she becomes a lawful citizen of another country. While permanent residents get a certain amount of added rights over visa holders, citizenship offers all the rights that an applicant may have in Canada, including the prized Canada passport and the ability to engage in public elections.
<urn:uuid:bd6aa4c9-ad22-43c9-aaeb-1779b665af9c>
CC-MAIN-2022-33
https://ieltsninja.com/content/tdifference-between-citizen-and-permanent-resident-what-ipermanent-resident/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571246.56/warc/CC-MAIN-20220811073058-20220811103058-00005.warc.gz
en
0.949809
2,387
2.625
3
The effective use of information is one of the prime requirements for any kind of business operation. At some point, the amount of data produced goes beyond simple processing capacities. That's where machine learning algorithms kick in. However, before any of that can happen, the information needs to be explored and made sense of. That, in a nutshell, is what unsupervised machine learning is for. We talked about supervised ML algorithms in the previous article. In this one, we'll focus on unsupervised ML and its real-life applications.

What is unsupervised machine learning?

Unsupervised learning is a type of machine learning algorithm that brings order to a dataset and makes sense of data. Unsupervised machine learning algorithms are used to group unstructured data according to its similarities and the distinct patterns in the dataset. The term "unsupervised" refers to the fact that the algorithm is not guided the way a supervised learning algorithm is.

How does an unsupervised ML algorithm work?

An unsupervised algorithm handles data without prior training – it is a function that does its job with the data at its disposal. In a way, it is left to its own devices to sort things out as it sees fit. The unsupervised algorithm works with unlabeled data. Its purpose is exploration. Where supervised machine learning works under clearly defined rules, unsupervised learning works under conditions in which the results are unknown and need to be defined in the process.

The unsupervised machine learning algorithm is used to:
- Explore the structure of the information and detect distinct patterns;
- Extract valuable insights;
- Implement this into its operation in order to increase the efficiency of the decision-making process.

In other words, it describes information: it goes through the thick of it and identifies what it really is. To make that happen, unsupervised learning applies two major techniques – clustering and dimensionality reduction.

Clustering – Exploration of Data

"Clustering" is the term used to describe the exploration of data, where similar pieces of information are grouped. There are several steps to this process:
- Defining the criteria that form the requirement for each cluster. The criteria are then matched against the processed data, and thus the clusters are formed.
- Breaking down the dataset into specific groups (known as clusters) based on their common features.

Clustering techniques are simple yet effective. They can require some intense work, yet they often give us valuable insight into the data. Clustering has been widely used across industries for years:
- Biology – for genetic and species grouping;
- Medical imaging – for distinguishing between different kinds of tissues;
- Market research – for differentiating groups of customers based on certain attributes;
- Recommender systems – giving you better Amazon purchase suggestions or Netflix movie matches.
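To make the clustering idea above concrete before moving on, here is a minimal sketch using k-means, which is covered in more detail in the examples further below. scikit-learn, the synthetic dataset, and every parameter value are assumptions made for this sketch, not anything prescribed by the article.

```python
# Minimal clustering sketch on synthetic, unlabeled data.
# scikit-learn, the toy dataset, and all parameter values are illustrative choices.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# 300 unlabeled points that happen to fall into three groups.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# Ask k-means for three clusters; it never sees any labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print(labels[:10])               # cluster id assigned to the first 10 points
print(kmeans.cluster_centers_)   # coordinates of the discovered cluster centers
```

The algorithm is only ever shown the coordinates; the group assignments it prints are discovered from the data, not given to it.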
Dimensionality Reduction – Making Data Digestible

In a nutshell, dimensionality reduction is the process of distilling the relevant information from the chaos, or getting rid of the unnecessary information. Raw data is usually laced with a thick layer of data noise, which can be anything – missing values, erroneous data, muddled bits, or something irrelevant to the cause. Because of that, before you start digging for insights, you need to clean the data up first. Dimensionality reduction helps to do just that. From the technical standpoint, dimensionality reduction is the process of decreasing the complexity of data while retaining the relevant parts of its structure to a certain degree.

7 Unsupervised Machine Learning Real-Life Examples

k-means Clustering – Data Mining

k-means clustering is the central algorithm in unsupervised machine learning operations. It is the algorithm that defines the features present in the dataset and groups certain bits with common elements into clusters. As such, k-means clustering is an indispensable tool in the data mining operation. It is also used for:
- Audience segmentation
- Customer persona investigation
- Anomaly detection (for example, to detect bot activity)
- Pattern recognition (grouping images, transcribing audio)
- Inventory management (by conversion activity or by availability)

Hidden Markov Model – Pattern Recognition, Natural Language Processing, Data Analytics

Another example of unsupervised machine learning is the Hidden Markov Model. It is one of the more elaborate ML algorithms – a statistical model that analyzes the features of data and groups them accordingly. The Hidden Markov Model is a variation of the simple Markov chain that includes observations over the state of the data, which adds another perspective and gives the algorithm more points of reference. Hidden Markov Model real-life applications also include:
- Optical character recognition (including handwriting recognition)
- Speech recognition and synthesis (for conversational user interfaces)
- Text classification (with part-of-speech tagging)
- Text translation

Hidden Markov Models are also used in data analytics operations. In that field, HMMs are used for clustering purposes: they find the associations between the objects in the dataset and explore its structure. Usually, HMMs are applied to sound or video sources of information.

DBSCAN Clustering – Customer Service Personalization, Recommender Engines

DBSCAN clustering, AKA Density-Based Spatial Clustering of Applications with Noise, is another approach to clustering. It is commonly used in data wrangling and data mining for the following activities:
- Exploring the structure of the information
- Finding common elements in the data
- Predicting trends coming out of the data

Overall, the DBSCAN operation looks like this:
- The algorithm groups data points that are close to each other.
- Then it sorts the data according to the exposed commonalities.

DBSCAN algorithms are used in the following fields:
- Targeted ad content inventory management
- Customer service personalization
- Recommender engines

Principal Component Analysis (PCA) – Data Analytics Visualization / Fraud Detection

PCA is a dimensionality reduction algorithm for data visualization. It is a sweet and simple algorithm that does its job and doesn't mess around. In the majority of cases, it is the best option. At its core, PCA is a linear feature extraction tool: it linearly maps the data onto a low-dimensional space. PCA combines input features in a way that gathers the most important parts of the data while leaving out the irrelevant bits. As a visualization tool, PCA is useful for showing a bird's eye view of the operation. It can be an excellent tool to:
- Show the dynamics of the website traffic ebbs and flows.
- Break down the segments of the target audience on specific criteria.
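To make the PCA section above concrete, here is a minimal sketch that squeezes a small dataset down to two dimensions for plotting. scikit-learn, the Iris dataset, and the choice of two components are illustrative assumptions, not part of the original article.

```python
# Minimal PCA sketch: reduce a 4-feature dataset to 2 dimensions for plotting.
# scikit-learn and the Iris dataset are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)            # 150 samples, 4 features

# Standardize first; PCA is sensitive to the scale of each feature.
X_scaled = StandardScaler().fit_transform(X)

# Keep the two directions that capture the most variance.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

print(X_2d.shape)                       # (150, 2) -- ready for a scatter plot
print(pca.explained_variance_ratio_)    # share of variance kept by each axis
```

The explained-variance ratio is a quick sanity check of how much of the original information the two plotted axes actually retain.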
t-SNE – Data Analytics Visualization

t-SNE, AKA t-distributed Stochastic Neighbor Embedding, is another go-to algorithm for data visualization. t-SNE uses dimensionality reduction to translate high-dimensional data into a low-dimensional space. In other words, it shows the cream of the crop of the dataset. The whole process looks like this:
- The algorithm computes the probability of similarity of the points in the high-dimensional space.
- Then it does the same thing in the corresponding low-dimensional space.
- After that, the algorithm minimizes the difference between the conditional probabilities in the high-dimensional and low-dimensional spaces, for the optimal representation of data points in the low-dimensional space.

As such, t-SNE is good for visualizing more complex types of data with many moving parts and ever-changing characteristics. For example, t-SNE is good for:
- Genome visualization in genomics applications
- Medical test breakdowns (for example, a blood test or operation stats digest)
- Complex audience segmentation (with highly detailed segments and overlapping elements)

Singular Value Decomposition (SVD) – Recommender Systems

Singular value decomposition is a dimensionality reduction algorithm used for exploratory and interpretive purposes. It is an algorithm that highlights the significant features of the information in the dataset and puts them front and center for further operation. Case in point: making consumer suggestions, such as which kind of shirt and shoes fit best with those ragged vantablack Levi's jeans. In a nutshell, it sharpens the edges and turns the rounds into tightly fitting squares. In a way, SVD is reappropriating relevant elements of information to fit a specific cause. SVD can be used:
- To extract certain types of information from the dataset (for example, take out info on every user located in Tampa, Florida).
- To make suggestions for a particular user in a recommender engine system.
- To curate ad inventory for a specific audience segment during a real-time bidding operation.

Association Rules – Predictive Analytics

Association rules are one of the cornerstone algorithms of unsupervised machine learning. They are a series of techniques aimed at uncovering the relationships between objects. This provides solid ground for making all sorts of predictions and for calculating the probabilities of certain turns of events over others. While association rules can be applied almost everywhere, the best way to describe what exactly they are doing is via eCommerce-related examples. There are three major measures applied in association rule algorithms (a small worked example follows at the end of this article):
- The support measure shows how popular an item is, by the proportion of transactions in which it appears.
- The confidence measure shows the likelihood of item B being purchased after item A is acquired.
- The lift measure also shows the likelihood of item B being purchased after item A is bought; however, it adds to the equation the baseline popularity of item B.

The secret to gaining a competitive advantage in a specific market is the effective use of data. Unsupervised machine learning algorithms help you segment the data to study your target audience's preferences, or to see how a specific virus reacts to a specific antibiotic. The real-life applications abound, and our data scientists, engineers, and architects can help you define your expectations and create custom ML solutions for your business.
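To ground the three measures defined in the association-rules section, here is a small worked example in plain Python. The five shopping baskets and the "bread implies milk" rule are invented purely for illustration.

```python
# Worked example of support, confidence and lift on five hypothetical
# shopping baskets; all items and numbers are made up for illustration.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
    {"bread", "milk"},
]
n = len(transactions)

def support(*items):
    """Fraction of baskets that contain all of the given items."""
    return sum(set(items) <= basket for basket in transactions) / n

# Rule under consideration: bread -> milk
support_ab = support("bread", "milk")        # 3/5 = 0.60
confidence = support_ab / support("bread")   # 0.60 / 0.80 = 0.75
lift = confidence / support("milk")          # 0.75 / 0.80 ~ 0.94

print(support_ab, confidence, round(lift, 2))
```

In this toy data the lift comes out just below 1, meaning that buying bread makes milk slightly less likely than its baseline popularity, which is exactly the extra information lift adds on top of confidence.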
<urn:uuid:b157fc8f-39ba-430e-8746-b171bba3fd0b>
CC-MAIN-2022-33
https://www.crecso.com/guide-to-unsupervised-machine-learning-with-examples/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00205.warc.gz
en
0.900852
2,090
3.546875
4
- Topic Background Health seeking behaviour is becoming more popular in the field of research study at present time. The use of this, somehow, became the window of opportunity to policymakers in delivering a better health system especially in developing countries1. (Shaik, 2015). This is true among the elderly population since a shift in the pattern of morbidity and mortality was observed in recent years. Non-communicable diseases have become the top leading cause of morbidity. Furthermore, the emergence of lifestyle diseases in urban areas also adds up to the list of morbidity causes. This change contributes to the reluctance of elderly in seeking wellness therefore an obstacle to achieving good health. Health seeking behaviour plays a major role in the effect of their health status and not solely attributed to advancing age 2 (Sangmee Ahn Jo, 2007). A review literature 3(Grundy, 2010) indicated contributing factors that affect decisions of elderly on health. An identified hindrance is the preference of alternative or traditional therapies over formal health care which reportedly delay consultations, and in effect, cause delay of treatment accordingly 4-14. Grundy (2010) further emphasized that despite the variation in health seeking behaviour across regions, continuing studies of this aspect in health care is essential to provide a better picture of the disease process outcome. In this study health-seeking behaviour is defined as the following: the use of alternative or traditional therapies, reported delays in consultation and compliance of prescribed medicine among elderly population. Review of Related Literature Even though the growing population in the Philippines was dominated by the young we cannot ignore the needs of the increasing population of the elderly. The elderly were not given as much attention in the government health programs but the incidence of health problems play a part to the economic burden of households15. (Cecilia Santos-Acuin, 2013). In the 2010 national census it was stated that there were about 92.34 million Filipinos and approximately 5.8M (6.8%) of these belongs to the elderly population. Philippine population projected to increase to 142 million by 2045 and a span of 35 years around 50million people will be added16. (PSA:Population Projection Statistics, 2014)World Health Organization defined elderly according to the three main categories namely chronology, change in social role and change in capabilities .To standardized UN agreed a cutoff of 60 years old and above17. (World Health Organization:Health Statistics and information system, 2015). If you need assistance with writing your nursing essay, our professional nursing essay writing service is here to help!Find out more Health-seeking behaviour among elderly patients varies from each country. In the event of non-consultation or delay consultation among elderly it is obvious that the outcome was associated with adverse medical consequences. In one of the study conducted about managing nutrition among the elderly they pointed out the importance of prevention and early intervention because of the difficulty in treating an individual once the disease was already established4. (Damian Flanagan, 2012). This was also supported by cross-sectional study done in Namibia which the outcome resulted in higher treatment delays. 
In the study they determined the cause and categorized delay in the treatment as longer delay based on older age, urban residence, and longer walking distance to the nearest public facility, and doing a chest x-ray while having HIV seropositive and formal education determined the shorter delays5. (Kingsley Ukwaja, 2013). One significant Malaysian study focusing among elderly which utilized CAM for natural and safer use found out that non-consultation would contribute to the increasing undiagnosed cases of chronic diseases6.(Shahid Mitha, 2013). Further studies for different ways of treatment were done to substitute for complementary and alternative medicine especially common amongst Asians with elderly multiple co morbidities6 (Shahid Mitha, 2013).A study on DM conducted in Uganda showed that the unavailability of medicines prompted the people to use CAM for treatment and consulted a faith healer especially to those failures to manage DM causing an increase in DM related complications7. (Katarina Hjelm, 2011). Moreover, the elderly in the Philippines use medicinal plants before consulting to health professionals because of its availability, cheaper price than Western drugs, and usefulness in the treatment of various illnesses and to alleviate milder form of illnesses8. People who had chronic multiple morbidity took their medicines in a daily basis to survive, to work normally and to fulfil social work or obligations in the family. Taking multiple tablets in a day is a burden to them9. (Anne Townsend, 2003). One of the study conducted in Malaysia showed that the presence of a particular symptom will only start the usage of prescribed medicine. However, once these symptoms are resolve, medication would also be terminated giving them reason not to take drugs religiously. This will just worsen the disease process and later will lead to multiple admittance. Other studies also pointed out that noncompliance of medicine are due to the fear of drug dependency, multiple side effects and interaction with other drugs.(10). Thus, being more cautious and elaborative in giving instructions to patients who are taking multiple drug regimens should be practiced by health practitioners11. (Isacson D, 2002). A house-hold survey done among elderly Nigerian revealed that regardless of age and sex, family consultation is their first choice of treatment for their illnesses. This somehow increases the morbidity among the elderly population since family members know little about the safety and appropriate treatment for them12. (Abdulraheem, 2007) A cohort study in South Korea using AGE found out that the increase level of awareness and concern about the health of elderly women increases health-care consultation thus, resulted to increased risk of morbidity.2 (Sangmee AhnJo, 2007). In Myanmar, a study conducted to elderly women concluded that low-level of education and income play great role in skipping treatment and self-care13. (Soe Moe, 2012). Similarly, in Bangladesh, younger adult and elderly age group were compared in terms of health seeking behaviour (self-care/self-treatment). It showed no significant difference in health-seeking pattern. Both age group opted self-care/self-treatment as the first line of prevention due to poverty which would explain the increase in morbidity pattern of both.14(Syed Masad Ahmed, 2005). The growing trend of non-communicable diseases is the common cause of morbidity in today’s modern world. 
This lifestyle related disease can be altered in the future by determining the source of it. Also, health seeking behaviour plays a major role in determining the outcome of health status of an individual. No study on health seeking behaviour and factors that influence the behaviour of our elderly in our locality so a research study would be beneficial in gathering new information. Added to that, our elderly may have different factors towards health seeking behaviour and different morbidity pattern than the others. This study aims to determine what are the demographic and clinical characteristics of elderly patient 60 years old and above of the Davao Regional Hospital FAMED outpatient department that are associated with their health seeking behaviour? Significance of the study Since health care programs to the elderly is not yet well established in Davao Regional Hospital, the outcome of this study will be the basis of the future recommendation of programs for the elderly in the DRH outpatient department. With this study we will be able to deliver better health services to our elderly patients such as: a. Creating a geriatrics club that would exclusively cater the needs of the elderly patient so that they don’t need to line-up with other patients. This would somehow help lessen their delay in consultation at the same time will increase the need to seek consult to a physician as their first choice of health care giver. b. By incorporating a primary giver as a potential treatment partner for the elderly patients that would monitor and check the elderly patients’ compliance to medicine and assure treatment success. C.Enrolling those elderly patient’s ages 70 years and above residing within 5 km of the hospital premises to a family oriented program .This would benefit those elderly patient’s that cannot visit the hospital due to old age, too sick to move and avoiding too much crowd. A home visit from the assign physician will help lessen their delay in consultation, correct the use of alternative medicine and affect their first choice of care giver. Objective of the study This study general objective is to identify the demographic and clinical characteristics of elderly patient 60 years old and above of the Davao Regional Hospital FAMED outpatient department that are associated with their health seeking behaviour. - To determine respondents socio-demographic and clinical profile. - To determine the health seeking behaviour among elderly patients in terms of: - Delay in consultation of chief complaint - Use of alternative and traditional therapies - Compliance of prescribed medicine - First choice of health care provider - To identify the socio-demographic and clinical characteristics of patient that would determine their health seeking behaviour. A. Research Design A cross-sectional study will be conducted among elderly patient of Davao Regional Hospital outpatient department. This will be done at Davao Regional Hospital outpatient department of Family Medicine sometime in September 1, 2015 to October 31, 2015. The triaging system of Davao Regional Hospital outpatient department starts with a priority number to all with special considerations to the elderly population. All elderly on the senior citizen lane will be distributed to the different departments based on their chief complaint. In this study all respondents triage to the Family Medicine department will be invited to participate. 
The respondents of this study include elderly patients ages 60 years and above willing to participate in this study. All those who are critically ill will be excluded from the study. D. Sampling Procedure A convenience sampling will be done. E. Interventions and Comparisons: Not applicable F. Randomization: Not applicable G. Data Gathering Approval of the CERC board will be obtained first prior to the collection of data. Data will be collected using a three-part standard questionnaire which will be administered through a one on one interview by the FAMED residents rotating at the outpatient department. - Part 1 will consist of information about socio-demographic profile like age, sex, highest educational attainment, place of origin and source of funds. - Part 2 will consist of the clinical profile of the respondents which includes presence of concomitant chronic diseases and current chief complaint. Part 3 will be the information about the respondents’ health seeking behaviour and the outcome to be measured. In this study the following health seeking behaviours are explored. First health seeking behaviour is according to delay in consultation which in this study refer as the time from onset of chief complaint to first consult in Davao Regional Hospital FAMED outpatient department. For this study, a delay of 14 days or more from the time of onset of chief complaint to the time that the patient goes to the hospital will be considered as “longer delay” and a delay of 7 days to 14 days from the time of onset of chief complaint to the time that the patient goes to the hospital will be considered as “shorter delay” 18-19(Fact sheet Diarrhoel disease, 2013) (Blanca Ochoa, 2002). The second health seeking behaviour is the use of alternative or traditional therapies which are define in this study as the use of herbal medicines, over the counter drugs, acupuncture, reflexology, hilot and others not part of the conventional medicine before the initial consult referable to the chief complaint. Another health seeking behaviour is the compliance of prescribed medicine which in this study defines as the correct usage of drugs as to dosage, frequency, duration, and timing as prescribed by licensed physician of Davao Regional Hospital in relation to its chief complaint. Last health seeking behaviour is according to the first choice of health care providers. For this study, the first choice of health care providers in relation to its chief complaint. H. Sample size computation Sample size of this study was computed using the software StatCalc from EpiInfo 7. Calculations were based on the following assumptions: 40% of patients aged <70 years (non-exposure) consult 2 weeks after onset of their chief complaint (outcome); 60% of patients aged >70 years (exposure) consult 2 weeks after onset of their chief complaint (outcome); and, there are as many patients aged >70 years as there are patients aged 60-70 years. In a computation of odds ratios of getting the outcome, carried out at a 5% level of significance, a total sample of 194 patients will have 80% power of rejecting null hypothesis (no significant increase or decrease in odds ratio) if the alternative holds. An interim analysis will be done halfway through the recruitment (97%) in order to recompute the ideal sample size. I.Data handling and analysis Data for the study will be encoded in the Microsoft Excel and analyzed using EpiInfo 7. Categorical data will be summarized as frequencies and percentages, and compared. 
Continuous data will be summarized as means and standard deviations, and compared. Odds ratios of having particular health seeking behaviours will be computed. Level of significance will be set at 5%. Prior to participating in the study, the consent of the participant must be obtained. The proponent of the study will secure an approval from the Cluster Ethics Research Committee of Southern Philippines Medical Center prior to doing the research. Informed Consent: Form A written consent is obtained from the potential participants prior to conducting the study. Informed Consent: Signatory The signature of the participant should appear in the consent form. Informed Consent: Witness No witness will be required in order for the informed consent to be binding. Informed Consent: Proxy Consent There will be no proxy consent aside from that of the participant will be allowed. Informed Consent: Process Prior to signing the consent form, the potential participants are informed about the study rationale and objectives. Informed Consent: Timing and Venue The informed consent will be taken prior to the administration of the questionnaire. It will be done in the assigned area of the participant within DRH premises during office or duty hours. Disclosure of Study Objectives, Risks, Benefits and Procedures The participants will be informed of the study objectives, its purpose, its benefits and what is expected of them. They will also be told that there are no risks involved in the study. Remuneration, Reimbursement and Other Benefits No remuneration or reimbursement will be given to the participants. Privacy and Confidentiality The researchers will not disclose the identities of the participants at any time. Only the main proponent of the study has the personal information of the participants. The researchers will not contact the participants after this one time interview. It is the investigator’s responsibility to ensure the confidentiality of any information obtained during the research. Voluntariness and Alternative Options The respondent’s participation in the study will be entirely voluntary. In case the participants wish to withdraw from this study the researchers will respect that decision and there will be no effect in the present and succeeding consultations. Information on Study Results The participants will have access to their data. After the data has been analysed, the overall results will also be made known to the participants. Extent of Use of Study Data At present there are no intended plans to use the data aside from the objectives stated in the protocol. Authorship and Contributorship Jacqueline N. Nuenay, M.D. is the principal investigator and the main author of the study. Dr. Chrysteler Clet is the co-author. Conflicts of Interest The principal investigator and the co-author declare no conflict of interest. The research may be submitted for national and/or international presentation or publication. The main proponent of the study is using personal funds to conduct the study. Duplicate Copy of the Informed Consent Form A duplicate copy of the informed consent form will be provided to the participants of the study. Additional copies can be made on request. Questions and Concerns Regarding the Study The participants will be encouraged by the principal investigator to voice out concerns about their participation in the study. The participants of the study will be provided with the cell phone number of the principal investigator. 
The principal investigator is also available for questions, comments and concerns about the study.
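As a rough cross-check of the sample size stated in section H, the sketch below recomputes the figure using the standard pooled two-proportion formula. The formula choice, the scipy dependency, and the rounding are assumptions made for illustration; the protocol's own figure was produced with StatCalc from EpiInfo 7.

```python
# Rough cross-check of the 194-patient figure in section H, assuming the
# standard pooled two-proportion sample-size formula; scipy is an assumed
# dependency and this is not the authors' actual StatCalc computation.
from math import ceil, sqrt
from scipy.stats import norm

p1, p2 = 0.40, 0.60        # assumed outcome proportions in the two age groups
alpha, power = 0.05, 0.80  # two-sided significance level and desired power

z_a = norm.ppf(1 - alpha / 2)   # ~1.96
z_b = norm.ppf(power)           # ~0.84

p_bar = (p1 + p2) / 2
n_per_group = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
                + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
               / (p1 - p2) ** 2)

n_per_group = ceil(n_per_group)         # about 97 per age group
print(n_per_group, 2 * n_per_group)     # about 97 per group, 194 in total
```

Under these assumptions the result is about 97 patients per age group, or 194 in total, which agrees with the sample size stated in the protocol.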
<urn:uuid:f6345ef6-ea23-4ac9-9916-3d8760301cb2>
CC-MAIN-2022-33
https://nursinganswers.net/essays/elderly-demographics-research-study-3748.php
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00003.warc.gz
en
0.934108
3,664
2.828125
3
Age of Puppy |5 Weeks||PREVENT/VAV® Parvovirus| |6, 8, 10, 12 Weeks||DA2P+PV without leptospirosis| |14, 16, 18 Weeks||DA2LP+PV without leptospirosis| |Adult Yearly Booster||DA2LP+PV with leptospirosis| FOOD AND DRUG ADMINISTRATION CENTER FOR VETERINARY MEDICINE The following consumer information is provided by Dr. Sandra Woods, Division of Drugs for Non-Food Animals, Center for Veterinary Medicine. TRUE. If the dam is immune to the common infectious canine diseases, her puppies will also be protected for six to sixteen weeks after birth, if they consume colostrum. TRUE. The higher the dam's concentration of antibodies to infectious diseases, the more protection she can pass on to her puppies. Revaccination causes the body to produce a large amount of antibodies. FALSE. The antibodies a puppy receives from his mother will tie up the antigens in a vaccine and prevent the puppy from making his own antibodies for weeks after birth. FALSE. In general, the modified live vaccines are more effective and produce a longer period of immunity. The killed vaccines require repeated doses to produce an adequate immune response, but they are safer for use in sick or pregnant dogs. Your veterinarian can advise you on which vaccines and what immunization schedule is best for your dog. FALSE. The effect that the route of administration has on the dog's response to vaccination depends on the vaccine being administered. For example, rabies vaccine is much more effective given by the intramuscular route than by the subcutaneous route. With canine distemper vaccine, both routes appear to be equally effective. TRUE.The antibodies a puppy receives from his mother gradually wear out and are eliminated by the puppy's disease defense system. The more antibodies the puppy receives in the colostrum, the longer this takes. Vaccination schedules usually provide multiple shots at two to four week intervals, thus ensuring that one or more of the shots are given when the puppy will be receptive to the vaccination. TRUE.Vaccination before one month of age may be ineffective because the immune system does not start to mature until after normal adult body temperature is achieved. A modified live vaccine can cause disease by infecting the immature puppy; therefore, killed vaccines should be used in very young animals. TRUE . Older dogs do not produce as many antibodies in response to vaccination as younger dogs. The duration of protection from a single vaccination will therefore be shorter for the older animal. Yearly revaccination prevents antibody levels from dropping below levels that are protective. TRUE.Revaccinate some of your dogs early so that all future vaccinations will be due at the same time. This simplifies record-keeping and ensures that each animal is protected at all times. FALSE. Vaccination of a sick dog will not prevent disease because the protective antibody level will not be reached before full development of the illness. Four days to two weeks is required for the body to make enough antibodies to protect itself from disease. The antibodies must be present prior to exposure to the disease-causing organism. TRUE. Recent research on litters of puppies matched for age, sex, and weight demonstrated significantly higher antibody levels in the puppies not subjected to a cold environment during the time antibodies were forming after vaccination. FALSE. All of the named diseases can be fatal. 
Recovery from any of them usually leaves the dog immune to the same disease, but does not prevent internal organ damage which can predispose the animal to other serious disease states. FALSE. Immunosuppressive drugs such as anticancer drags or high dose corticosteroids can impair the immune response to the point that modified live virus vaccines can infect the dog and cause the disease they are meant to prevent. No disease will develop in response to the use of killed vaccines, but no protective level of antibodies will develop either. FALSE. Severely debilitated dogs may be susceptible to vaccination-induced disease from modified live virus since they lack enough protein to make antibodies. If they must be vaccinated, killed vaccines should be used and the dogs should be revaccinated when their health improves. TRUE. Rabies is a serious viral disease that is fatal in humans and animals and can be transmitted from one to the other. Public health regulations require vaccination of all domestic animals that could transmit rabies to people. The normal rabies vaccination age for dogs is four months, but the vaccine can be used in puppies as young as three months. TRUE. Immune serum contains preformed antibodies just like colostrum. It provides instant protection, but as the antibodies are used up (within a few days to a few weeks), they are not replaced. Immune serum is used only to protect dogs that may be exposed to disease before permanent vaccinations can be completed. NOTE: The best way to protect your dog is to have your veterinarian set up a vaccination program. This program will provide your dog with excellent protection against almost all of the important infectious diseases that he could catch. Proper protection means a longer healthier life for your dog. Prior to 1977-78, parvovirus did not exist in the dog. The virus is a close relative of feline panleukopenia (feline distemper) and in fact, may have mutated from the cat and infected the dog in the late 1970ıs. The virus is extremely hardy and survives for long periods outside its host. The virus will live in the environment up to 6 months and survives winter nicely under a blanket of snow where the temperature is usually around 25-28 degrees F. Extremely cold temperatures prior to snow fall will kill the virus. Sodium hypochlorite (bleach) is the only effective disinfecting agent. The virus is transmitted by oral ingestion of viral contaminated feces. Upon ingestion by the new host it infects local lymph nodes, quickly multiplies and then via the blood moves to the small intestine where signs of the disease begin in approximately 5-6 days. The virus is extremely deleterious to the lining (mucosa) of the small intestine. The surface of the mucosa is stripped away upsetting crucial barriers and interfering with normal balance of digestive enzyme secretion and nutrient absorption. Additionaly, the normal bacterial flora of the small intestine which aid in digestion are now exposed to ulcerated mucosa, providing a direct route into the blood stream. Fluid loss from both vomiting and diarrhea is dramatic and dehydration ensues. The onslaught of bacteria and toxins into the blood will ultimately cause death. Precipitous drops in white blood cell (WBC) counts are common and relate directly to the prognosis and outcome of the infection. Ominous drops in white blood cells are attributed to overwhelming degradation of WBCıs and the direct depressive viral effect on WBC production in the bone marrow. 
The incidence of the disease is highest in young dogs and tends to start some time after the puppy has lost its maternal protection passed on at birth with the first milk (colostrum). Any age can be infected but, most dogs are infected between the ages of 2-6 months when maternal antibody decreases below a protective level in the puppy. Signs of the disease usually are mild to nonexistent. However, a full blown case of parvovirus untreated can easily be fatal. Certain breeds seem to be more sensitive to the disease; possibly related to their immune system. They include rottweilers, Doberman Pinschers, and possibly black Labrador retrievers. Generally, a diagnosis is made on the signs of the disease and falling white blood cell counts. Good rapid diagnostic tests are also available at veterinary clinics. Additionally, the virus can be found in the feces by commercial labs using electron microscopy. Treatment for the disease is primarily supportive although recently immunotherapy has become important. Historically, dogs were supported by aggressive intravenous fluid therapy to combat hydration and antibiotics given to reduce secondary bacterial infection. Food is withheld until vomiting has ceased. Many veterinarians employ antiemetics to lessen the signs and aid in the control of dehydration. Blood transfusions have been employed to increase the level of globulins, red blood cells and serum protein being lost via the bowelıs bloody diarrhea. Most recently, antitoxins and antiparvo serum are showing results. With hospitalization and vigorous support most dogs will survive severe cases of parvo virus. Early detection and aggressive therapy are the key to success. Prevention of parvo virus is by vaccination. Modified live vaccines are the most effective and continue to be safe. Producing and effective level of protection requires frequent vaccination starting at 8 weeks of age and repeating every 3-4 weeks until the puppy is sixteen weeks old. Some investigators have suggested extending the protocol until 20-26 weeks because of the persistence of maternal antibody in the puppy which neutralizes the vaccine. Currently, annual revaccination is recommended. Recently, it has been suggested that repeated annual vaccination may also produce persistent antibody interference to the vaccination. After the initial puppy series and first annual revaccination, boosters in the future may be recommended triennial or less frequent. A change in vaccine protocol, until further research is done, is not recommended. About the Disease Rabies is a disease that can kill people as well as animals. The disease is viral in nature and typically passed through contact with the saliva of an infected animal. People may get the disease by being bitten, licked, or scratched (saliva is often found on claws). Approximately twenty four hours after the virus enters the body it attacks the brain. Once this stage has been reached, it is uncurable, and death eventually results. Time is of the Essence If rabies shots are given within the 24 hour initial exposure period, the disease can be prevented. As soon as possible after an animal bite, scrub the wound with soap and water for fifteen minutes. As a general rule, wash well with soap after any contact with a wild animal. Don't take any chances, report all bites to the proper authority in your area immediately! Often, you may call your county health department. If in doubt, or after hours, call your local hospital emergency room, or even 911. 
A Rabid Animal Rabies may cause the behavior of an animal to change. A friendly pet may want to be left alone; a shy pet may want attention and may seem unusually affectionate. The animal may be restless, have difficulty walking, eating, drinking, drool saliva, make strange noises, bite or scratch an old wound, or seem to be choking. The animal may become excited, confused, or vicious. It may attack people, other animals, or even fixed objects in its state of illness. Warn children against touching, petting, picking up, or even going near any stray dog, cat, or wild animal. Children are often victims of rabies. Beware of any wild animal that seems to be tame, friendly, or is seen in the daytime. The fox, raccoon, and skunk are nocturnal animals which avoid people except in rare cases. When the virus affects their brains they may be seen in areas that are not their usual habitat. They may lose their fear of people and enter buildings, homes, and cars. They may attack anything with no provocation. Wild Animals as Pets There are no rabies vaccines available to immunize skunks, raccoons or other wild animals, be they pets or not. The skunk is the animal most commonly found to be rabid in the US, and is the most common cause of rabies in humans in the US. Skunks are very susceptible to rabies and when infected have large amounts of rabies virus in their saliva. Compounding the problem, pet skunks bite, and may develop rabies as much as six months after being exposed. Any bat that can be approached is sick, and probably has rabies. Never touch a bat. Cover it with a trash can lid or similar until it can be disposed of. Oddly, squirrels are not a big source of rabies in this country. They typically either get away from a rabid attacker totally unscathed, or do not survive the attack. Squirrels are susceptible though, so watch for the warning signs. What to report: Confine the Suspect Animal If possible, confine the animal so that it may be picked up by authorities for 10 day quarantine and observation. This is necessary so the attending physician can treat the victim properly. Vaccinate Your Pet Vaccinations are available for both dogs and cats, and are almost universally required by law. Your pet must be vaccinated if four months or older. They should receive their first two vaccinations one year apart, with boosters following every three years. Contact your vet today to make an appointment. Losing your pet to rabies is tragic, and your pet will be dangerous to you and everyone else. Don't think it can't happen to your pet just because he is in a fenced area, or stays inside. It only takes one mishap and all is lost.
<urn:uuid:e21ca1ac-2653-4c27-b0b4-052bfa265b8b>
CC-MAIN-2022-33
http://www.malteseonly.com/vaccine.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570871.10/warc/CC-MAIN-20220808183040-20220808213040-00001.warc.gz
en
0.943358
2,747
2.734375
3
When an auto manufacturer needs to cut costs it will sometimes look for help from another manufacturer. This process results in a merging between companies in order to benefit one another. Companies may merge to be cost efficient or even to gain entry into another market segment. Either way, manufacturers try to gain instant results by merging. Auto manufacturers compete with each other to give consumers the state of the art safety systems that they demand. Parents are becoming more concerned about their family’s safety with the lifesaving abilities of airbags. Consumers are looking at airbags as a very important option when making a vehicle purchasing decision. Not only must the automobile come equipped with one, but consumers also want a way to disengage the passenger side of the system if needed for children and infants. In the 1960’s, automotive safety began with a man by the name of Ralph Nader. In November of 1965 Nader wrote Unsafe at Any Speed: The Designed-in Dangers of the American Automobile. The target of this book was General Motors’ Corvair Nader claimed the rear suspension was faulty and made it possible to skid violently and roll over (Bollier). After Nader made the public aware of safety concerns, automotive manufacturers started putting items such as power disk brakes as standard equipment on new automobiles. GM started impact testing and designed side beam guards in the late 1960’s (General Motors website). Nader’s continued crusading into the 1970’s made GM realize that it had to be proactive in the safety movement. The result of the movement was designing an airbag in 1973. Volvo had already introduced the airbag in 1972 on its 240/260 series (Volvo History). Companies, realizing that Nader was not going to be disappearing anytime, soon decided to look for suppliers that were safety conscious. Automotive manufacturers began buying safety glass, which reduced injuries from large glass shards in accidents. The introduction of the steel belted tires reduced the amount of tire blowouts, which can lead to rollovers. In the 1980’s the public started to listen and jump onto Nader’s bandwagon. The public was demanding automobiles equipped with life saving safety features. GM introduced the rear lap/shoulder belt in 1986 as standard equipment. Also in 1986 Volvo introduced a detachable seat for children up to the age of four. In the late 1980’s antilock brakes became optional equipment offered by most car companies. In the 1990’s airbags and antilock brakes became standard equipment on most automobiles. Crumple zones became a priority and were implemented to give added safety during an accident. The front and rear of the automobile crumple, accordion style, while keeping the cabin intact. Better day-time visibility was an issue, so daytime running lamps became a standard option for many manufacturers. In the mid-nineties OnStar, a roadside assistance program, was developed for GM to be installed on Cadillacs. In the 1990’s, mergers became more important in the automotive industry, but for different reasons than in the decades before. In the 1940’s and 1950’s auto manufactures bought other automotive makers to eliminate competition and increase in size. This strategy worked in the formative years of the automotive industry because foreign competition was non-existent. People only had American companies to purchase from, so the automaker felt this was the best use of their money. 
In the 1970s foreign competition became stronger, but American companies still felt confident in their sales ability. Big American automotive mergers were limited in the 1970s and 1980s. Mergers now take place to gain entry into a different market segment; companies want to merge with the automotive leaders in their respective market segments. Ford Motor Company wanted to enter the luxury car market, so instead of designing a new car it purchased Jaguar. If Ford had not bought an established luxury car company, it would have had to market a whole new automotive line that was not guaranteed to be successful. Jaguar gave Ford instant credibility without the struggles of an upstart company. The global auto industry has been overcome with merger madness. Leading the charge was the union between Daimler-Benz and Chrysler. This merger changed the landscape of the global auto industry in one stroke. By joining together and combining forces, DaimlerChrysler looks to bring formidable muscle under one roof. To sum up its ambition, it wants to change the way the auto industry operates. However, in some mergers companies like Ford only wish to gain an easy entry into a different market segment. Merging allows for better cost effectiveness as well as easy entry into a specific market segment while instantly gaining a reputation. Cost effectiveness is gained when merging companies share technology such as engines and platforms. In the DaimlerChrysler merger, Chrysler’s renowned low-cost production of trucks, minivans, and sport-utility vehicles will be called upon to cut costs. This saving is expected to be around $3 billion annually, including $1.1 billion in purchasing costs alone (Hughes). Chrysler Corporation has shared engines as well as production facilities with Mitsubishi Motors for several years. In the past Chrysler borrowed the 3.0-liter six-cylinder engine from Mitsubishi for its minivans. Currently, the Dodge Stratus and the Mitsubishi Eclipse share the same engine as well as production lines in an effort to cut costs. According to Jacques Nasser, Ford Motor Company’s president, “The next big efficiency is flexibility between vehicles. You are not retooling, redesigning and remanufacturing every part.” Ford is looking to cut the number of platforms it uses from 32 down to 17 by 2003 (Welch & Howes). Nasser also states that in the coming years the automotive market will demand greater flexibility to build different models off the same foundation. One example of this is the case of Audi and Volkswagen, where one frame platform serves seven automobiles. Cost efficiency is not the only goal of auto manufacturers in today’s mergers. Another main reason is to gain entry into a certain market segment. Starting up a new product line has tremendous costs associated with it, and it is difficult to convince buyers that a company can produce quality cars in an unfamiliar market segment. By merging with another company, a company can build upon an established reputation. Ford says Lincoln stands for “American luxury”; Volvo means “thoughtful” and “understated”; Jaguar suggests “refined power”; and Aston Martin is a “most exclusive club” (Holstein). Ford is looking to these names to catapult it to the rank of the world’s dominant maker of luxury vehicles. In 1998 Ford sold 250,000 luxury vehicles worldwide and Volvo sold about 400,000. 
If Jaguar continues to expand and the numbers from Ford and Volvo are combined, Ford should be able to sell about 1 million luxury cars per year soon after 2000 (Holstein). With all this merging, it is very important for an automaker to avoid the mistake made by General Motors when it lost the distinct identities of Chevy, Buick, and Oldsmobile. To Ford’s credit, it has maintained Jaguar’s original identity and kept the great reputation that follows it. Jim Mateyka, vice president of the auto practice at A.T. Kearney, says, “It’s next to impossible to suddenly convince people after 100 years that Ford is a luxury car” (Holstein). All Ford had to do was purchase the name Jaguar and make the company better, gaining an instant reputation for luxury at the same time. The auto industry started out designed to fit the need for transportation only. Safety was never a concern to the public because automobiles were limited to a top speed of fifteen to twenty miles per hour, and auto manufacturers were more concerned with cheaper production than safety. As speeds increased and time moved on, safety concerns started to rise. These concerns led to the improvement of many safety features, from seatbelts to the new “smart airbags”. Recent General Motors studies show that consumers rate safety as the number two item they look for when purchasing a new vehicle; the first item consumers look for is price. “Safety is now something people want and expect in an automobile,” according to Dee Allsop, a pollster and senior vice president of Wirthlin Worldwide, which assesses marketing issues for a consortium of automakers (ElBoghdady). The one safety feature expected in today’s automobiles is a state-of-the-art airbag system. Airbags have become one of the most heated issues in the automotive industry. As airbags continue to reduce fatalities and prevent injuries, consumer demand for airbags rises. Estimates of airbags’ lifesaving benefits calculated as of October 1, 1999, show that airbags had saved 4,011 drivers and 747 passengers (NHTSA). As a result, car driver fatalities were reduced by 31 percent and car passenger fatalities by 32 percent, while light truck driver fatalities were reduced by 36 percent (NHTSA). More people are becoming concerned about their children’s safety and are looking for and demanding safety features. This is making safety technology departments race to make their companies’ vehicles safer, more quickly (ElBoghdady). The demand for airbags has Ford installing side-curtain airbags as standard equipment, in addition to the driver and passenger airbags, on the Focus and the 2002 Explorer. The passenger airbags on the 2000 Ford Taurus have sensors that adjust for a person’s height and sitting position, and they will deactivate the front and side airbags in a low-speed collision to avoid passenger injury. Even though airbags can save lives, they have been known to tragically take lives, especially those of children. Safety is the top concern for baby boomers with children, who have been exposed to seatbelt and anti-drunk-driving campaigns since grade school (ElBoghdady). These tragic events have consumers asking about extra safety features such as devices that deactivate the passenger-side airbags for infants and children. Mercedes is now equipping some vehicles with a “baby smart system” to deactivate the passenger-seat airbag if a special child carrier is buckled up front (Consumer Reports). 
Consumers may purchase a special car seat from a Mercedes dealer that is equipped with a sensor; once the seat is installed in the front passenger seat, the sensor triggers the “baby smart system”. The future of American safety technology may lie in the hands of General Motors. GM expects to match the European competition of Mercedes’ Tele Aid system by building its own on-board global satellite positioning and hands-free cell phone service into its 24-hour OnStar system. When an airbag is deployed, a sensor uses the global satellite positioning to send a signal to the OnStar center, which dispatches emergency personnel immediately to the site of the vehicle. Ford is expected to start installing a similar system in the year 2001. The future looks bright when Terry Connolly, director of GM’s North American Safety Center, says, “There are opportunities in crash avoidance that are at some point going to become fairly dramatic. At some point in the future a crash will be a rare event” (ElBoghdady). Merging is a highly effective means for automobile manufacturers to become cost-efficient. Savings may come from sharing engines, platforms, and production facilities. However, cost efficiency may not be the main goal of a merger. Many times companies wish to enter another market segment when the opportunity looks promising. For example, a company may wish to enter the luxury market but may not want to spend the money to design and start up production of a new line. Starting a new line may come with disastrous consequences if the new model does not fare well, and consumers may not wish to purchase the automobile if the manufacturer is not well known for luxury. The purchase of an automaker with a well-known luxury reputation gains instant access to the luxury market segment, with instant name recognition and little risk. Safety is a very hot topic, as consumers are demanding safety options when purchasing a new automobile. Airbags are quickly becoming the biggest feature leading the way in safety competition. With a high demand for airbags, there is also a high demand from parents for a way to disengage the system to protect their children and infants. With airbags saving thousands of lives and reducing fatality rates by as much as 36 percent, it is no wonder that consumers are so interested in demanding such a safety feature.
Can you think of a movie or television show that portrays abortion as an option for one of its characters? Do you remember how the plot developed from there? What consequences were in store for that character? According to a recent study on abortion-related storylines in American television shows and films, when abortion is considered by a character, 14 percent of storylines see that character ultimately die, regardless of whether or not she chooses to have an abortion. Sixty percent of the deaths are caused by the procedure itself. After that, the deaths are most often caused by murder or suicide. So what do our dominant narratives say about abortion? It’s a risky procedure wrought with moral and social stigma carrying significant consequences, one potentially being death. But what is the actual risk of death by abortion? Less than 0.05%. Quite literally, your chance of dying from a first-trimester abortion is one in a million. The dominant narrative also tells us that abortion is rare – only a very small percentage of pregnancy stories include abortion as an option when unintended pregnancies occur. In reality, abortion is incredibly common: one in three women will have had an abortion by the time they are 45. These are the dominant narratives about abortion – the stories we see and hear that shape how we perceive the procedure. Meanwhile, where are the real stories? Where are the facts and the lived experiences of actual people having abortions? Stigma and Silence Heavily politicized as reproductive rights are, new legislation is constantly being introduced to create financial and physical barriers to a woman’s legal right to an abortion. From the closing of clinics due to arbitrary laws about door heights and parking lots, to the increase in laws requiring waiting periods and ultrasounds, abortion is becoming increasingly more difficult to access, particularly for low-income women. This anti-abortion political landscape is supported by the assumption that abortions are a dangerous procedure that a small minority of (immoral, selfish) women is getting. The stigmatization (both of the procedure, and the women who choose it) is fed by the media – in television, film, and news stories. But this narrative couldn’t be any farther from the truth. Most people aren’t aware that abortion is actually one of the safest and most common medical procedures. If one-in-three women has had an abortion, where are their stories? There are few public spaces where people feel safe to tell their stories about abortion, and the political, religious, and social stigma surrounding abortion only increases in this silence. People aren’t encouraged to share their abortion stories; quite the opposite, they are encouraged to keep their stories to themselves. Although abortion has been legal for more than 40 years now, the cultural stigma of abortion tells us abortion experiences are to be kept secret and private. This silence is hurtful to each person who chooses to have an abortion or considers one. The stigma isolates us, leaving everyone affected, and many ashamed, confused, and unaware that so many around them share their experiences. Media Archetypes and Myths When the stories of real people aren’t being heard, the archetypes perpetuated by American media and politics carry a lot of weight in shaping our perceptions around the procedure. What are the dominant archetypes of women considering abortion? 
A popular archetype in fictional television is the struggling teenager – the high school student who accidentally becomes pregnant and must bear the weight of shame in regard to her sexual choices. Despite the predominance of this narrative, only 18 percent of abortions are obtained by teenagers. (Though this does not diminish the need to support abortion access for girls under 18!) The dominant narrative also makes assumptions about parenthood: It is commonly thought that abortions are primarily chosen by those who do not want to be parents. The stigma of choosing abortion says it is a selfish choice, and anti-abortion advocates place mothers at the top of the moral hierarchy, allowing these categories to be perceived as mutually exclusive. In reality, the majority of those who have abortions are mothers. According to the Guttmacher Institute, 61 percent of those who choose abortion already have at least one child, and many cite their children as the reason for their abortion. Another archetype, one perpetuated quite often by anti-abortion politicians and activists, is the concept of someone using abortion as birth control. This image of a villianized, careless woman who obtains countless abortions due to overall laziness and hypersexuality is based on many false assumptions, one being that abortions are easy to get. It follows, as many of these archetypes do, an overall cultural demonization of female sexuality. Our media, in its tendency to oversimplify experiences, particularly those of women, tends to either demonize or victimize women and girls, particularly when it comes to female sexuality. These stories show us what happens when a girl or woman engages in sex and isn’t prepared to deal with the consequences. Abortion deeply intersects with how our culture perceives and treats sexuality and motherhood. As we saw in the above study, when a character even considers an abortion on television, her likelihood of death skyrockets. What kinds of dangerous subconscious link does this then create in the minds of viewers? Even when characters don’t die, they struggle. And this is the larger perception of abortion in our culture: that having an abortion is always an emotionally difficult, guilt-ridden, traumatic experience. Abortion is scary, we’re told – the procedure, the decision to have one, and the weight of silence that one who obtains an abortion must carry throughout their lives. This cultural production of what an abortion experience is supposed to be like has no doubt created scary experiences for countless women, as those making this decision often have no stories based in reality to gauge what their experiences will/should be. So What Are Abortions Actually Like? Abortions aren’t actually like anything in particular. For some, abortions aren’t a difficult decision or experience at all. For others, they are extremely difficult, either financially, emotionally, or both. For most, it is a combination of the two – a complex symbiosis of emotions: relief, guilt, joy, shame, independence, stigma, empowerment. Just like people undergoing any other medical procedure, those who have abortions experience a whole range of feelings based on where they are in their lives, their support system, their insurance and medical access, and their own relationship to the procedure. Unlike any other medical procedure, abortion carries a whole mess of moral and ethical stigma, making the experience far more difficult than it needs to be. 
But for the overwhelming majority – 90 percent of those who obtained an abortion, the overarching feeling they have afterward is relief. Intersecting Identities of Oppression Increase Stigma When we talk about abortion, who are we talking about? We use the terms “women” and “girls” because it helps draw attention to the fact that gender oppression is at the root of stigma and inaccessibility to abortions. But there’s a problem when we talk about reproductive rights as if women and girls are the only people who get abortions. Trans men and other trans* people also get abortions. And for trans* folks, the stigma is even higher, as it is for everyone else who experiences multiple layers of oppression. Low-income and working-class women experience a higher stigma around abortion, and have the hardest time accessing it. Women of color, particularly Black women, are often faced with a racially charged stigma as (both White and Black) anti-abortion activists argue that abortion is “Black Genocide,” advertising on billboards in Black neighborhoods that “the most dangerous place for an African-American is in the womb.” We should work to understand, as we work to battle stigma around abortion, how that stigma exponentially increases for people who already experience marginalization and oppression in society. Such a large part of the negative emotions experienced by those obtaining an abortion are caused directly by the cultural stigma surrounding it. How can we work to alleviate that stigma? What Can We Do? 1. Create Safe Spaces We can work to create safe spaces where people feel encouraged to share their own stories – and not just those who have had an abortion but all those who have had an abortion experience: parents, partners, friends. In creating these safe spaces, we can work to make these stories heard, making it safer and easier for others to speak and to feel more supported if they choose to have an abortion. More and more, campuses and communities around the country are organizing abortion speak-outs and similar events meant to encourage and empower folks to tell their stories. Advocates for Youth’s 1 in 3 Campaign helps to share these stories through video and social media, and has toolkits for promoting supportive spaces and abortion access. 2. Promote Realistic Portrayals in Media Because media shape our perceptions of experiences, we must also work to create more realistic and sympathetic portrayals of abortion stories in American media. By sharing honest, accurate, and relatable stories about abortion in television and movies, we can work to counter the dominant narrative that villianizes, victimizes, and oversimplifies abortion experiences. 3. De-Stigmatize the Word Despite major forward steps in the abortion rights movement in numbers, reach, and inclusivity, there seems to be a backward motion in regards to the word ‘abortion.’ Following the anti-abortion movement’s lead in stigmatizing the procedure as well as the word, more and more abortion rights advocacy groups are sticking strictly to the word ‘choice.’ NARAL, for example, whose acronym originally stood for National Abortion Rights Action League, changed its name in 2003 to NARAL Pro-Choice America as a marketing move, a new brand that many organizations, as well as political campaigns and grassroots activists, have followed. Who can say no to “choice,” right? This is a political strategy, and one can that can be effective at achieving certain short-term goals. 
But in the long run, when we shy away from the word abortion, it only serves to further stigmatize it and hurt our cause. If abortion rights advocates aren’t willing to use the word abortion, who will? 4. Share Our Stories Have you had an abortion? Share your story with your friends, if you’re comfortable doing so. When I began talking about my abortion to my friends, I learned that many of them had also had one. Sometimes it just takes one person sharing their story to make others feel comfortable sharing their stories as well. When there is such a limited space to talk about abortion in society, the fictional stories we see and the dominant political discourses we hear and participate in become what we think we know about abortion, and this not only affects how we as a culture perceive it, but how the political landscape reacts and blocks access to reproductive rights. When abortion is portrayed and perceived as unsafe, it is much easier for legislators to gain public support for measures that make abortions more difficult to get. When we share our stories, we not only show how common and complex abortion experiences are, we also serve to highlight the realities of abortion – who gets abortions, what they’re actually like, and the issues we face in gaining full abortion access. When we tell our stories, we put names, faces and real lives to this “mythical procedure,” basing it in reality and working to de-stigmatize abortion as a part of a growing effort to make abortion experiences emotionally healthy, supportive and fully accessible to all. Laura Kacere is a Contributing Writer for Everyday Feminism and is a feminist activist, social justice organizer, clinic escort, and yogi living in Washington, D.C. Laura coordinates the Washington Area Clinic Defense Task Force, teaches yoga with the intent of making it accessible to all, and does outreach for the DC-based sex worker support organization, HIPS. When she isn’t on her mat or at the clinic, she’s usually thinking about zombies, playing violin, eating Lebanese food, and wishing she had a cat. Follow her on Twitter @Feminist_Oryx. Read her articles here.
One of the side-effects of the famed decadence of the late Roman Empire was population decline, which was brought about not just by economic problems but by moral failings as well. Emmet Scott, whose magnum opus Mohammed and Charlemagne Revisited was discussed here at length several months ago, was written a fascinating article on the topic entitled “The Role of Infanticide and Abortion in Pagan Rome’s Decline” for The New English Review. Mr. Scott makes the case that it was the Christianization of the Empire — which brought with it a new moral and ethical code — that arrested the demographic disappearance of the Romans: The crisis of the third century naturally became the subject of intense debate amongst historians. Nowadays it is often regarded as having an economic origin, and scholars talk of inflationary pressures and such like. This may be partly true; but what seems undeniable is that the real problem lay deeper. There is now little dissention on the belief that by the year 100 the population of the Empire had ceased to grow and had begun to contract. The inability to hold the most outlying of the provinces, in Dacia and Germany, is viewed as an infallible sign of a general shrinkage, and archaeology has provided solid evidence: by around 400 the great majority of the empire’s towns and cities occupied less than half the space they did in 150. There are also clear signs of a marked decline in rural populations: excavations in southern Etruria and elsewhere in Italy have shown a fairly dramatic fall in rural populations from the end of the second century through to the fifth. (See eg. Richard Hodges and William Whitehouse, Mohammed, Charlemagne and the Birth of Europe (London, 1982), pp. 40-42) From the same period archaeologists have noted not only the cessation of major new building but also the demolition and recycling of existing monuments. (See eg Peter Wells, Barbarians to Angels (New York, 2008), pp. 109-10) There appears also in the urban settlements of temperate Europe a layer of dark humic soil, sometimes more than a meter thick, containing cultural debris – pottery, bones of butchered animals, glass fragments, etc – mixed into it, covering occupational remains of earlier centuries. “The dark earth,” says one historian, “has been found to contain remains of timber-framed, wattle-and-daub huts, along with sherds of pottery and metal ornaments datable to the late Roman period. These observations demonstrate that people who were living on the site were building their houses in the traditional British [and north European] style rather than in the stone and cement fashion of elite and public Roman architecture.” (Ibid., pp. 111-12) “What are we to make of these two major changes reflected in the archaeology?” the same writer asks. He concludes that, “After a rapid growth in the latter part of the first century… [there was] a stoppage in major public architecture and a reverse of that process, the dismantling of major stone monuments, at the same time that much of the formerly urban area seems to have reverted to a non-urban character.” (Ibid., p. 112) What could have caused such a dramatic and sustained demographic collapse? As might be expected, writers of various hues have not been slow to propose answers. These range from the plausible to the bizarre. The best explanations however have kept an eye both on archaeology and on the written sources, and what has emerged over the past fifty years is a picture of a Roman Empire unfamiliar to most students of classical civilization. 
It is picture of a world immersed in decadence, squalor and brutality. Life in a Roman city, it seems, was anything but comfortable. The image of the good life of centrally-heated villas with mosaic floors and marble pillars – the image generally presented to the public in guidebooks and documentaries – was of course far from typical. Much new research has been done on the living conditions of ordinary Romans in the last fifty years, and what has emerged is the picture of a life of almost unimaginable squalor. The cities, by modern standards, were packed: people lived in appallingly confined spaces. In Rome, the great majority of the poor inhabited multi-story apartment blocks named insulae (“islands”), which were little more that multi-story slums. They were also death-traps. Several Roman writers noted that the most frequently heard sound in the city was the roar of collapsing insulae. They were constructed of the cheapest materials, and their occupiers rarely had any warning of their impending disintegration. As might be imagined, deadly epidemics were commonplace, and the failure of the ancients to understand the pathology and spread of infections led to a plethora of pandemics which wiped out millions. Crime too was of epidemic proportions; and a society which exacted the death penalty for minor offences offered no real deterrent against more serious crimes such as murder. The sheer savagery of Roman attitudes is of course already well known, and we need not labor the obvious fact that people who could watch other human beings being torn to shreds by wild beasts for “entertainment” were of a very low spiritual state. The institution of slavery, by its very existence, had a corrupting effect on attitudes, and slaves, as the property of their owners, could be exploited in whichever way their owners wished. All of them, both male and female, were the sexual playthings of their masters, and must submit to the sexual demands of their owners at any time or place. The sex “industry” was a major employer, as excavations at Pompeii, Herculaneum, and numerous other ancient cities have revealed only too graphically. As might be imagined, a society which harbored such attitudes did not shrink from taking drastic measures to deal with the unwanted issue of casual liaisons, and the practice of infanticide was widespread and commonplace in the classical world. (See eg. William V. Harris, “Child Exposure in the Roman Empire,” The Journal of Roman Studies, Vol. 84 (1994)) Official Roman documents and texts of every kind from as early as the first century, stress again and again the pernicious consequences of Rome’s low and apparently declining birth-rate. Attempts by the Emperor Augustus to reverse the situation were apparently unsuccessful, for a hundred years later Tacitus remarked that in spite of everything “childlessness prevailed,” (Tacitus, Annals of Imperial Rome, iii, 25) whilst towards the beginning of the second century, Pliny the Younger said that he lived “in an age when even one child is thought a burden preventing the rewards of childlessness.” Around the same time Plutarch noted that the poor did not bring up their children for fear that without an appropriate upbringing they would grow up badly, (Plutarch, Moralia, Bk. iv) and by the middle of the second century Hierocles claimed that “most people” seemed to decline to raise their children for a not very lofty reason [but for] love of wealth and the belief that poverty is a terrible evil. 
(Stobaeus, iv, 24, 14) Efforts were made to discourage the practice, but apparently without success: the birth-rate remained stubbornly low and the overall population of the Empire continued to decline. A major and exacerbating factor in the latter was the fact that baby girls seem to have been particularly unwanted. A notorious letter, dating from the first century BC, contains an instruction from a husband to his wife to kill their newborn child, if it turns out to be a girl: I am still in Alexandria. … I beg and plead with you to take care of our little child, and as soon as we receive wages, I will send them to you. In the meantime, if (good fortune to you!) you give birth, if it is a boy, let it live; if it is a girl, expose it. (Lewis Naphtali, ed. “Papyrus Oxyrhynchus 744,” Life in Egypt Under Roman Rule (Oxford University Press, 1985), pp. 54) Although it may be tempting to dismiss this letter as anecdotal, the very casualness of the writer’s attitude shows that what he was saying was not in any way regarded as unusual or immoral. In such circumstances we cannot doubt that girls were especially selected for termination, and since the propagation of populations is fundamentally related to the number of females, such a custom can only have had a devastating effect on the demographics. In addition to infanticide the Romans also practiced very effective forms of birth control. Abortion too was commonplace, and caused the deaths of large numbers of women, as well as infertility in a great many others, and it has become increasingly evident that the city of Rome never, at any stage in her history, had a self-sustaining population, and numbers had continuously to be replenished by new arrivals from the countryside. (For a discussion, see Rodney Stark, The Rise of Christianity: A Sociologist Reconsiders History (Harper Collins, 1996), pp. 95-128) [emphasis added] In his trenchant study of Rome’s social history during these centuries sociologist Rodney Stark wondered how the Empire survived as long as it did, and came to the conclusion that it did so only through the continual importation of barbarians and semi-barbarians. Far then from being a threat, the “barbarians” were seen as a means by which Rome might make good manpower shortages. The problem was that no sooner had the latter settled within the Imperial frontiers than they adopted Roman attitudes and vices. Quite possibly, by the end of the first century, the only groups in the Empire that was increasing by normal demographic process were the Christians and the Jews, and these two were virtually immune from the contagion of Roman attitudes. Taking this into account, several writers over the past few decades have suggested that Rome’s adoption of Christianity in the fourth century may have had, as one of its major goals, the halting of the empire’s population decline. Christians had large families and were noted for their rejection of infanticide. In legalizing Christianity therefore Constantine may have hoped to reverse the population trend. He was also, to some degree, simply recognizing the inevitable. The question for historians was: Did Constantine’s surmise and gamble prove correct? Did the Christianization of the Empire halt the decline? Read the rest at The New English Review.
Dogs are man’s best friend, but some people can’t have them as pets for various reasons. Someone in their family might be allergic to them, or the place they’re staying won’t allow pets. Whatever the reasons, technology has an answer for it: robot dogs. In this article, we’ll see how robot dogs enhance our lives in many aspects, not just as companions but also as service dogs. We’ll be looking at what makes each robot dog unique and what goes into making one of them. Robot dogs resemble dogs, but not all of them are intended to be a companion. Boston Dynamics Spot and BigDog are examples of robots that do work. This article also looks at what’s next for robotic dogs and what goes into your average robot dog. What are Robot Dogs Designed For? Robot dogs are generally designed to be a companion for humans or as working robots. Sony’s AIBO is an example of a robot focused on entertainment and companionship rather than doing a job, while Boston Dynamics’ BigDog and LittleDog were designed for carrying loads and research, respectively. These aren’t the only possibilities with robot dogs, though, with service dogs being an ideal possibility. Service dogs are expensive to train and look after, so having a robot dog where most of that effort is lessened will help the owner of those dogs pretty well. The Technology that Goes Into Robot Dogs Since robot dogs are an area of research as well, all sorts of new technology like LIDAR and computer vision, among others, are tested with these robots. Consumer robot dogs have far more familiar technology like a more rudimentary voice recognition system, seeing what’s in front of it, and somewhat limited quadripedal movement. Fitting all of this into a robot makes it heavy, so don’t expect a robot dog to jump around as a real dog would; that and the locomotion that these robots use aren’t able to do so. Movement-wise, some of the more advanced robots use limbs with multiple joints, which are used to move the robot dog just like any other animal. Some of the consumer-grade models use wheels instead, which simplifies the systems that they use. Some are also capable of barking and emoting like dogs, but you’d mainly see these in robots designed for companionship or entertainment. Ever since Sony revealed the Aibo back in 1999, the public has been fascinated by a robot dog that has influenced popular culture even today. For most people, the first thing that would pop up in their minds when they hear ‘robot dog’ would be the Aibo. Sony hasn’t stopped releasing these cute robots, with updates frequently coming to the consumer robot dog. The newest one retails at around $2,900, so it isn’t exactly cheap but what it can do makes up for the price. It has an image-recognition camera on its nose, a time-of-flight sensor in its mouth, and a ranging and motion sensor on its belly. It’s jam-packed with sensors so that it can see and navigate itself in its environment. Like a real dog, Aibo has a personality to match, which changes how you interact with it, but has none of the high maintenance you expect from an actual dog. The Aibo is good for keeping children occupied by something other than a screen, and if you don’t want a large dog near your children yet, the Aibo is a good alternative. i-Cybie Robot Dog i-Cybie is a commercial robot dog with quite a few features that you would expect from a robot dog. It can respond to your touch, sounds, voice commands or any movements that you make. 
The unique thing about Cybie is that it “grows” the longer you use it, so it starts out as a puppy when you first set it up, and as time goes on and it interacts with you and its environment, it grows. The robot will respond positively or negatively depending on what you say and how the dog is feeling at the moment, but this feature is pretty rudimentary. It can plug itself into its charging dock when it starts to run low on batteries and to save charge; it can go to sleep after you stop using it for more than 30 minutes. Cybie isn’t as powerful or feature-rich as the Aibo, but it’s way less expensive. The WowWee CHiP is another companionship-focused robot dog that can see what’s around it and respond to what you say. It’s able to see and hear what you’re saying and understand some gestures as well. The robot dog also has Low Power Bluetooth that connects with your phone. This robot uses wheels instead of legs and uses an IR camera to navigate your home. Certain areas of the robot have capacitive touch sensors, which lets it know you’re touching it. It can also sense if it is being picked up with a combination of gyroscope and accelerometer. The CHiP is also less expensive than the Aibo, making it pretty accessible as a toy. The first military or service dog we’ll see today is Boston Dynamics’ BigDog. BigDog is a DARPA-funded project intended to help squads of soldiers carry more equipment and ammunition when deployed out on the field. It is a robotic pack mule that extends the capability of a military unit. It has legs that help it move across terrain that no wheeled or tracked vehicles can, and all joints are controlled and checked by the robot thousands of times a second to be as stable as possible on any terrain. It can carry up to 340 pounds up a 35-degree incline and showed a lot of promise. Unfortunately, DARPA abandoned the program because the gas engine it used was too noisy, and it needed that engine to power itself. The LittleDog robot by Boston Dynamics is intended for research and serves as a good base for research in quadripedal locomotion, basically moving on four legs. Its four legs are powered by three electric motors and have a free range of motion controlled by an onboard computer connected to sensors and other devices around the robot. DARPA funded it to encourage research into robots that could walk on legs, but as of now, the project is shelved. Legged Squad Support System (LS3) The Legged Squad Support System or the LS3 was another attempt by Boston Dynamics to aid troops in carrying heavy equipment. First revealed in 2012, the robot received a wide range of inputs from Marine and Army personnel and finally performed in a field exercise simulating a warzone at the end of development. The LS3 has some autonomy, with integrating technology developed as part of DARPA’s robotics autonomy programs. The robot looks similar to BigDog with its legs, but it has the following additional modes that offer the unit the choice to make it follow close or follow but take its own path. Maintenance issues and loudness were factors in the military shelving this project in late 2015, but it showed how helpful a robot could be with a military unit. The Spot is a commercially available four-legged robot from Boston Dynamics that can move around, see its surroundings and map them, or even get you something to drink. The Spot is a working robot dog intended to be used to inspect factories without a human needing to be present and carry small loads for inspection. 
Its application is primarily in industries where information gathering is essential. Spot can capture images of wherever you ask it to and make them available to you through your phone or tablet. The robot can laser scan structures to create a digital twin of your facility and alert you if there are any issues with your plant. Since this is a robot, you can use it to inspect spaces where people can’t go, for example, areas with high nuclear or chemical content. Spot does its job well, but it isn’t really a house dog. ANYmal is another industrial inspection-focused robot, designed by ANYbotics. Steps, gaps and tight spaces are no issue for the ANYmal, which moves autonomously on its jointed legs. Once you guide it through your facility, the robot learns all the routes and finds the shortest possible path to any location it needs to reach. You can also take over if you wish with a built-in teleoperation feature. It is also intelligent enough to plug into its charger when its batteries run low, and if you want more range from your ANYmal, you can add more docking stations to your system. The inspection tools that the robot carries include thermal, visual and audio sensors that are packaged into a platform capable of pan, tilt and zoom. All the data that it receives through its cameras and sensors is analyzed by its built-in AI and reported to its user. It is commercially available, but only for industry. The Vision60, according to Ghost Robotics, is an unmanned ground vehicle (UGV) that is suitable for defense and enterprise applications. It doesn’t need to be walked through an area to move autonomously, and it can map the area around it in real time and navigate it as required. Redundancy is a feature here, since even if the sensors fail, the robot can still find its way around and keep itself upright. Ghost Robotics intends to have the robot swim as well further down the line, but it can already walk, run, and climb almost any terrain. This is thanks to the well-designed reverse-jointed legs that offer the robot a lot of freedom of movement. These robots can’t be bought from your nearest electronics store and are only built to order, because they skew more towards the utility side of things. Chinese giant Xiaomi, as part of its push into different types of tech, came out with the CyberDog in mid-2021. It is intended to be an open-source design, which means that anyone can design and make changes to the hardware and software of the CyberDog. Xiaomi reserved the first few units for select tech and robotics enthusiasts, priced at around $1,540. It has most of the features that you would expect from a toy robot dog, like responding to voice commands, but it seems like Xiaomi is focusing more on the maker aspect of it. Since it is open-source, the number of community features that people could come up with is remarkable. Considering that one of the most popular operating systems globally, Linux, is entirely open-source, the CyberDog could be the robotic version. 
How the Robot Dog has Changed over the Years 
The robot dog started as a novelty in the 90s with Sony’s Aibo, but in the 21st century, we’re seeing a shift to research and more application-oriented robot dogs. The robot dog toy market had become pretty saturated after the Aibo shot the concept into popularity, so robotics companies have realized that the real growth lies in research, industrial or otherwise more helpful robot dogs. 
Robots were always made to lessen the effort humans have to put in, so naturally, robot dogs also seem to go the same way. The Future of Robot Dogs The future for robot dogs mainly lies in being in a support role for humans than outright replacing them. There is also the possibility of robot therapy dogs that are programmed to help a person through therapy. But that requires the robot to understand human emotions, which modern AI is still a bit far away from. We’ll see massive growth in robotics in general if AI and machine learning can keep up the growth it’s been seeing for the past few years. Man’s Best Friend Robot dogs aren’t just man’s best friend like normal dogs; they also help humans with a lot of work. This help might not even be visible, as is the case with Boston Dynamics’ LittleDog, primarily for research. It’s not all positives, though; security risks are always just around the corner waiting to strike, and as we link more parts of our lives to robots, it’s fine to feel insecure. But if companies that make these robots and users like you are proactive enough in dealing with potential security issues and follow protocol, you don’t have much to worry about. You May Also Enjoy Reading: - Best Robot Pets For Adults and Children - 10 Most Advanced Robots and Humanoids That Are Eerily Lifelike - How Are Robots Made? The Answer Is It Depends Frequently Asked Questions What was the first robot dog? The first robot dog was Sony’s Aibo, which came out in the late 90s. It pioneered the robot dog space and is still being sold today. How much does the robot dog cost? Robot dogs can cost from $100 for a basic one up to tens of thousands of dollars for robot dogs that do inspections in industries. It depends on what you use them for, and ones that are mainly used for companionship or entertainment are usually cheaper. When was the robot dog invented? The robot dog was invented in 1999 by Sony when they released the Aibo robot dog. Who invented Spot the robot? Boston Dynamics invented Spot, which factory owners can use to inspect their industrial plants.
Protecting and unprotecting sheets is a common action for an Excel user. There is nothing worse than when somebody who doesn’t know what they’re doing overtypes essential formulas and cell values. It’s even worse when that person happens to be ourselves; all it takes is one accidental keypress, and suddenly the entire worksheet is filled with errors. In this post, we explore using VBA to protect and unprotect sheets. Protection is not foolproof, but it prevents accidental alteration by an unknowing user.
Sheet protection is particularly frustrating because it has to be applied one sheet at a time. If we only need to protect a single sheet, that’s fine. But if we have more than 5 sheets, it is going to take a while. This is why so many people turn to a VBA solution. The VBA code snippets below show how to do most activities related to protecting and unprotecting sheets.
Download the example file
I recommend you download the example file for this post. Then you’ll be able to work along with the examples and see the solution in action, plus the file will be useful for future reference.
Download the file: 0016 VBA Protect and unprotect sheets.zip
Adapting the code for your purposes
Unless stated otherwise, every example below is based on one specific worksheet. Each snippet includes Sheets("Sheet1"); this means the action will be applied to that specific sheet. For example, the following protects Sheet1.
Sheets("Sheet1").Protect
But there are lots of ways to reference sheets for protecting or unprotecting.
Using the active sheet
The active sheet is whichever sheet is currently being used within the Excel window.
ActiveSheet.Protect
Applying a sheet to a variable
If we want to apply protection to a sheet stored as a variable, we could use the following.
Dim ws As Worksheet
Set ws = Sheets("Sheet1")
ws.Protect
Later in the post, we look at code examples to loop through each sheet and apply protection quickly.
Protect and unprotect: basic examples
Let’s begin with some simple examples to protect and unprotect sheets.
Protect a sheet without a password
Sub ProtectSheet()
    'Protect a worksheet
    Sheets("Sheet1").Protect
End Sub
Unprotect a sheet (no password)
Sub UnProtectSheet()
    'Unprotect a worksheet
    Sheets("Sheet1").Unprotect
End Sub
Protecting and unprotecting with a password
Adding a password to give an extra layer of protection is easy enough with VBA. The password in these examples is hardcoded into the macro; this may not be the best for your scenario. It may be better to use a string variable, or to capture the user's password with an InputBox.
Protect sheet with password
Sub ProtectSheetWithPassword()
    'Protect worksheet with a password
    Sheets("Sheet1").Protect Password:="myPassword"
End Sub
Unprotect sheet with a password
Sub UnProtectSheetWithPassword()
    'Unprotect a worksheet with a password
    Sheets("Sheet1").Unprotect Password:="myPassword"
End Sub
Catching errors when an incorrect password is entered
If an incorrect password is provided, an error message displays. The code below catches the error and provides a custom message.
Sub CatchErrorForWrongPassword()
    'Keep going even if an error is found
    On Error Resume Next
    'Apply the wrong password
    Sheets("Sheet1").Unprotect Password:="incorrectPassword"
    'Check if an error has occurred
    If Err.Number <> 0 Then
        MsgBox "The password provided is incorrect"
        Exit Sub
    End If
    'Reset to show normal error messages
    On Error GoTo 0
End Sub
If you forget a password, don’t worry, the protection is easy to remove.
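As a rough illustration of the InputBox approach mentioned above (this sketch is not from the original post; the sub name UnprotectWithPrompt and the message text are my own), the password can be collected at run time instead of being hardcoded:
Sub UnprotectWithPrompt()
    Dim pwd As String
    'Ask the user for the password at run time, so it is not stored in the macro
    pwd = InputBox("Enter the password for Sheet1")
    'InputBox returns an empty string if the user clicks Cancel or leaves the box blank
    If pwd = "" Then Exit Sub
    'Keep going even if the password is wrong, then check for an error
    On Error Resume Next
    Sheets("Sheet1").Unprotect Password:=pwd
    If Err.Number <> 0 Then
        MsgBox "That password did not work."
    Else
        MsgBox "Sheet1 is now unprotected."
    End If
    'Reset to show normal error messages
    On Error GoTo 0
End Sub
One caveat with this pattern: Cancel and an empty entry both return an empty string, so the macro simply exits in either case; if you need to tell them apart, a small UserForm would be required instead.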
Applying protection to different parts of the worksheet
VBA provides the ability to protect 3 aspects of the worksheet:
- Contents – what you see on the grid
- Objects – the shapes and charts which are on the face of the grid
- Scenarios – the scenarios contained in the What If Analysis section of the Ribbon
By default, the standard protect feature will apply all three types of protection at the same time.
Sub ProtectSheetContents()
    'Apply worksheet contents protection only
    Sheets("Sheet1").Protect Password:="myPassword", _
        DrawingObjects:=False, _
        Contents:=True, _
        Scenarios:=False
End Sub
Sub ProtectSheetObjects()
    'Apply worksheet objects protection only
    Sheets("Sheet1").Protect Password:="myPassword", _
        DrawingObjects:=True, _
        Contents:=False, _
        Scenarios:=False
End Sub
Sub ProtectSheetScenarios()
    'Apply worksheet scenario protection only
    Sheets("Sheet1").Protect Password:="myPassword", _
        DrawingObjects:=False, _
        Contents:=False, _
        Scenarios:=True
End Sub
Protect contents, objects and scenarios
Sub ProtectSheetAll()
    'Apply worksheet protection to contents, objects and scenarios
    Sheets("Sheet1").Protect Password:="myPassword", _
        DrawingObjects:=True, _
        Contents:=True, _
        Scenarios:=True
End Sub
Applying protection to multiple sheets
As we have seen, protection is applied one sheet at a time. Therefore, looping is an excellent way to apply settings to a lot of sheets quickly. The examples in this section don’t just apply to Sheet1, as the previous examples have, but include all worksheets or all selected worksheets.
Protect all worksheets in the active workbook
Sub ProtectAllWorksheets()
    'Create a variable to hold worksheets
    Dim ws As Worksheet
    'Loop through each worksheet in the active workbook
    For Each ws In ActiveWorkbook.Worksheets
        'Protect each worksheet
        ws.Protect Password:="myPassword"
    Next ws
End Sub
Protect the selected sheets in the active workbook
Sub ProtectSelectedWorksheets()
    Dim ws As Worksheet
    Dim sheetArray As Variant
    'Capture the selected sheets
    Set sheetArray = ActiveWindow.SelectedSheets
    'Loop through each selected worksheet
    For Each ws In sheetArray
        On Error Resume Next
        'Select the worksheet
        ws.Select
        'Protect each worksheet
        ws.Protect Password:="myPassword"
        On Error GoTo 0
    Next ws
    sheetArray.Select
End Sub
Unprotect all sheets in the active workbook
Sub UnprotectAllWorksheets()
    'Create a variable to hold worksheets
    Dim ws As Worksheet
    'Loop through each worksheet in the active workbook
    For Each ws In ActiveWorkbook.Worksheets
        'Unprotect each worksheet
        ws.Unprotect Password:="myPassword"
    Next ws
End Sub
Checking if a worksheet is protected
The code in this section checks whether each type of protection has been applied.
Check if sheet contents are protected
Sub CheckIfSheetContentsProtected()
    'Check if worksheet contents are protected
    If Sheets("Sheet1").ProtectContents Then MsgBox "Protected Contents"
End Sub
Check if sheet objects are protected
Sub CheckIfSheetObjectsProtected()
    'Check if worksheet objects are protected
    If Sheets("Sheet1").ProtectDrawingObjects Then MsgBox "Protected Objects"
End Sub
Check if sheet scenarios are protected
Sub CheckIfSheetScenariosProtected()
    'Check if worksheet scenarios are protected
    If Sheets("Sheet1").ProtectScenarios Then MsgBox "Protected Scenarios"
End Sub
Changing the locked or unlocked status of cells, objects and scenarios
When a sheet is protected, unlocked items can still be edited. The following codes demonstrate how to lock and unlock ranges, cells, charts, shapes and scenarios. 
When the sheet is unprotected, the locked setting has no impact; it only takes effect once protection is applied. All the examples in this section set each object/item to lock when protected; to unlock, change the value to False.
Lock a cell
Sub LockACell()
    'Change the option to lock or unlock a cell
    Sheets("Sheet1").Range("A1").Locked = True
End Sub
Lock all cells
Sub LockAllCells()
    'Change the option to lock or unlock all cells
    Sheets("Sheet1").Cells.Locked = True
End Sub
Lock a chart
Sub LockAChart()
    'Change the option to lock or unlock a chart
    Sheets("Sheet1").ChartObjects("Chart 1").Locked = True
End Sub
Lock a shape
Sub LockAShape()
    'Change the option to lock or unlock a shape
    Sheets("Sheet1").Shapes("Rectangle 1").Locked = True
End Sub
Lock a scenario
Sub LockAScenario()
    'Change the option to lock or unlock a scenario
    Sheets("Sheet1").Scenarios("scenarioName").Locked = True
End Sub
Allowing actions to be performed even when protected
Even when protected, we can allow specific operations, such as inserting rows, formatting cells, sorting, etc. These are the same options as found when manually protecting the sheet.
Allow sheet actions when protected
Sub AllowSheetActionsWhenProtected()
    'Allow certain actions even if the worksheet is protected
    Sheets("Sheet1").Protect Password:="myPassword", _
        DrawingObjects:=False, _
        Contents:=True, _
        Scenarios:=False, _
        AllowFormattingCells:=True, _
        AllowFormattingColumns:=True, _
        AllowFormattingRows:=True, _
        AllowInsertingColumns:=False, _
        AllowInsertingRows:=False, _
        AllowInsertingHyperlinks:=False, _
        AllowDeletingColumns:=True, _
        AllowDeletingRows:=True, _
        AllowSorting:=False, _
        AllowFiltering:=False, _
        AllowUsingPivotTables:=False
End Sub
Allow selection of any cells
Sub AllowSelectionAnyCells()
    'Allow selection of locked or unlocked cells
    Sheets("Sheet1").EnableSelection = xlNoRestrictions
End Sub
Allow selection of unlocked cells
Sub AllowSelectionUnlockedCells()
    'Allow selection of unlocked cells only
    Sheets("Sheet1").EnableSelection = xlUnlockedCells
End Sub
Don't allow selection of any cells
Sub NoSelectionAllowed()
    'Do not allow selection of any cells
    Sheets("Sheet1").EnableSelection = xlNoSelection
End Sub
Allowing VBA code to make changes, even when protected
Even when protected, we still want our macros to make changes to the sheet. The following VBA code changes the setting to allow macros to make changes to a protected sheet.
Sub AllowVBAChangesOnProtectedSheet()
    'Enable changes to the worksheet by VBA code, even if protected
    Sheets("Sheet1").Protect Password:="myPassword", _
        UserInterfaceOnly:=True
End Sub
Allowing the use of the Group and Ungroup feature
To enable users to make use of the Group and Ungroup feature on protected sheets, we need to allow changes to the user interface and enable outlining.
Sub AllowGroupingAndUngroupOnProtectedSheet()
    'Allow the user to group and ungroup whilst protected
    Sheets("Sheet1").Protect Password:="myPassword", _
        UserInterfaceOnly:=True
    Sheets("Sheet1").EnableOutlining = True
End Sub
Wow! That was a lot of code examples; hopefully, this covers everything you would ever need for using VBA to protect and unprotect sheets.
Get our FREE VBA eBook of the 30 most useful Excel VBA macros. Automate Excel so that you can save time and stop doing the jobs a trained monkey could do.
If you’ve found this post useful, or if you have a better approach, then please leave a comment below.
Do you need help adapting this to your needs?
I’m guessing the examples in this post didn’t exactly meet your situation. 
We all use Excel differently, so it’s impossible to write a post that will meet everybody’s needs. By taking the time to understand the techniques and principles in this post (and elsewhere on this site) you should be able to adapt it to your needs. But, if you’re still struggling, you should:
- Read other blogs, or watch YouTube videos on the same topic. You will benefit much more by discovering your own solutions.
- Ask the ‘Excel Ninja’ in your office. It’s amazing what things other people know.
- Ask a question in a forum like Mr Excel, or the Microsoft Answers Community. Remember, the people on these forums are generally giving their time for free. So take care to craft your question, make sure it’s clear and concise. List all the things you’ve tried, and provide screenshots, code segments and example workbooks.
- Use Excel Rescue, who are my consultancy partner. They help by providing solutions to smaller Excel problems.
Don’t go yet, there is plenty more to learn on Excel Off The Grid.
<urn:uuid:ea1cf501-4a6e-4a58-b00d-785c60c5a452>
CC-MAIN-2022-33
https://exceloffthegrid.com/vba-code-worksheet-protection/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571959.66/warc/CC-MAIN-20220813142020-20220813172020-00604.warc.gz
en
0.734016
2,879
2.921875
3
SCDSB supports positive mental health

TABLE OF CONTENTS
A) Ontario Strategy
B) Board Strategy
C) Vision, Mission, Values
D) What is Mental Health?
E) Resilience
F) Mental Health at SCDSB
G) Community Resources
H) Internet Resources

In 2011, the Ontario government began to focus on a plan to improve the mental health of all Ontarians. Open Minds, Healthy Minds is the ten-year government strategy regarding mental health & well-being in Ontario. The first three years have been focused on children and youth. The established priorities are: to provide fast access to high-quality service; early identification and intervention with children and youth who are experiencing mental health concerns; and to close critical service gaps, especially in rural communities.

The SCDSB Mental Health and Addictions strategy is aligned with Open Minds, Healthy Minds and with The Simcoe Path. The Board recognizes the importance of mental health as it is linked to overall well-being, achievement and positive outcomes for youth. We are committed to continually improving the quality of mental health support available to our students. The SCDSB has a Mental Health Leader who is responsible for the development and implementation of the Mental Health and Addictions Strategy. The Mental Health Leader works closely with the Mental Health Steering Committee to set key priorities regarding the implementation of universal promotion and prevention programs, evidence-based resources as well as targeted interventions that aim to increase mental health awareness, literacy and expertise. The Mental Health Leader also works closely with community agencies to coordinate the development and delivery of services in our schools.

The SCDSB nurtures positive mental health and holistic growth, which enables everyone to flourish. The Simcoe County District School Board will promote positive mental health in our schools by increasing awareness, literacy and expertise within our system. We will develop increased capacity to recognize and intervene effectively for students who are struggling with mental health concerns such that they become connected with the appropriate supports in a timely manner. We will work collaboratively with community partners to implement a range of evidence-based supports and interventions to meet the needs of our students. The Simcoe County District School Board is committed to creating opportunities for well-being and achievement. We value collaboration and shared responsibility with students, parents and community partners to create a culture of inclusivity, empathy and respect such that everyone can be resilient and thrive.

[Figure: the Mental Health – Mental Illness Continuum, ranging from health, through mild to moderate distress and mild or temporary impairment, to mental illness.]

Mental health is a state of well-being that helps individuals reach their potential. When we have positive mental health, we can cope with the normal stresses of life, work productively and make contributions to our communities.

What are mental health problems?

Mental health problems cause difficulties with social, emotional, physical and cognitive functioning, but are not so severe that they meet the criteria for mental illness. As many as one in five children or youth struggle with mental health problems that create difficulty with their ability to function.
The Public Health Agency of Canada defines mental illness as changes in a person’s thinking, mood or behavior that are also associated with significant distress or impaired functioning. Mental illness can result from a complex interaction of genetic, biological, personality and environmental factors. Mental illnesses affect people of all ages, education levels, income levels and cultures.

Mental health is fluid. Most people fluctuate on the continuum. How we are able to cope with stressors and challenges depends on our state of wellbeing at any one point in time.

[Continuum figure labels: Mental Health Problems; Mental Illness; emotional problems; mild to moderate distress; mild or temporary impairment; moderate to disabling or chronic impairment.]

Resilience refers to an ability to cope with problems and set-backs that happen as a natural part of life. Resilient people are able to utilize their skills and strengths to cope and recover from problems and challenges. There are many ways we can promote positive mental health and resilience. People can learn how to become more resilient as they grow and develop. There are certain characteristics and circumstances that help to improve a person’s ability to be resilient:
- Healthy sleep habits
- Healthy eating habits
- Exercise
- Connection with the community, a sense of belonging
- Participation in music, sports, art or activities you enjoy
- Friends and family that are positive, supportive and loving
- Positive school experiences and connections
- Cultural or spiritual connections
- Skills related to communicating and problem-solving
- Social and emotional skills

Students with mental health issues that affect learning are eligible for psychological assessment services. Such services support the diagnosis of mental health disorders as well as identifying the students’ strengths and needs. Suggestions for programming initiatives that enable educators to support students and provide instruction for social-emotional learning skills are emphasized in these assessment reports. Psychological consultation services are also available to schools. Consultations are time-limited and focus on characterizing the student’s needs and the development of a plan to meet such needs. In the case of both assessments and consultations, psychologists also make recommendations to support accessing appropriate community and medical support services. SCDSB psychological services work closely with other sources of mental health supports (i.e. social workers, special education consultants, child and youth workers, school teachers and administrators, superintendents) as a part of a team to ensure support is coordinated and complementary.

Mental Health Supports in the Board

SCDSB recognizes that a person’s mental health and well-being are connected with their ability to learn, maintain social relationships and reach their full potential. We provide a variety of mental health resources in terms of staff, services and information to support our students in achieving their goals and reaching their full potential. SCDSB recognizes that a student’s ability to learn can be impacted by a number of emotional, behavioural and/or mental health concerns. Social workers can provide assessment and short-term support to students, their families, schools and communities related to positive mental health and well-being. They provide individual and group therapeutic interventions related to helping students reach their full potential.
Social workers can also provide crisis intervention, advocacy, consultation for school staff or families and professional development within the board to help increase awareness and literacy.

Child & Youth Workers

Social-emotional learning is linked to improving positive mental health outcomes for students. Child and Youth Workers facilitate the strengthening of social-emotional skills through individual and group support and structured interactions. They can also provide information and support to the school team regarding universal social-emotional learning programs and strategies that can help to create increased awareness, responsiveness and support for all students.

First Nation, Metis & Inuit Student Advisors

First Nation, Metis & Inuit (FNMI) Student Advisors work closely with FNMI students and families to identify and support the social, emotional, intellectual, cultural and physical well-being of students. Having FNMI student support staff in schools has been identified as a best practice to increase achievement and well-being of FNMI students. FNMI Student Advisors provide support to individuals and/or groups and also assist school staff in coordination of classroom activities or with community agencies such as the Friendship Centers and Enaahtig Healing Lodge.

Mental Health and Addiction Nurses

Central Community Care Access (CCAC) nurses are available to provide short-term mental health and addiction supports and services as a part of the SCDSB school inter-disciplinary team. In addition to providing care to students with mild to complex mental health and/or substance use issues, the nurses provide support in transitioning students back to school following a hospitalization. These specially trained nurses ensure that students are connected to supports within the school as well as referred to available community supports. The Mental Health and Addiction Nurses team works with the SCDSB staff to enable students to develop and maintain healthy life skills and resiliency as they move through their school years into adulthood.

Student Success School Teams

Student Success teams consist of Administrators, Guidance Counsellors, Elementary and Secondary Student Success Teachers, Special Education Resource Teachers and regular classroom teachers who support student achievement and personal wellness within the academic, personal and social realms of the school community. Students can receive support regarding decision-making, problem solving, conflict resolution, stress and time management and relationship awareness, along with individual education/career life pathway planning.

Attendance Counsellors work with children of compulsory school age who are habitually absent or refusing to attend school. They provide professional intervention within the schools they service and within the homes of the families they assist. Since school attendance difficulties are often a manifestation of serious issues including mental health and addiction, they provide individual and family support around these issues and share community resources. They assist in building relationships with school personnel, community agencies and most importantly, with their referred students and their families. As their advocates they endeavour to better understand the issues affecting school attendance. They also work hard at finding ‘lost students’ and re-introducing them to the opportunities an education system can provide.

COMPASS is a network of Community School Teams across Simcoe County.
COMPASS Community School Teams link schools (elementary and secondary) with local providers of community supports and services including: child and youth mental health, parenting supports, child protection, health, youth justice, community recreation and more. COMPASS provides an opportunity for schools and community partners to draw on the expertise and resources that exist in local communities and to collaboratively identify and address issues affecting children, youth, families, schools and communities. COMPASS serves elementary and secondary schools looking for community supports to enhance student learning, support healthy child/youth development and reduce social, emotional or behavioural challenges within the school. There are eight COMPASS Community School Teams across Simcoe County: Angus, Barrie, Georgian West, Innisfil, North Simcoe, Orillia and South West Simcoe, and a Francophone COMPASS serving Simcoe County.

COMPASS - Community Partners with Schools

Mental Health Community Supports
- Kids Help Phone 1-800-668-6868
- Mobile Crisis Line – Simcoe County 1-888-893-8333
- Mobile Crisis Line – South Simcoe 1-905-310-COPE
- Canadian Mental Health & Addictions 1-800-461-4319
- Barrie Area Native Advisory Circle 1-705-734-1818
- Barrie Native Friendship Centre 1-705-721-7689

You can also access the 211 directory by phone (dial 2-1-1) or online at www.211ontario.ca to request information on community resources.

Web-Based Mental Health Support and Information
- Child Youth and Family Coalition of Simcoe County
- ABC’s of Mental Health
- Public Health Agency of Canada
- Mind your Mind
- Speak Up
- Mental Health Works
- Mind Matters
- Family Mental Health Initiative
<urn:uuid:e07a77c1-64e9-4b6e-bbf2-039196db29d8>
CC-MAIN-2022-33
https://1library.net/document/zw5d2je0-mental-matters-scdsb-supports-positive-mental-health.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571483.70/warc/CC-MAIN-20220811164257-20220811194257-00605.warc.gz
en
0.930367
2,509
3.34375
3
Environment ministry draft law aims to tighten Germany's climate targets Germany's environment ministry (BMU) is calling for a tightening of the country's 2050 climate targets in a draft of the highly anticipated Climate Action Law, seen by Clean Energy Wire. The proposal says Germany should cut greenhouse gas emissions by “at least 95 percent” by mid-century and calls for greenhouse gas neutrality by 2050. This means the equivalent of any remaining emissions would need to be absorbed and either stored or used. [Read the factsheet Germany's Climate Action Law begins to take shape for details on the draft] The proposal for the 2050 target is likely to generate heated debate within the government coalition and in parliament. Currently, the country aims to cut greenhouse gas emissions by "80 to 95 percent" by 2050. But Chancellor Merkel said in 2017 Germany must decide on an exact target in the current legislative period. The difference between an 80-percent scenario and a 95-percent scenario is significant. In a 2018 study, the Federation of German Industries (BDI) has said fulfilling the upper end is only realistic if other industrialised countries make comparable efforts. The draft, which Social Democratic (SPD) environment minister Svenja Schulze sent to the chancellery for early coordination, is in for a rough ride in the legislative process over the coming months, and the outcome is uncertain. Should the chancellery approve, it will be debated among relevant ministries, then sent to parliament. It has already drawn heavy criticism from parts of Merkel’s conservative CDU/CSU alliance and the business-friendly Free Democrats over the past days. However, it was welcomed by parliamentarians from governing coalition partner SPD, as well as the Greens and the Left Party. German media has questioned whether Schulze is strong enough to follow through on the high demands of her first draft. She has in the past openly called for greater ambition, but then had to bow to guidelines from the chancellery. In December 2018, she abstained in a vote on CO₂ limits for trucks in the European Council, despite her wish to agree with a proposal on the floor. “Under these circumstances it seems to me highly unlikely that the grand coalition proceeds with the new climate law,” political analyst Arne Jungjohann told Clean Energy Wire. Schulze’s decision to submit the draft without a general agreement with the coalition partner is “an indication for a tactical move. The SPD wants to demonstrate leadership and raise its profile on the issue,” he said. Germany now faces many months of heated political debate over the planned new Climate Action Law. After months of coalition talks, Merkel's government promised in its 2018 treaty to ensure the country reached its 2030 greenhouse gas reduction targets by enshrining them in law by the end of 2019. Both coalition partners are under pressure after a series of losses in regional elections, and ahead of the European Parliament election in May and in three Eastern states in autumn. At the same time, the government needs to implement the recent proposal by its coal exit commission to put an end to mining the fossil fuel in Germany. 
Still, the law could receive some tailwind against the backdrop of the “Fridays For Future” protests by tens of thousands of teenagers across Europe and in Germany, who skip school on Fridays to demand action on climate change – and by a German public that remains strongly in favour of the country’s transition to a low-carbon, nuclear-free economy. Ministries to be financially responsible for reaching targets Discussions of the Climate Action Law do not always refer to a single law, but rather a package of measures and legislation. The package is expected to consist of two main elements: a main framework climate action law, and a programme of measures to help ministries reach the targets for their respective sectors. The text seen by Clean Energy Wire is the first draft of the framework law. Core elements of the draft include an independent seven-person expert body for climate issues, set up by the federal parliament to examine the effectiveness of climate action measures, and publish annual reports with recommendations. The text also stipulates that state institutions must take the law into account for all their decisions and planning, and that the federal administration aims to become climate-neutral by 2030. Additionally, federal institutions must explain how their capital investments take into account climate targets as well as resulting risks from climate change. However, the major goal of the 65-page draft (including annexes) from the environment ministry is to enshrine into law Germany’s greenhouse gas reduction targets for the years 2020, 2030, 2040 and 2050. It also divvies up these targets between economic sectors (energy, buildings, transport, industry, agriculture, waste and other), as established in Germany’s Climate Action Plan 2050. The sector targets will be broken up into annual emissions budgets, and the ministry most responsible for the economic sector is also responsible for ensuring they are reached. If targets are missed, Germany might have to buy emissions allocations from European neighbours, as stipulated in the EU effort-sharing regulation. The costs should be covered by the budgets of the responsible ministries, writes the environment ministry in the draft – a provision that has come under fire by conservative politicians. Conservatives call the draft an “empty shell”, Greens and Left Party welcome text Some politicians from Merkel’s conservative alliance have come out against a framework climate action law over the past days. “We don’t want a framework law which – in addition to bureaucracy, external supervision, ministry responsibility and planned economy – offers only things that do nothing for climate action,” Georg Nüßlein (CSU), deputy head of the CDU/CSU parliamentary group in the Bundestag, told German radio Deutschlandfunk. The draft is an “empty shell”, said Anja Weisgerber (also CSU), climate action representative of the conservative CDU/CSU parliamentary group in a press statement. She criticised that the text “does not save a gram of CO₂, because it doesn’t contain concrete measures” for emissions reduction. However, this is not the goal, writes the environment ministry in an explanation that accompanies the draft. It says that the framework climate action law is meant to enshrine the “goals and principles of climate policy. 
[…] The law does not immediately save CO₂, but instead puts climate policy as a whole on a solid foundation and makes it binding.” Weisgerber says the government should concentrate on introducing cost-efficient measures that lead to the highest possible CO₂ reduction for every euro invested. “We must not get entangled in disputes about sector targets, budgets and sanctions.”

SPD party head Andrea Nahles supported Schulze’s action. “It’s good that Svenja Schulze is abiding by the coalition agreement,” she said. “It would be even better if the [conservative CDU/CSU] alliance did so, too.”

The draft from the environment ministry is “worse than expected”, said Lukas Köhler from the market-liberal, business-friendly Free Democratic Party (FDP). “Instead of strengthening climate policy through expanding emissions trading, Svenja Schulze wants to carry the planned climate economy too far through national CO₂ budgets for the EU ETS sectors industry and energy,” he wrote on Twitter.

Other opposition politicians welcomed the text. “It’s a strong legislative draft,” said Lorenz Gösta Beutin, climate and energy policy spokesperson of the Left parliamentary group, in a message on Twitter. His counterpart from the Green group, Lisa Badum, promised “our full support, if Mrs Schulze means business.” Now, Angela Merkel – who has often been dubbed “climate chancellor” – had to “make the law her own”, Badum told Süddeutsche Zeitung.

“A strong legislative draft! The @spdde must not let itself be led around by the nose any longer by #Merkel, #Altmaier and co. As early as the 2013 #Groko negotiations, the comrades sacrificed their election promise of a #Klimaschutzgesetz. #linke #kohlekommission #hambibleibt https://t.co/mboNTp9MN9” — Lorenz Gösta Beutin (@lgbeutin) February 21, 2019

Merkel had earned herself the nickname “Climate Chancellor” during her first term in office between 2005 and 2009, when she put the topic firmly on the international agenda. However, many observers have questioned her commitment in recent years after her government failed to make significant progress on the reduction of greenhouse gas emissions, and postponed the country’s 2020 reduction target and dampened ambitions on a European level. Merkel has already stepped down as CDU party head and will soon leave the stage of world politics when her chancellorship is over. Her climate policy legacy is uncertain.

The draft stops short of proposing answers to how the climate can be saved “at reasonable costs from a political, economic and socially just point of view,” said industry association BDI in a statement. The organisation wants the government to propose how the “billion-euro investments for the future” can be guaranteed without endangering Germany’s role as a business location. “The draft law does not provide sufficient answers to these questions.”

The German Renewable Energy Federation (BEE) said the law would provide dependability for everyone involved and a positive dynamic for all sectors. “At least 95 percent less CO₂ by 2050 - a strong signal for a shift towards clean, renewable energy sources and greater efficiency,” said BEE president Simone Peter.
<urn:uuid:048d2143-0627-4867-a967-436290c17b4e>
CC-MAIN-2022-33
https://www.cleanenergywire.org/news/env-min-law-draft-calls-tightening-germanys-climate-targets
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00003.warc.gz
en
0.93712
2,095
2.6875
3
The current coronavirus disease 2019 (COVID-19) outbreak vividly demonstrates the burden that respiratory infectious diseases impose in an intimately connected world. Unprecedented containment and mitigation policies have been implemented in an effort to limit the spread of COVID-19, including travel restrictions, screening and testing of travelers, isolation and quarantine, and school closures. A key goal of such policies is to decrease the encounters between infected individuals and susceptible individuals and decelerate the rate of transmission. Although such social distancing strategies are critical in the current time of pandemic, it may seem surprising that the current understanding of the routes of host-to-host transmission in respiratory infectious diseases is predicated on a model of disease transmission developed in the 1930s that, by modern standards, seems overly simplified. Implementing public health recommendations based on these older models may limit the effectiveness of the proposed interventions.

Understanding Respiratory Infectious Disease Transmission

In 1897, Carl Flügge showed that pathogens were present in expiratory droplets large enough to settle around an infected individual. “Droplet transmission” by contact with the ejected and infected fluid phase of droplets was thought to be the primary route for respiratory transmission of diseases. This view prevailed until William F. Wells focused on tuberculosis transmission in the 1930s and dichotomized respiratory droplet emissions into “large” and “small” droplets. According to Wells, isolated droplets are emitted upon exhalation. Large droplets settle faster than they evaporate, contaminating the immediate vicinity of the infected individual. In contrast, small droplets evaporate faster than they settle. In this model, as small droplets transition from the warm and moist conditions of the respiratory system to the colder and drier outside environment, they evaporate and form residual particulates made of the dried material from the original droplets. These residual particulates are referred to as droplet nuclei or aerosols.

These ideas resulted in a dichotomous classification between large vs small droplets, or droplets vs aerosol, which can then mediate transmission of respiratory disease. Infection control strategies were then developed based on whether a respiratory infectious disease is primarily transmitted via the large or the small droplet route. The dichotomy of large vs small droplets remains at the core of the classification systems of routes of respiratory disease transmission adopted by the World Health Organization and other agencies, such as the Centers for Disease Control and Prevention. These classification systems employ various arbitrary droplet diameter cutoffs, from 5 to 10 μm, to categorize host-to-host transmission as droplets or aerosol routes.1 Such dichotomies continue to underlie current risk management, major recommendations, and allocation of resources for response management associated with infection control, including for COVID-19. Even when maximum containment policies were enforced, the rapid international spread of COVID-19 suggests that using arbitrary droplet size cutoffs may not accurately reflect what actually occurs with respiratory emissions, possibly contributing to the ineffectiveness of some procedures used to limit the spread of respiratory disease.
New Model for Respiratory Emissions Recent work has demonstrated that exhalations, sneezes, and coughs not only consist of mucosalivary droplets following short-range semiballistic emission trajectories but, importantly, are primarily made of a multiphase turbulent gas (a puff) cloud that entrains ambient air and traps and carries within it clusters of droplets with a continuum of droplet sizes (Figure; Video).2,3 The locally moist and warm atmosphere within the turbulent gas cloud allows the contained droplets to evade evaporation for much longer than occurs with isolated droplets. Under these conditions, the lifetime of a droplet could be considerably extended by a factor of up to 1000, from a fraction of a second to minutes. Owing to the forward momentum of the cloud, pathogen-bearing droplets are propelled much farther than if they were emitted in isolation without a turbulent puff cloud trapping and carrying them forward. Given various combinations of an individual patient’s physiology and environmental conditions, such as humidity and temperature, the gas cloud and its payload of pathogen-bearing droplets of all sizes can travel 23 to 27 feet (7-8 m).3,4 Importantly, the range of all droplets, large and small, is extended through their interaction with and trapping within the turbulent gas cloud, compared with the commonly accepted dichotomized droplet model that does not account for the possibility of a hot and moist gas cloud. Moreover, throughout the trajectory, droplets of all sizes settle out or evaporate at rates that depend not only on their size, but also on the degree of turbulence and speed of the gas cloud, coupled with the properties of the ambient environment (temperature, humidity, and airflow). Droplets that settle along the trajectory can contaminate surfaces, while the rest remain trapped and clustered in the moving cloud. Eventually the cloud and its droplet payload lose momentum and coherence, and the remaining droplets within the cloud evaporate, producing residues or droplet nuclei that may stay suspended in the air for hours, following airflow patterns imposed by ventilation or climate-control systems. The evaporation of pathogen-laden droplets in complex biological fluids is poorly understood. The degree and rate of evaporation depend strongly on ambient temperature and humidity conditions, but also on the inner dynamics of the turbulent puff cloud coupled with the composition of the liquid exhaled by the patient. A 2020 report from China demonstrated that severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus particles could be found in the ventilation systems in hospital rooms of patients with COVID-19.5 Finding virus particles in these systems is more consistent with the turbulent gas cloud hypothesis of disease transmission than the dichotomous model because it explains how viable virus particles can travel long distances from patients. Whether these data have clinical implications with respect to COVID-19 is unknown. Implications for Prevention and Precaution Although no studies have directly evaluated the biophysics of droplets and gas cloud formation for patients infected with the SARS-CoV-2 virus, several properties of the exhaled gas cloud and respiratory transmission may apply to this pathogen. If so, this possibility may influence current recommendations intended to minimize the risk for disease transmission. 
In the latest World Health Organization recommendations for COVID-19, health care personnel and other staff are advised to maintain a 3-foot (1-m)6 distance away from a person showing symptoms of disease, such as coughing and sneezing. The Centers for Disease Control and Prevention recommends a 6-foot (2-m) separation.7,8 However, these distances are based on estimates of range that have not considered the possible presence of a high-momentum cloud carrying the droplets long distances. Given the turbulent puff cloud dynamic model, recommendations for separations of 3 to 6 feet (1-2 m) may underestimate the distance, timescale, and persistence over which the cloud and its pathogenic payload travel, thus generating an underappreciated potential exposure range for a health care worker. For these and other reasons, wearing of appropriate personal protection equipment is vitally important for health care workers caring for patients who may be infected, even if they are farther than 6 feet away from a patient. Turbulent gas cloud dynamics should influence the design and recommended use of surgical and other masks. These masks can be used both for source control (ie, reducing spread from an infected person) and for protection of the wearer (ie, preventing spread to an unaffected person). The protective efficacy of N95 masks depends on their ability to filter incoming air from aerosolized droplet nuclei. However, these masks are only designed for a certain range of environmental and local conditions and a limited duration of usage.9 Mask efficacy as source control depends on the ability of the mask to trap or alter the high-momentum gas cloud emission with its pathogenic payload. Peak exhalation speeds can reach up to 33 to 100 feet per second (10-30 m/s), creating a cloud that can span approximately 23 to 27 feet (7-8 m). Protective and source control masks, as well as other protective equipment, should have the ability to repeatedly withstand the kind of high-momentum multiphase turbulent gas cloud that may be ejected during a sneeze or a cough and the exposure from them. Currently used surgical and N95 masks are not tested for these potential characteristics of respiratory emissions. There is a need to understand the biophysics of host-to-host respiratory disease transmission accounting for in-host physiology, pathogenesis, and epidemiological spread of disease. The rapid spread of COVID-19 highlights the need to better understand the dynamics of respiratory disease transmission by better characterizing transmission routes, the role of patient physiology in shaping them, and best approaches for source control to potentially improve protection of front-line workers and prevent disease from spreading to the most vulnerable members of the population. Corresponding Author: Lydia Bourouiba, PhD, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139 (firstname.lastname@example.org). Published Online: March 26, 2020. doi:10.1001/jama.2020.4756 Conflict of Interest Disclosures: None reported. Funding/Support: Dr Bourouiba reported receiving research support from the Smith Family Foundation, the Massachusetts Institute of Technology (MIT) Policy Lab, the MIT Reed Fund, and the Esther and Harold E. Edgerton Career Development chair at MIT. Role of the Funder/Sponsor: The funders had no role in the preparation, review or approval of the manuscript and decision to submit the manuscript for publication. Bourouiba L. 
Turbulent Gas Clouds and Respiratory Pathogen Emissions: Potential Implications for Reducing Transmission of COVID-19. JAMA. 2020;323(18):1837–1838. doi:10.1001/jama.2020.4756
<urn:uuid:3d5a121d-4b40-4814-9e5f-258f6413bfcd>
CC-MAIN-2022-33
https://jamanetwork.com/journals/jama/fullarticle/2763852?appId=scweb
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00405.warc.gz
en
0.917626
2,105
3.5
4
The role of research in handwriting analysis

The goal of the profession of graphology is to expand our understanding of the experiences of individuals, groups, and organizations in contemporary fast-paced and fluid social environments by assessing handwriting. We are committed to expanding our knowledge through handwriting research that will enable us to fulfill our mission of fostering human development and improving efficiency in organizations.

Sources of information for handwriting analysis

Graphology draws its knowledge from a broad range of areas including
- forensic sciences
- computer science

Graphologists must be aware of trends in writing and communication such as:
- handwriting education and practice in the schools
- developments in digital communication technologies
- advances in computing
- demands for knowledge and communication skills in education, service professions, commerce, and society

Categories of handwriting research

Handwriting research falls into two broad categories: exploratory (also called descriptive) research and explanatory research. Although these two types of research have different goals and methods, both approaches apply ethical, systematic, and valid modes of inquiry to evaluate theories, make discoveries, and answer research questions (Flanagan, 2013).

Exploratory (Descriptive) Research

The approach: Exploratory or descriptive research is used to identify, define, examine, or measure elements of handwriting. It identifies the “who, what, when, where, how” (Tripodi and Bender, 2010) of handwriting elements. Descriptive research is a necessary step in conducting comparisons and establishing cause-and-effect relationships, but cannot identify the causes for the information that is obtained (AECT, n.d.; Mitchell and Jolley, 2012).

Goals of descriptive handwriting research:
- discover or measure handwriting features
- observe the brain or body during handwriting
- identify trends in the form or use of handwriting
- develop theories about the production or meaning of handwriting factors

Explanatory Research

The approach: Explanatory research is used to identify factors that cause a behavior or an outcome. It identifies the effects of handwriting and explains the reasons for those outcomes. Explanatory research requires well-defined procedures for uncovering cause-and-effect relationships, measuring the effects of the study variables on the results, and enabling researchers to predict outcomes within a range of certainty (Queirós, Faria, & Almeida, 2017; Solomon & Draine, 2010). In most cases, explanatory or predictive research studies are conducted with experimental or quasi-experimental research designs.

Goals of explanatory handwriting research:
- determine factors that can explain relationships between elements of handwriting and individual characteristics such as brain activity, cognition, learning, emotion, attitudes, preferences, or behavior
- improve strategies for learning
- discover or predict the effects of specified and controlled influences (variables) on results

Types of handwriting research methods with examples

Researchers select tools that are most appropriate for “identifying, analyzing, and reporting patterns … within data” (Braun and Clarke, 2006, in Lochmiller, 2021). Both qualitative and quantitative methods can be used, depending on the research question.

Descriptive study methods

Any aspect of handwriting may be the subject of an exploratory or descriptive research study if ethical standards are maintained.
Past studies have described hand and body movements, brain activity, writing strategies, letter formation, or pressure while writing so that the writing process is better understood. Other studies have examined features found in handwriting samples or obtained information from writers or professionals about their experiences with writing. Common data collection methods include the following (Billups, 2021; Holosko, 2011):
- Direct observation of writing as it is produced
- Direct observation of documents and writing samples
- Case studies
- Cohort, cross-sectional, and longitudinal studies
- Focus groups and interviews, which are used in descriptive research studies to gather information on the writers’ thoughts and experiences with writing (Billups, 2021)

Literature reviews and meta-analyses of existing studies are a form of descriptive research with the aim of synthesizing findings from several sources of evidence (Borenstein et al., 2011).

A special case of descriptive research: correlational research. Correlational studies in handwriting measure the strength of an association among handwriting variables and individual characteristics. They show patterns in the relationships among variables – whether the variables increase together, decrease together, move in opposite directions, or have no relationship at all. Correlational studies can reveal whether a relationship exists, but they do not produce the type of evidence needed to decide why a relationship exists or does not exist. They are most useful when the conditions for writing are not controlled by the researcher (Nurdianingsih, 2018) for practical or ethical reasons.

Correlational research example

Houston (2018) obtained handwriting samples from pupils and their scores for composition. She discovered that quality and organization of the pupils’ compositions improved as their handwriting skills improved. Because she was conducting a correlational study, it is not possible to identify the causes for the changes she observed in handwriting skill and writing quality, since those factors cannot be randomly assigned to students.

Explanatory study methods

Both qualitative and quantitative methods may be used to collect data for the purpose of explaining the reasons for the effect of a variable on handwriting or how handwriting contributes to an individual characteristic. Researchers apply detailed protocols for conducting these studies in a scientific, systematic manner for uncovering cause-and-effect relationships and measuring the effects of the study variables on the results. These procedures allow researchers to predict outcomes within a range of certainty (Queirós, Faria, & Almeida, 2017; Solomon & Draine, 2010). Experimental or quasi-experimental methods are the most commonly used procedures in explanatory research.

Explanatory research example

A study from Johns Hopkins explored literacy learning under three learning conditions - handwriting, typing, or visual practice (Wiley & Rapp, 2021). In this study, the researchers randomly assigned participants to one of the learning conditions and maintained close control over other possible influences on the learning process. The findings showed that literacy learning was superior in the handwriting group: training was faster, and participants achieved statistically significantly higher scores for letter recognition, writing, letter naming, and word reading than participants in the typing and visual display groups.
The experimental research design enabled the researchers to discover a cause-and-effect relationship between handwriting and effective literacy learning, leading them to conclude that “handwriting practice provides greater benefits than either typing or visual practice for a wide range of tasks [in literacy learning]” (Wiley & Rapp, 2021, p. 1098).

Scientific graphological research is urgently needed to maintain integrity and accuracy in the profession and is strongly supported by the American Handwriting Analysis Foundation (AHAF).

Thinking about research?

We’ll talk about research ideas and resources and provide support. You don’t need to be an expert in math or statistics to be a researcher. Beginners are especially welcome. AHAF provides help to researchers throughout their projects. Assistance from the AHAF Research Chair is available to AHAF members for developing feasible research ideas, suggesting research methods, and assisting with data analysis and interpretation. AHAF highly recommends contacting the AHAF Research Chair or a research consultant of your choice before proceeding with a research study to ensure the best use of your time and resources.

References

Association for Educational Communication and Technology (AECT) (n.d.). 41.1 What is descriptive research? In AECT (Ed.), The handbook of research for educational communications and technology. https://members.aect.org/edtech/ed1/41/41-01.html
Billups, F. D. (2021). Qualitative data collection tools: Design, development, and applications. Sage.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. (2011). Introduction to meta-analysis. Wiley.
Creswell, J. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd ed.). Sage.
Flanagan, T. (2013). The scientific method and why it matters. C2C Journal, 7(1), 4-6.
Holosko, M. J. (2010). An overview of qualitative research methods. In B. Thyer (Ed.), The handbook of social work research methods (2nd ed., pp. 340-354). Sage.
Houston, J. L. (2018). A correlational study of 5th [sic] students handwriting legibility and scores on writing samples in a northwest Georgia school. [Unpublished doctoral dissertation]. Liberty University.
Lochmiller, C. R. (2021). Conducting thematic analysis with qualitative data. The Qualitative Report, 26(6), 2029-2044. https://doi.org/10.46743/2160-3715/2021.5008
Mitchell, M. L., & Jolley, J. M. (2012). Research design explained (8th ed.). Wadsworth.
Queirós, A., Faria, D., & Almeida, F. (2017). Strengths and limitations of qualitative and quantitative research methods. European Journal of Education Studies, 3(9), 369-386. doi: 10.5281/zenodo.887089
Snowdon, D. A., Kemper, S. J., Mortimer, J. A., Greiner, L. H., Wekstein, D. R., & Markesbery, W. R. (1996). Linguistic ability in early life and cognitive function and Alzheimer's disease in late life. Findings from the Nun Study. JAMA, 275(7), 528-532.
Solomon, P., & Draine, J. (2010). An overview of quantitative research methods. In B. Thyer (Ed.), The handbook of social work research methods (2nd ed., pp. 26-36). Sage.
Tripodi, S., & Bender, K. (2010). Definition and purpose of descriptive research. In B. Thyer (Ed.), The handbook of social work research methods (2nd ed., pp. 120-130). Sage.
Wiley, R. W., & Rapp, B. (2021). The effects of handwriting experience on literacy learning. Psychological Science, 32(7), 1086-1103.
<urn:uuid:c635a82d-b866-47a5-9da0-698424224b13>
CC-MAIN-2022-33
https://ahafhandwriting.org/learn/research/group/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00401.warc.gz
en
0.874767
2,213
3
3
Humanity has constantly been looking to mother earth for ways to better our existence and further our development in science, health, technology, and sustainability. We seek out aspects of mother nature’s make-up that may be able to cure cancer or help us live longer. We seek out ways to improve our technology and make it more sustainable. So, it is no surprise that we find natural substances on our planet that can have different applications to better our lives. One of these substances is Euglena Gracilis, a form of alga that has the potential to serve many applications in our modern world. This alga has been known for over 100 years; however, practical applications are only now starting to emerge. There are many things we are yet to discover that may have applications in bioproducts, and there are many things we have discovered whose applications in bioproducts we are yet to recognize. However, Euglena Gracilis has real potential in many biofields, and it may well change many aspects of our lives without us paying much mind to it. This tiny alga has synthesis and application potential beyond what you may think. But first, what is it exactly?

What Is Euglena Gracilis?

Euglena Gracilis is a species of single-celled freshwater alga. In terms of structure, it is a mixotroph capable of feeding via photosynthesis or phagocytosis, and it has secondary chloroplasts. Its cell surface is highly flexible, which allows it to change its shape rather dramatically, making it capable of decreasing its size by 80% and even changing from a thin cell to a sphere! It is a member of the euglenid family, a very common and well-known line of both freshwater and marine protists recognized by the presence of pellicles, which are a series of protein strips just under the external membrane. Euglena Gracilis is one of many plastid-bearing euglenophytes; these are easily identified by their photosynthesizing organelles, which are bound by three membranes and whose genomes show a clear green-algal character. In layman’s terms, Euglena Gracilis is simply a common alga that can feed in many ways, including photosynthesis, which most people associate with plant life in general. Before we move on to its bioproduct applications and synthesis, let’s understand it a little more first.

Understanding Euglena Gracilis

Euglena Gracilis was discovered many years ago; however, it has recently been found to have many potential applications in biofields. It has become a popular candidate for application-focused research and potentially even commercial applications. Although it is simply an alga, not so different from the many other Euglena algae, we may see it applied in many fields in the future. It already has applications as a dietary supplement, which in some cases we are already seeing, because it is an excellent source of protein, vitamins, and more. It has also been noted as potentially marketable as an immunostimulatory agent. Euglena Gracilis can be cultivated under many different conditions, and it has many applications beyond dietary supplementation. This alga has much positive potential in aspects of health and even biofuels, but we will get to that later. Euglena Gracilis is also used a great deal in laboratories as a ‘model organism’.
This is because a majority of Euglena species have chloroplasts within their cell bodies, which allows them to feed as a plant does, through autotrophy and photosynthesis. Yet Euglena is also capable of feeding heterotrophically, as an animal does, making it especially efficient at taking in and gaining energy. However, its internal properties also hold much promise for applications in multiple fields. So, let’s consider where we are already seeing this alga being used and where we may see it used in the future, as scientists consider its potential for biological application.

Abstract Concepts Of Euglena Gracilis For Biofuels

In recent times this unique unicellular microalga has come to be regarded as one of the most promising candidate species for a microalgal feedstock for biofuels. It has been discovered that its lipids, especially its wax esters, are well suited for use in biodiesel and even jet fuels. Of course, we must also consider that culturing Euglena Gracilis on wastewater effluent can greatly improve its economic value in potential biofuel production. It must be noted that enhancing its biomass productivity is essential in order to create these more economical biofuels in a production system. However, some bacteria have been found to enhance microalgal growth by creating a more suitable microenvironment for extensive growth. This has been studied and considered, but it may be some time before we see this alga actually put to use in this way. Still, studies have shown that particular bacteria can enhance the biomass of Euglena Gracilis in ways suited to biofuel production. While this is not the most prominent bioproduct that Euglena Gracilis is being considered for, it is promising, amid the environmental crisis and the search for alternative fuels, that Euglena may have a part to play in a more economical and sustainable alternative to the fuels we currently use. Yet action on this is still to be seen.

How Euglena Gracilis’ Stress Tolerance Is Beneficial

Euglena Gracilis has great stress tolerance, and this is something that makes it a much sought-after component in many bioproducts. Commercially, of course, it synthesizes valuable compounds such as lipids, provitamins, and essential amino acids, making it a great component for supplements (more on this later). However, its exceptional natural tolerance of external stress also makes it very attractive. It can tolerate anything from ionizing radiation to acidic growing conditions. It is even able to sequester heavy metals. Its endurance and adaptability make it ideal for harnessing in bioremediation of polluted waters. This is something we will discuss in more detail in a moment. However, it is well worth considering the breadth of application of a component that is so hardy and durable. This tolerance makes it suitable for many bioproducts and opens up plenty of applications that more sensitive sources could not withstand.

Environmental Application of Euglena Gracilis

There are many regenerative applications that Euglena Gracilis can be used for. To start, it is likely to have a grand future in aquatic food systems such as fish feeds, as well as in converting carbon dioxide into oxygen and improving water quality.
Many recent studies show positive effects on the growth conditions and cell multiplication of this particular microalga, especially in aquatic feeds. However, it is also believed that Euglena Gracilis has the potential to help clean polluted waters and assist in remedying waters with exceedingly high levels of CO2. In many ways, this could be an application that helps to revitalize damaged environments. There has also been much interest in aquatic food production, and studies have shown that this microalga can produce plentiful biomass and is capable of converting carbon dioxide to oxygen even at minimal light intensity.

Potential Commercialization & Applications

As well as environmental and biofuel applications, there is potential for much more commercialization of Euglena Gracilis. A great deal of research has been conducted into the commercialization of these microalgae, especially application-driven research. This is especially true in the endeavors for its use as a dietary supplement. However, while we will talk about this more in a moment, it is worth noting that bioproducts from Euglena Gracilis can be produced under a vast array of conditions, and yields are usually high in comparison with those seen in other microalgal systems. Insights into its complicated metabolism have revealed distinctive metabolic routes that may provide science and biology with new starting points for enhancing products through possible genetic modification of this particular organism.

Use As A Dietary Supplement

Now, we know you’ve been waiting for this. We do know that Euglena Gracilis has made its way into the market as a dietary supplement already. It is useful in this regard due to its many properties, especially its proteins, provitamins, and lipids. It is also known for being nutritionally valuable as feed for livestock and fish. Dietary supplements that focus on the microalga are also believed to have the potential to improve the sleep quality and work efficiency of individuals who are suffering from excessive stress. Good news, right? You can even find dietary supplements that include Euglena Gracilis online for commercial sale now, as it is known for being high in many positive attributes that humans need in their diets. So while it is used for feeding livestock and fish, it is also ideal for use in supplements.

Application As A Supplement For Immune Function

On this note, one of the qualities sought after in Euglena Gracilis is the immune function assistance it provides. It is believed to reduce upper respiratory tract infections, such as those caused by colds, flu, and sinus infections. Most of us experience these at least once a year, and it is possible that Euglena Gracilis is capable of increasing immune function and helping to reduce symptoms of URTIs in most people.

Euglena Gracilis In Sustainable Development

There is much to be said about Euglena Gracilis’ potential in health applications, environmental impact, agriculture, and even biofuels, but there is more than just these factors to consider. There is hope for Euglena Gracilis to make an impact and contribute to sustainable development across industries. Microalgae are one of the resources that show promise in the 5 F’s: food, fuel, fertilizer, feed, and fiber.
Euglena Gracilis is one of the microalgae with the most potential to help humanity reach the 5 F’s goal. It has already been used on a commercial scale for ingredients in foods and cosmetics and has undergone research in other fields too. We may be seeing more of Euglena Gracilis in the years to come.

Join our EAT Community to learn more from our knowledgeable members, teachers, and coaches. This is one opportunity you would not want to miss!
<urn:uuid:efa0222a-165e-4c2a-9477-7375bc6c6445>
CC-MAIN-2022-33
https://ecolonomics.org/bioproducts-from-euglena-gracilis-synthesis-and-applications/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00206.warc.gz
en
0.966155
2,388
3.1875
3
Can Psychology Measure Awe? Psychologists get their hands into everything, but the objectivity of their science is questionable. People are complex beings. They can be manipulated, but they can also resist manipulation. It’s impossible to know all the background factors and variables they may exhibit in certain situations. Let’s see how well science can measure “awe” – which psychologists at the University of Buffalo took on as a science project. Did they gather true knowledge, or just buffalo their readers? We experience the emotion of awe when exposed to something larger than the self. Awe can arise from the practices of a particular faith tradition or a grand natural vista, but it does not necessarily have to be dramatic. Here were their variables: - Self-distancing: the ability to position yourself as outside your experience, as if a bystander. - Self-immersion: the feeling of being inside your experience, seeing yourself through your own eyes. - Religion or nature: Coping strategies when under stress. - Performance stressor: a task likely to elicit a challenge or threat response. - Challenge: a positive stressor a person may feel able to handle. - Threat: a negative stressor a person may feel is unmanageable. The question: Does an experience of awe produce a challenge response or a threat response? The hypothesis: People who engage in self-distancing when experiencing awe respond positively to a performance stressor. People who engage in self-immersion, by contrast, respond negatively. The experiment: “The researchers had 182 participants complete a measure of spontaneous self-distancing. They were then exposed to either an awe-inducing nature video or a neutral documentary on small sea creatures and later asked to prepare and deliver a two-minute speech on a setback or obstacle they experienced.” The measurements: Heart rate, amount of blood pumped per minute, and the flow of blood into the tissues. These measurements contribute to the “bio-psycho-social model of challenge and threat.” Conclusions: The hypothesis was confirmed. Previous studies had shown the value of awe for well-being. This study, the authors feel, refines the effects of awe, showing it is not always positive. “To maximally benefit from awe when facing subsequent stressors, we may need to take a step back from ourselves before we take it all in,” says Mark Seery, associate professor of psychology at the University of Buffalo. Critique: This all looks scientifically legit, doesn’t it? The psychologist asks a question needing clarification. He or she (we’ll use “he” since Mark was featured in the press release) defines terms. He devises a hypothesis and tests it, using standard procedures to gather data on measurable quantities on a sufficient sample of human subjects. He uses uniform techniques on the subjects, using controls (“a neutral documentary on small sea creatures” – maybe watching goldfish in a bowl). He interprets the data to check whether the hypothesis was confirmed or falsified. Based on his results, he makes recommendations to the public for their benefit. Who could criticize this beautiful example of science in action? We agree that a sense of awe is good, but we have some questions about this project. We want to know if the results are due to confirmation bias, chance or poor experimental design. What dubious assumptions went into the project design and its conclusions? Let’s note some potential problems. - Subjectivity. Not all “awe” is the same. Religious awe may be different from nature awe. 
In religion, people may feel awe at God’s love or his wrath. In nature, people may feel awe at a king cobra, lightning, or a peaceful spring wildflower scene. Are these sensations of awe really comparable for scientific analysis? - Imprecision. The definitions of awe, religion, self-immersion and other terms seem squishy. - Sample bias. The 182 participants may not represent a valid enough sample of the human population to draw generalizations. - Mental variability. Not everyone experiences awe at the same level. - Object variability. Not everyone experiences awe at the same objects. - Subject surroundings. The subject could have been affected by sleep, diet, or previous stresses before participating, which were unknown to the investigator. The attractiveness of the investigator, or even the lighting and decor in the room could have unconscious influences. - Subject understanding. Some of the subjects may not have understood what the project was about. - Subject obedience. Persons may have differed in their ability to follow instructions. - Stressor variability. Not everyone is stressed by giving a speech; some may fear it, but others may enjoy it. - Physical variability. Blood flow does not necessarily respond the same way in all humans under awe or stress. - Subject integrity. Participants can lie about their experiences, or fake out the psychologist for various reasons. - Replication. Would the results be replicated by a different team using the same method with different people in a different country, brought up in a different culture, social class and education? - Confirmation bias. The psychologists may have had a hunch what the correct conclusion should be, and may have unconsciously steered the data collection to confirm it. - Worldview bias. Did the psychologists’ beliefs about human nature color their experimental design and conclusions? - Other bias. Were the psychologists influenced by peer pressure, publish or perish pressure, funding pressure (to confirm what the funding source wanted), desire for fame, desire to promote their institution, or other motivations other than a purely objective desire to know? We’re just getting started with potential problems here. There may be “under-determination of theory by data” in this experiment (i.e., different theories might account for the same data). Did the scientists eliminate all sources of subjectivity and bias? Did they attempt to falsify their conclusion? Was the peer review adequate? Readers may be able to lengthen this list of problems. Soon, the whole project might look very suspect. Understand that we’re not trying to be critical, because we pretty much agree with the conclusions, that awe is a good and healthy emotion to have, and the ability to distance yourself from stressors probably improves the awe experience. What we’re illustrating is that even with apparently well-designed and well-executed psychological experiments, all kinds of issues can diminish the value of any conclusions. If that happens with a fairly neutral psychology project like this, how much more with more controversial psychological claims? The closet of psychology is stuffed with skeletons: phrenology, racism, female hysteria, lobotomy, shock therapy – embarrassments that psychology departments would rather forget. Is psychology a science at all? Some parts of it may be. For instance, educational psychology can produce testable results, leading to advice for teachers and students on best methods for memorization or comprehension. 
Other parts, however, get really weird. Psychological theories come and go more often than women’s fashions. Some parts of psychology are clearly evil, justifying sexual perversions (Kinsey) or criminal behavior (Clarence Darrow). The worst charlatans are the evolutionary psychologists who try to explain all human behavior as rooted in our ape-like past – or even our bacteria-like past. Many evolutionary biologists cannot stomach the ridiculous ideas of evolutionary psychology. You can’t put humans in a test tube. If the Harvard Law* applies to animal subjects, how much more to human subjects! How can fallible humans look into the minds of other humans and understand what is going on in there? Only God knows the heart. *Harvard Law: “Under the most rigorously controlled conditions of pressure, temperature, volume, humidity, and other variables, the organism will do as it darn well pleases.” Our most severe critique of psychology is that it is a replacement for religion, masquerading as science. Allowing for some exceptions, as we said concerning effective memorization techniques and the like, it is predominantly a false religion. It has its own theology, anthropology, and soteriology. It denies sin. It denies a Savior. It denies a Creator. Some secular psychologists even deny the mind and consciousness. Most psychologists have a different god: the BBBB (Big Brother Bearded Buddha, Darwin). Who would want to trust these guys? To the extent psychology is wrong, it can be dangerously wrong. To the extent it is right, you don’t need it. If you have the Manufacturer’s Handbook, why would you go to sinful humans who deny what the Creator has said about human nature? Why would any Bible-preaching pastor send his sheep to the wolves? Why would any Christian counselor blend Biblical truth with secular lies, creating a mishmash that is oxymoronically called “Christian psychology”? Jesus is the Good Shepherd who loves the sheep and cares for them. Scientists sometimes analyze things to death. Measuring blood flow in subjects asked artificial questions about their feelings watching “a neutral documentary on small sea creatures” is awful in a way; it takes the awe out of awe. If you want to learn about awe, go out into creation and forget about yourself. Turn your attention to God who made it. Go to a Bible-teaching church and join in song, O Lord my God, when I in awesome wonder Consider all the worlds Thy hands have made; I see the stars; I hear the rolling thunder, Thy power throughout the universe displayed. Then sings my soul, my Savior, God, to Thee: How great Thou art! How great Thou art! Resource: Get more AWE into your life (Adventure • Worship • Education) with Creation Safaris! Start a group or join a like-minded ministry and “Escape to Reality,” where there is Awe in plenty in the great outdoors. Learn more about this on CreationSafaris.com. Creation Safaris is a sister ministry of Creation-Evolution Headlines, sponsored by Master Plan Association.
By Dr. G. Shreekumar Menon Mangaluru, July 31, 2021: Current statistics indicate that the number of drug-using teenagers is increasing very rapidly all over India, especially in educational hubs. Schools, colleges, universities and Higher Education Institutes (HEIs) are the prime targets for drug traffickers and peddlers, as the student population is sizeable, which assures them of a steady income and, perhaps, lifelong clientele. Drug use in early youth can affect development, and can lead to long-term use and dependence more readily than initiation in adulthood. The targeted exploitation of teenagers by organized criminal groups in the drug trade is a silent and unseen enemy for young people. Drug use, drug-related violence and harms, and involvement in the drug trade are all deleterious for teenagers and society. Realizing this, the United Nations General Assembly Special Session (UNGASS) as early as 2016 had the theme “A better tomorrow for the world’s youth”. Drug addiction has become a worldwide problem, especially among teenagers. Many young people become dependent on different types of harmful substances and stimulating medicines that come hand-in-hand with a narcotic effect. The lives of drug addicts become damaged in all aspects, as they dissociate from their families and live in a different world, inhabited by other drug addicts. As drug addiction is an expensive habit, immediate necessities are renounced to spend money on drugs, and then the desperate search is on for ways to earn money illegally to fund drug consumption. Most teenagers who get sucked into the habit of drug consumption are not aware of the long-term effects of drug abuse and addiction. There are different types of street drugs that are peddled, such as cocaine, meth, marijuana, crack, heroin, ganja and hashish. As teenagers are not aware of the chemical composition of the drugs, or of their taste, smell and dosage, they consume what the peddler offers in blind faith. Many of the drugs sold in the market are adulterated, and nobody knows what kinds of adulterants are used. From casual consumption there is a progressive trend to consume drugs more frequently, which ultimately leads to repeated, compulsive consumption. What may start off as a party habit progresses quickly to everyday use; a casual habit transforms into a chronic addiction in a short span of time. Addicts find it impossible to control their intake of drugs, as a result of which they fail to fulfil their day-to-day responsibilities and obligations. Drug addiction becomes drug dependency, as the person becomes a slave to a particular drug. Drug addiction is basically a brain disease that transforms the functioning of the teenager’s brain. During adolescence, a young person goes through various biological and psychological changes. In addition to the physical changes that mark growing up, the teen’s brain is also developing ways to work more effectively. The first organ in the body to feel the impact of drug consumption is the brain. The human brain is reckoned as a ‘Mission Control’, giving the person an identity and the capacity to think, speak, feel, move and breathe. Weighing just around 1300 to 1400 grams, the brain is a supercomputer performing complex tasks at enormous speed. The brain is constantly working, even when the person is in deep sleep. Information is continuously being received, processed, and integrated, thereby giving the person exceptional mental ability to perform multifarious activities.
The brain is composed of many parts that work together as a team, and each of these parts has a specific role. When narcotic drugs enter the brain, they interfere with its functioning and signalling processes and can eventually lower the brain’s capacity and efficiency. The drugs repeatedly flood the brain with chemicals such as serotonin and dopamine. The pleasure centres of a teenager’s brain develop faster than the parts of the brain responsible for decision-making and risk analysis. This is one of the reasons that teenagers quickly take to drugs. Over time, drug use can lead to addiction, a devastating brain disease in which people can’t stop using drugs even if they want to. There is an uncontrollable desire to consume drugs, even to the extent of forsaking normal food. The most common signs and symptoms of drug addiction are obsession with a particular substance, loss of control over the usage of drugs, and abandoning the activities the person used to enjoy. Drug addiction has a long-term impact on life, and one may develop severe symptoms such as fatigue, trembling, depression, anxiety, headache, insomnia, chills and sweating, paranoia, behaviour changes, dilated pupils, poor coordination, and nausea. Drug abuse affects teen brain development by:
•Interfering with neurotransmitters and damaging connections within the brain
•Reducing the ability to experience pleasure
•Creating problems with memory
•Causing missed opportunities during a period of heightened learning potential
•Ingraining expectations of unhealthy habits into brain circuitry
•Inhibiting development of perceptual abilities
Because drug abuse can muddy reasoning and encourage rash decisions, there are many side effects that go far beyond the biological and physiological aspects. Some of these include:
•Criminal records that cannot be expunged
•Sexually transmitted diseases
•Wasted academic opportunities
•Late start in a chosen career path
•Damaged relationships with friends and family
All forms of cannabis have negative physical and mental effects. A substantial increase in heartbeat, bloodshot eyes, a dry mouth and throat, and increased appetite are characteristic of its use. Use of cannabis may impair or reduce short-term memory and comprehension, alter the sense of time, and reduce the ability to perform tasks requiring concentration and coordination, for example driving. Research shows that students do not retain knowledge when under the influence of cannabis. Motivation and cognition may be altered, making the acquisition of new information difficult. Marijuana can also produce paranoia and psychosis. Because users often inhale the unfiltered smoke deeply and then hold it in the lungs for as long as possible, marijuana is damaging to the lungs and pulmonary system; the smoke contains more cancer-causing agents than tobacco smoke. Long-term users of cannabis and marijuana may develop psychological dependency and require more of the drug to get the same effect. The drug can become the centre of their lives. Chronic use leads to damaged lungs, chest pains, bronchitis, emphysema, hallucinations and fantasies, abnormal sperm formation in males, and decreased ovulation or increased menstrual irregularities in females. Heroin lowers the perception of pain. The use of this drug leads to euphoria, reduced appetite, chronic bronchitis, tetanus, hepatitis and endocarditis. Overdose leads to reduced oxygen to the brain, suppressed respiration, coma or even death.
Cocaine is applied to the gums of the mouth, tongue, eyelids or private parts to delay orgasm; it is also injected and, most commonly, snorted. Its use causes sleeplessness, excitement, loss of appetite, increased sexual desire and a feeling of self-satisfaction. Prolonged use leads to loss of weight, impotence, blindness, orgasm failure, stomach problems, and liver and lung damage. Overdose leads to death due to respiratory paralysis or cardiac arrest. It alters judgement, vision, coordination and speech, and also leads to risk-taking behaviour. Generally, use of all kinds of drugs increases the likelihood of being involved in traffic accidents, which may lead to death or injury. Users are also likely to get involved in fights, and these get them into trouble with the law. Because drugs lead to irresponsible sexual behaviour, girls abusing drugs are likely to get pregnant and have multiple abortions, leading to severe emotional crises. Many of the employed youth who abuse drugs lose their jobs due to absenteeism and sometimes inefficiency. Drug use is known to lower performance and productivity. In some cases, youngsters may resort to embezzlement, forgery, corruption, bribery and extortion in order to finance their drug habits. Prolonged use of drugs in some situations leads to psychiatric disorders such as delusional states and chronic dementia. Overdoses of some drugs cause death, and prolonged use of most of them leads to a host of life-threatening diseases. Drug use leads to poor performance in learning. Drugs erode self-discipline and motivation; their use is closely tied to being truant and dropping out of school. Drug use is associated with crime and misconduct that disrupt the maintenance of an orderly and safe school atmosphere conducive to learning. Drug use has also been linked to law-breaking and involvement in other forms of crime. Drug users engage in fights, disruption and disrespect to others. Some steal from family members, friends or employers to buy drugs. Women are more sensitive to drugs than men, and hence need less exposure to experience similar effects. The emotional effects of drug addiction include mood swings, depression, violence, anxiety, a decrease in everyday activities, hallucinations, confusion, and psychological issues. The effects of drug addiction are also seen in the babies of drug abusers and can affect them throughout their lives. Other effects of drug addiction include heart attack, irregular heartbeat, contraction of HIV, respiratory problems, lung cancer, abdominal pain, kidney damage, liver problems, brain damage, stroke, seizures, and changes in appetite. The impact of drug addiction can be far-reaching and affects every organ of the body. Excessive usage of drugs can weaken the immune system and increase susceptibility to infection. Drug addiction can cause the liver to work harder, causing significant liver damage or failure. The long-term effects of drug addiction can have disastrous consequences on physical and mental health. As the body adapts to the drugs, it needs increasing amounts to experience the desired outcome. As the individual continues to increase the dosage, he or she may develop physical dependence, and may face deadly withdrawal symptoms once he or she stops using the substance.
When teenagers start abusing drugs, there are many warning signs to look out for, such as:
•Sudden changes in friends, eating habits, sleeping patterns, physical appearance, coordination and school performance
•Irresponsible behaviour, poor judgment and general lack of interest
•Breaking rules or withdrawing from the family
•The presence of medicine containers and small pill boxes
The spiralling drug abuse among teenagers is indeed a matter of great concern. Enforcement agencies and health professionals are doing exemplary work in their respective domains to curb the drug menace. It is of utmost importance that educational institutions organize Drug Awareness Programs to highlight the health risks that drugs pose to a teenager’s body. When drugs interfere with normal human growth, there can be irreversible changes leading to lifelong complications. Dr. G. Shreekumar Menon IRS (Rtd), Ph.D, is a former Director General of the National Academy of Customs Indirect Taxes and Narcotics & Multi-Disciplinary School of Economic Intelligence, India; Fellow, James Martin Centre for Non-Proliferation Studies, USA; Fellow, Centre for International Trade & Security, University of Georgia, USA; Public Administration, Maxwell School of Public Administration, Syracuse University, USA; and AOTS Scholar, Japan. He can be contacted at firstname.lastname@example.org
How crowdfunding helps our environment? We’re witnessing a major shift in climate change caused by CO2 and other harmful greenhouse-gas emissions that is increasingly trapping heat in the atmosphere, as a result; raising global temperatures. In other words; our planet is becoming hotter! The main gases responsible for the greenhouse effect include carbon dioxide, methane, nitrous oxide, natural water vapor, and fluorinated synthetic man-made gases. Over 2000 chemicals are used to process textiles, chemicals like mercury, lead, formaldehyde and chlorine. The global apparel and footwear industry produced more greenhouse-gas emissions than Germany, France and the UK combined in 2018, totalling 2.1 billion metric tons of CO2 emissions — approximately 4% of total global emissions. Without significant action, the figure could rise to around 2.7 billion metric tons a year by 2030. The international fashion industry must urgently cut emissions by 50% to reach a 1.5 °C target. If the fashion industry continues to embrace decarbonization initiatives at its current pace, it will cap emissions at around 2.1 billion metric tons a year by 2030, says a new report from McKinsey & Company and the Global Fashion Agenda. To put this in perspective, a single pair of jeans requires 1 kilogram of cotton, which requires about 7,500-10,000 liters of water, that’s about 10 years’ worth of drinking water for one person, according to UN estimates. https://unfccc.int/news/un-helps-fashion-industry-shift-to-low-carbon Jeans manufacturer Levi Strauss assesses the environmental impact of producing one pair of their iconic 501 jeans, it equates to 33.4 kg of carbon dioxide - about a 69 miles drive in an average US car, not to mention the amount of water it requires to wash over its lifetime. Today, 76.4 million tons of clothing are produced annually worldwide. This corresponds to the weight of almost 55 million cars. Greenhouse-gas emissions are causing extreme weather events including, heat waves, hurricanes, floods, droughts, rising sea levels due to melting glaciers, as well as air pollution. This poses a direct threat to us, to plants and to all wildlife. We must act now to reduce carbon emissions. The entire value chain from farms, to factories, to policy makers, to brands, investors and consumers, all must actively and responsibly participate. All key participants in the Fashion industry can play a significant role in decarbonization and the stabilization of climate change. However; The collective mindset and actions of all fashion industry participants must be aligned with the goal of reducing carbon emissions. The question is; are participants fully aware of the urgency and the dire not-so-distant outcomes for not taking responsible and immediate actions? What strategies, technologies, and sustainable business models can brands embrace? Where do we start? In this article, we’re going to shed light on crowdfunding & pre-orders as effective sustainable business models. CROWDFUNDING & PRE-ORDERS There are numerous advantages for fashion brands to embrace crowdfunding & pre-orders into their business. It helps brands build more efficient, sustainable, and scalable businesses that align with the goal of reducing carbon emissions. WHAT IS CROWDFUNDING? There are few types of crowdfunding models: reward-based, equity-based, donation-based and debt-based. In the fashion industry, we’re mostly talking about physical products, therefore, we’ll focus on the reward-based crowdfunding & pre-order models. 
Rewards-based crowdfunding is the process of funding the launch or initial production of a new product by raising capital collectively from individuals (backers). In return, backers get a special deal on a limited or exclusive product that they expect to receive at a later stage. WHAT IS A PRE-ORDER? A pre-order is a way to allow customers to place a reservation for a product prior to its scheduled release date. It’s a form of securing a must-have item before it sells out. A pre-order can be partially or fully paid upfront, or paid when the item is ready to ship. A SUSTAINABLE COMMERCE JOURNEY - LESS IS MORE! Crowdfunding ——> Pre-order ——> Commerce ——> Inbound Recycling. Crowdfunding and pre-order models are types of campaign-based commerce; both are part of a 4-stage sustainable commerce journey: Crowdfunding, Pre-order, Commerce (online & offline), and Inbound recycling. Choosing the right model to start with depends on the product’s development stage and type. You could start with crowdfunding as a way to initially assess demand and get your idea into production; if your crowdfunding campaign is successful, you can start taking pre-orders to fund multiple rounds of production, and with more demand, you could continue taking regular orders online or offline. If demand decreases, you could run a flash-sale campaign to deplete your stock. Finally, you may also consider offering an inbound recycling service on your website. To summarize: crowdfunding is a way to support early-stage product development, taking an idea or a design concept into production. The second phase is pre-order, where you can fund multiple rounds of production. The third phase is the standard online or offline commerce model, and finally, inbound recycling is a way to encourage your customers to meaningfully engage in the recycling process. The majority of fashion brands who follow the traditional retail route tend to go directly for the commerce model, skipping over crowdfunding and pre-order models. This traditional retail route results in overproduction: wasted materials, exhausted manufacturing resources, and warehousing, transportation, distribution and recycling inefficiencies. But there are alternative smart online solutions that focus on the core of the problem, overproduction. Those online solutions are crowdfunding, pre-order, and inbound recycling. WHY CROWDFUNDING & PRE-ORDER? One of the fundamental challenges that fashion brands face is validating and assessing the demand for a new product or a collection, and understanding how this demand converts to sales, in order to produce or order exact quantities, better manage cash flow, and avoid production waste. The way brands generally assess future production volumes is through forecasts and best guesses, but those predictions and assumptions are in most cases contradicted by actual customer demand. Attempting to understand and analyze customers’ behaviors, tastes and preferences is a highly complex endeavor: it’s expensive, unpredictable, and requires large amounts of data collected over extended periods of time – the type of data that isn’t readily available to startups or to small and medium-size fashion businesses. Recently, however, especially during the coronavirus pandemic, agile brands have been embracing independent crowdfunding and pre-order models directly on their online stores in support of sustainability and conscious growth.
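To make the demand-validation idea concrete, here is a minimal sketch of how collected pre-orders and reservations might be turned into a production-run size instead of a forecast-based guess. The numbers, field names and the conversion-rate assumption are purely illustrative; they do not come from any particular platform or brand.

```python
from dataclasses import dataclass

@dataclass
class VariantDemand:
    """Validated demand signals for one product variant (e.g. a size/colour)."""
    variant: str
    paid_preorders: int   # confirmed, paid pre-orders
    reservations: int     # "reserve now, pay later" sign-ups (emails collected)

def production_run_size(demand: VariantDemand,
                        reservation_conversion: float = 0.6,
                        safety_margin: float = 0.05,
                        min_order_quantity: int = 0) -> int:
    """Size a production run from actual demand rather than a forecast.

    Paid pre-orders are counted in full; reservations are discounted by an
    assumed conversion rate; a small safety margin covers returns/defects.
    """
    expected = demand.paid_preorders + demand.reservations * reservation_conversion
    sized = round(expected * (1 + safety_margin))
    return max(sized, min_order_quantity)

if __name__ == "__main__":
    jeans_m = VariantDemand(variant="organic-denim / M", paid_preorders=140, reservations=90)
    print(production_run_size(jeans_m, min_order_quantity=100))  # -> 204 units
```

The point of the sketch is simply that every input is an observed signal from a campaign, so the order placed with the manufacturer tracks real demand rather than a seasonal forecast.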
Here are a few examples of sustainability conscious brands who use pre-orders to help our environment take less waste. Here are a few other links for brands who are doing a great job towards sustainability So, let’s have a look at some of the benefits for both; crowdfunding and pre-order sustainable models: PHASE 1 - CROWDFUNDING MODEL BENEFITS - Assess and validate product likability and demand before committing to production. - Break into the industry fast and without wasting resources. - Build up initial hype for new products and collections. - Collect and understand customer’s needs and preferences to improve product design and to quantify demand on product variant level. - Enhance customer segmentation and targeting. - Drive traffic to your own website and improve marketing funnels conversions. - Raise capital to fund an initial production round. - Understand the challenges that come with the first production run. - Lower cost of production, lower prices for customers and increased profit margins. PHASE 2 - PRE-ORDER MODEL BENEFITS - Iteratively scale production volumes based on actual demand. - Healthier cash-flow and control over production cycles. - Eliminate overproduction by producing the right amount of the right product to the right customers. - Promotes the use of sustainable eco-friendly materials. - Less seasonal production pressure, more of agile and timeless production. - Get to know winning products fast. - Build stronger relationships with your manufacturers and suppliers by managing their production - volume & time - expectations. - Negotiate better prices and lead times with manufacturers and suppliers. - Less warehousing, insurance and transportation costs. - Helps build awareness and contribution to sustainability. - Better understand customer behavior and preferences over extended periods. - Build momentum and hype, funnel more traffic to your online store. - Increase customer engagement, trust and loyalty. - Slows down impulsive purchase behavior and increases quality awareness. With all the benefits that comes with the pre-order model, there are a few challenges; PRE-ORDER TYPES - CHALLENGES & SOLUTIONS There are two types of pre-order methods: Pre-order & Pay Now! and Pre-order & Pay Later! Knowing which pre-order type is more suitable, depends on a number of factors; - How many and how well you know your customers. - How confident you are in your product. - Strength of relationship with your manufacturers, suppliers and delivery partners. - Stage of production and when the product is expected to be ready. - How well cash-flow is managed. PRE-ORDER & PAY NOW! Most brands collect pre-order payments in advance from customers, this method is commonly used to cover the cost of production and operations. But there are a few challenges to this method: Customers are not necessarily excited about paying upfront for pre-orders, especially when delivery estimates and expectations are not communicated clearly, or if the brand is new in the market. Delays beyond the estimated delivery window forces customers to cancel their pre-orders, ask for refunds and dispute chargebacks. Issuing refunds can take up to 10 days to be credited back to the customer, which causes more friction and increased pressure on customer-service resources. Mismanaged pre-order campaigns can have a long-lasting negative impact on the brand’s reputation. Refunds should be avoided to maintain a healthy record with the payment processor and online store solution provider. 
E-commerce solutions typically provide a payment-capture setting that allows brands to capture payments instantly, or to capture payments manually later (authorizations). There are several benefits to capturing payments manually later, such as preventing fraud, handling stock that isn’t available yet, and avoiding refunds. The problem, however, is that authorizations typically expire within 5 to 10 days for most payment providers, and beyond that period the hold on those payments is released. A 5-10 day period isn’t enough time to effectively run a pre-order campaign and stay in the safe zone at the same time. Which brings us to the Pre-order & Pay Later method. PRE-ORDER & PAY LATER! The Pre-order & Pay Later (or Reserve & Pay Later) type is geared towards solving the problem of customers’ hesitation to pay upfront – in other words, it is an alternative to Pre-order & Pay Now. This method allows customers to reserve a product and pay later, when the product is ready. It amplifies customers’ confidence and trust and eliminates friction. And if for any reason the campaign is cancelled before payments are collected, no harm is done and no refunds need to be issued. Customers generally prefer to place a product reservation first and pay later, when the product is ready for immediate delivery. However, there are a few challenges to this method: The 5-to-10-day authorization period allowed by payment providers isn’t enough to effectively run pre-order campaigns. To overcome this, you may want to consider the Split Crowdfunding or Split Pre-order method. Between the time of pre-orders and the time of collecting payments, brands must provide a good incentive for their customers to increase conversion; this incentive can be a combination of a limited payment window, a limited number of products, a good discount, a special offer, or free shipping. To accurately assess the demand for this type of pre-order, emails must be collected as reservations. Customers need to clearly see how pre-orders work on the pre-order campaign page. Engage with customers and add value; keep them excited and informed for the duration of your campaign; offer them relevant good-to-know information and sustainability insights, and even go further by providing them with relevant gamified experiences. AMPLIFY PRE-ORDER ENGAGEMENT WITH CUSTOMERS What can brands do?
- Share your true story with your customers: tell them about the why behind your design, explain how pre-orders help with sustainability, and make them feel that they are part of your journey and that their contribution to the environment has value.
- Create a unique, well-structured pre-order page with eloquent language and beautiful photography that clearly communicates product features.
- Clearly show an estimated delivery window, and make sure it’s as accurate and transparent as possible.
- Create urgency by running your campaign for a limited time, or limit the number of pre-orders.
- Add value to your pre-orders by offering a special deal price for early-bird customers. Offer a special gift, or a discount code that they can use on your website.
- Add a prominent Pre-order button instead of Add to cart.
- Engage with your customers: chat with them online when they’re on the pre-order page, ask them about their preferences, and respond to their expectations.
- Create buzz through influencers; offer them samples to help build awareness and excitement.
- Drive traffic through social media to your pre-order campaign and keep the momentum going.
- Only charge customers when stock is ready for delivery. SPLIT CROWDFUNDING - SPLIT PRE-ORDER The goal of Split Pre-order or Split Crowdfunding model is to extend the duration of a pre-order campaign and to overcome the limited 5-10 days authorization period. A split crowdfunding model, splits the campaign into two phases, the Commits phase and the Pre-order phase. During the commits phase, emails are collected and during the pre-order phase pre-orders are collected. The Commits phase duration can therefore be extended. The Split crowdfunding process (Two-phase crowdfunding) helps you answer the following questions and concerns; - Is my new product design or concept desired? - Is my product fit for this market? - Is there enough demand? How much? - Who is my true audience? - How can I improve my product? Many brands feel that part of their commitment to reduce the impact of textile waste on the environment is to recycle products they produce by inviting customers to send products they wish to recycle back to the brand. As an incentive, customers will receive a discount on their next purchase. Generally, if the product returned for recycling is in good condition, it will be donated to local charities, otherwise, it will be donated to textile recycling hubs. Some brands accept category-specific or material-specific recyclable items regardless where they come from. In return, customers will receive a discount code that they can use to buy recycled products. It just makes sense!
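Returning to the payment-capture constraint discussed above (authorizations that typically lapse after roughly 5-10 days), the sketch below models a split campaign as two phases: an open-ended commits phase that only collects emails, followed by a short pre-order phase in which authorizations are taken and captured before the provider's hold window expires. The phase names, the 7-day hold constant and the helper methods are illustrative assumptions, not any specific payment provider's API.

```python
from datetime import datetime, timedelta
from enum import Enum

class Phase(Enum):
    COMMITS = "commits"    # collect emails / reservations, no payment held
    PREORDER = "preorder"  # take authorizations, capture before they lapse
    CLOSED = "closed"

# Assumed hold window; real values vary by payment provider (often ~5-10 days).
AUTH_HOLD_DAYS = 7

class SplitCampaign:
    def __init__(self, preorder_days: int = 5):
        if preorder_days >= AUTH_HOLD_DAYS:
            raise ValueError("Pre-order phase must finish inside the authorization hold window")
        self.phase = Phase.COMMITS
        self.commits: list[str] = []          # emails collected during the commits phase
        self.authorizations: list[dict] = []  # pending (uncaptured) authorizations
        self.preorder_days = preorder_days

    def add_commit(self, email: str) -> None:
        assert self.phase is Phase.COMMITS
        self.commits.append(email)

    def open_preorders(self, now: datetime) -> None:
        """Switch to the short pre-order phase once demand looks sufficient."""
        self.phase = Phase.PREORDER
        self.capture_deadline = now + timedelta(days=self.preorder_days)

    def authorize(self, email: str, amount_cents: int, now: datetime) -> None:
        assert self.phase is Phase.PREORDER and now <= self.capture_deadline
        self.authorizations.append({"email": email, "amount": amount_cents, "at": now})

    def capture_all(self, now: datetime) -> int:
        """Capture every authorization that is still within the hold window, then close."""
        captured = [a for a in self.authorizations
                    if now - a["at"] <= timedelta(days=AUTH_HOLD_DAYS)]
        self.phase = Phase.CLOSED
        return sum(a["amount"] for a in captured)
```

The design choice the sketch illustrates is simply that the long, uncertain part of the campaign (gauging interest) holds no money at all, while the part that does hold money is deliberately kept shorter than the provider's authorization window.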
By BOB THIBAULT Ever since Christopher Columbus bumped into San Salvador on Columbus Day in 1492, deep-sea navigational tools have been capable of guiding ships all the way across the ocean. But these tools were nearly useless, especially in the early days, along coastlines like that of New Jersey. When sailing within about 20 miles of land, a ship’s master had to rely on his own eyes, aided by a simple compass and prior charts of landmarks, shoals, water depths, and tide directions. Imagine trying to juggle all this in real time in the face of a sail-shredding, 80 mph nor’easter with zero visibility in a driving snowstorm. No wonder there were wrecks. Ludlam’s Island had its fair share. In the 18th and 19th centuries, thousands of ships skirted the island from Townsend’s Inlet to Corson’s Inlet. Some didn’t make it. It wasn’t until the mid-1850s, when the Townsend’s Inlet Life Saving Station came into being, that any kind of systematic record was kept. Before that, there simply weren’t a lot of people around to observe, let alone record, the shipwrecks that must have occurred regularly. Documentation has survived on only a handful of wrecks from those very early days, and some of that is pretty sketchy. We don’t have images of the specific vessels involved, but we can get a fair idea from paintings and models of contemporary ships with similar characteristics. This is the story of those wrecks, some nearly forgotten – noted by just a single line in a database. Two hundred and forty years ago, the American privateer Fame, under command of a man named William Treen, capsized in a heavy gale off the Jersey coast. Fame was a two-masted, privately owned brig, originally commissioned by the U.S. Government to attack and seize British ships during the Revolutionary War. On February 22, 1781, heavily laden with plunder, she sank to the bottom of the sea, taking 20 mariners with her. (1) The story of the Fame may not have ended with her demise in 1781. She was originally reported to have sunk off Peck’s Beach (now Ocean City), but evidence has since surfaced that places the event farther south. On July 8, 2012, it was reported that after a dredging operation dumped sand from Corson’s Inlet onto Strathmere Beach, Spanish reales and doubloons began to pop up on the beach – with none dated later than 1781. The author concluded that the source of this treasure was the Fame, and that she indeed had gone down in Corson’s Inlet. (2). If so, this was probably the first documented shipwreck in the history of Ludlam’s Island. The ship Minerva was stranded and left derelict in March 1791, on the shore of what was to become Sea Isle City. She was carrying a cargo of spices, tea, coffee and silk, which implies she was sailing from the Orient, probably headed for New York City. The fate of Minerva’s crew and her cargo is unknown. (1) The Montezuma, a Spanish galleon on its way from Mexico to Spain laden with a million dollars in gold and silver, sank during a storm a few miles off the coast of Townsend’s Inlet in 1797. After a century of searching, the sunken ship and its cargo still hadn’t been found. (3) Since then, there appears to be no record. It’s odd that there seems to be just a single reference to this potentially very profitable mishap. The following story was pieced together from several sources. (1,3,4,5,6,7). The Guatamazon (or “Guatamoozin” as she was known to the locals) was a three-masted, full-rigged ship flying the British flag. 
Such a ship was usually the largest and most prestigious in the complement of any global maritime fleet. The story of the Guatamazon is well-documented. On a day in February 1809, this imposing ship, bound from Canton, China, to New York City with a cargo of tea and silks, ran into trouble near Townsend’s Inlet. She had been driven there by a fierce nor’easter with visibility shrouded by a swirling snowstorm. She was finally grounded on the south side of the inlet. A landing party went ashore and was greeted by – pretty much nothing. They had no idea where they were, and began wandering around in two feet of snow, lost among the cedar trees and sand dunes of Seven Mile Island. Then a miracle happened. Three men from the mainland – Humphrey Swain, Nathaniel Stites, and Zebulon Stites – happened to be in the same desolate neighborhood trying to bag a few ducks, when they noticed footprints in the snow that weren’t theirs. They followed the prints and eventually found the ship’s party shivering in the lee of a sand dune. The alarm went out. Farmers from the mainland rushed to see the great ship, to help, and most likely to look over the array of silks that had washed to the beach. The scene was later described as “a frenzy.” Most of the damaged silks, and even the tea, eventually made it to New York, but the ship’s loss was estimated at more than $50,000. In large part through the efforts of the three duck hunters, the entire crew was eventually rescued, warmed, and fed in the hunters’ makeshift hut. Then the Guatamazon lifted herself on a giant wave, came down with a thud, and literally went to pieces. The ship Maria was stranded in Townsend’s Inlet in March 1820, on a voyage from Matanas, Cuba to an unknown port. (1) Her ultimate fate is unknown. Hunter paid a surprise visit to Corson’s Inlet on the last day of August 1824. She was a wooden merchantman, carrying goods from Harve, France to Philadelphia, when she wandered into the inlet and was stranded there. The master, a Mr. Martin, and his crew were all saved. The cargo was salvaged, but it appears the ship was a total loss. (1,8) Thetis reigned in Greek mythology as a goddess of water, a sea nymph, and the mother of Achilles. So it’s not surprising that scores of sailing ships were named after her. It’s folly to try to sort them all out, so the only reference used here is that which specifically ties a ship named Thetis to Ludlam’s Island. The basic facts as reported are these: Thetis was rigged as a brig, with Stockholm, Sweden, listed as her home port. On February 26, 1836, she ran afoul of Townsend’s inlet to the extent that she was “wrecked” and became a total loss. Twelve survived out of a total crew of 16. (1) Like the Thetis, the Morosco was a sailing brig and, like the Thetis, she strayed into Townsend’s Inlet to her destruction. On January 15, 1842, on the way to New York City, the Morosco was grounded in the inlet, bilged, and was lost. (1) There’s no record of what happened to her crew or cargo. Marietta Ryan (1846): By the middle of the 19th century, coastal trade along the eastern seaboard had become dominated by fast, maneuverable schooners. Although their capacity was more limited, they could skirt the shoreline and duck in and out of port more easily than their larger full-rigged counterparts. Ludlam Island’s first recorded shipwreck of a schooner was that of the Marietta Ryan, sailing from New Bern, North Carolina, to New York City with a cargo of naval stores. 
She ran afoul of the island on January 14, 1846, and for whatever reason, became a total loss. (1) Again, there’s no record of what happened to her crew, or to her cargo. The Eudora was, and probably always will be, the largest passenger vessel to be stranded off Ludlam’s Island. She was a 155-foot long, propeller-driven steamship with a total manifest of about 400 passengers and crew. The event went something like this (1,9,10,11): On a day in mid-November 1849, the Eudora embarked from New York City, bound for San Francisco, loaded with passengers eager to join in the mad rush for California gold. Only two days out from port, fighting a nor’easter, she somehow managed to ground herself off the coast of Ludlam’s Island. Getting all the passengers and their luggage off the ship was going to be tough. But the U.S. Government came to the rescue – in the form of metallic surf boats, several of which had been placed along the shore in readiness to aid unexpected drop-ins like the Eudora. Guided by R.C. Holmes, Collector of Customs, all on board were safely brought to shore, along with their luggage and the ship’s cargo. The record of what happened after that is confusing. The passengers were apparently returned to New York. It’s not clear if they ever got to dig for gold. It’s intimated that the ship herself made it to San Francisco, but if so, what did she carry? Did she really break free from the island, did she return north for repairs, or was she abandoned as a derelict? The Eudora was the last shipwreck recorded before 1850. Even though there were apparently only ten whose identities have survived, these early records give us an idea of the havoc that Ludlam’s Island, or any shoreline, could visit on the unwary, unsavvy, or just plain unlucky. This “Spotlight on History” was written by Sea Isle City Historical Society Volunteer Bob Thibault. - (1) “Shipwreck Data Base.” New Jersey Maritime Museum, Beach Haven, N.J. 28 June 2021 (Note: This is easily the best starting point for any research into New Jersey shipwrecks.) - (2) “Spanish gold and silver on N.J. beach – from 1781 shipwreck,” treasurenet.com, 8 July 2012 - (3) “Sunken Treasure Ships,” The New York Journal, 6 April 1896 - (4) Stevens, L. T., “The History of Cape May County, New Jersey.” Cape May County, N.J., 1897 - (5) Rice, A. G., “The Lost Guatamoozin.” A transcribed letter dated March 1897 reporting the recollections of an unnamed lady who was eight years old at the time of the wreck. (Courtesy of the Sea Isle City Historical Museum) - (6) Downey, L. W., “Broken Spars – New Jersey Shipwrecks 1640-1935.” Brick Township Historical Society, 1983 - (7) U.S. Congressional Series Set, Vol. 149, 1826-27 - (8) Marx, R. F., “Shipwrecks in the Americas.” Dover Publications, New York, 1987 edition - (9) The Evansville Daily Journal, 23 November 1849 - (10) “The Industrial Revolution in Fall River,” Chapter 4 (greenfutures.org) - (11) Holmes, R. C., Two letters describing the use of surf boats in the Eudora rescue (courtesy of the Sea Isle City Historical Museum) To enjoy a collection of photos, literature and artifacts, visit the Sea Isle City Historical Museum at 48th Street and Central Avenue. Access the website at www.seaislemuseum.com or call 609-263-2992. Current hours are 10 a.m. to 3 p.m. on Monday, Tuesday and Thursday, and 1 p.m. to 3 p.m. on Friday. Admission is free.
In terms of its effect on a person’s attractiveness, hardly anything else can compare with a beautiful smile. A radiantly white smile means, above all else, white teeth. However, white teeth do not necessarily mean healthy teeth. So let’s try to find out how to whiten teeth without damaging them, and which whitening system is better. Throughout history, people have searched for a means to whiten their teeth; the practice can be traced back over two thousand years. To improve results, Slavic healers would first grind away the surface layer of enamel with a metal file before bleaching, and then apply a weak solution of nitric acid. This method was used until the end of the 18th century. Particularly great interest in the aesthetics of the smile and teeth emerged at the beginning of the 19th century. The most effective whitening method of the time used chlorine as the active element, derived from a solution of calcium hypochlorite and acetic acid. It should be clear to everyone that these whitening methods caused irreparable damage to teeth. But, as they say, “beauty requires sacrifice.” Fortunately, in today’s world it is possible to achieve beautiful teeth without damaging them. By 1910, virtually all tooth whitening techniques involved the use of hydrogen peroxide combined with heated instruments or light treatment. In 1918, the principle of activating the whitening agent with heat radiation was discovered: high-intensity light causes a rapid temperature rise in the hydrogen peroxide solution, resulting in a rapid acceleration of the chemical whitening processes. This principle has been improved upon in many ways and is still the basis of today’s teeth whitening systems. In the twenty-first century, we can safely say that safe teeth whitening exists – but we’ll come back to this later. There are two types of teeth whitening: - Home whitening (done independently at home); - Office whitening (clinical, performed by a dentist in the clinic). Generally speaking, the underlying mechanism is the same for home and office whitening. Whitening works by treating the pigment inside the tooth with a bleaching gel, usually containing hydrogen peroxide or carbamide peroxide. Hydrogen peroxide and carbamide peroxide act as powerful oxidizing agents, breaking down into water, oxygen, and free radicals (atomic oxygen). Hydrogen peroxide releases a significant portion of its reactive oxygen species 30-60 minutes after application. Carbamide (urea) peroxide releases about 50% of its active substances in the first 2 to 4 hours, and the other 50% within the next 2 to 6 hours. This decomposition time is the main difference between the two substances. The free radicals that are released act on the tooth pigment, which is located in the dentin. The pigment is yellow or gray because of its carbon double bonds, and this is the color it imparts to the teeth. When oxygen atoms released from hydrogen peroxide or carbamide peroxide attack the pigment molecules, the colored double bonds are replaced by colorless single bonds. As a result, the tooth loses its yellow hue and becomes whiter. This is the mechanism behind all of today’s whitening methods; the reactions are sketched below. Please note: only your own teeth are whitened. Fillings, veneers and crowns have a completely different structure, and whitening technologies do not work on them.
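A rough sketch of that chemistry, as a simplified illustration only (the actual oxidation products in dentin are more varied than the single generic step shown here):

```latex
% Simplified whitening chemistry (illustrative sketch, not a full mechanism)
\begin{align*}
\mathrm{CO(NH_2)_2 \cdot H_2O_2} &\longrightarrow \mathrm{CO(NH_2)_2} + \mathrm{H_2O_2}
  && \text{carbamide peroxide dissociates into urea and hydrogen peroxide}\\
\mathrm{H_2O_2} &\longrightarrow \mathrm{H_2O} + [\mathrm{O}]
  && \text{hydrogen peroxide yields water and reactive atomic oxygen}\\
\text{pigment (C=C, colored)} + [\mathrm{O}] &\longrightarrow \text{oxidized pigment (C--C, colorless)}
  && \text{double bonds become single bonds; the yellow hue is lost}
\end{align*}
```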
Contraindications to any type of whitening: there are a large number of contraindications to the whitening procedure, for example:
- Allergies to hydrogen peroxide or other components of whitening systems;
- Dental diseases (caries, gingivitis, periodontitis – these should be treated first);
- Abnormal tooth abrasion or deep cracks in the enamel;
- Certain mental illnesses;
- Taking drugs that increase light sensitivity (certain antibiotics, systemic retinoids, etc.);
- Pregnancy and breast-feeding;
- Age under 18, due to physiological peculiarities.
If you type a query about tooth whitening into a search engine, a large number of tools and options will be offered. I strongly advise against reverting to folk methods – powders, charcoal, soda, acid, and the like. Take care of your teeth; folk remedies have other uses. Among the products on the market, choose only those that are certified, tested and proven. A great way to whiten teeth is to visit the dentist! But to understand the options, let’s consider in detail the most current and popular ways of whitening teeth. Home tooth whitening. There is an opinion that you can whiten your teeth on your own. However, it is important to remember that before using any whitening system you should consult a professional, because only a doctor can properly assess the condition of the teeth and determine whether there are contraindications to the procedure. Uncontrolled use can seriously damage tooth enamel. Home whitening uses various whitening strips, pencils and gel-filled trays. But the whitening gels, strips and other products that are commercially available contain a small percentage of whitening agents and have modest, short-lived whitening activity. They are applied to the surface of the teeth for the suggested time and, if necessary, the procedure is repeated several times to achieve a good result. All such home whitening options have one main advantage – they are easy and convenient to use – but it should be understood that, in general, the whitening is no more than 2-3 shades. At the same time, they have a rather serious disadvantage: they are uncontrolled. The least effective of this variety are pencils and strips. Not all systems use universal (one-size-fits-all) trays, and those that do are designed for people with a very straight set of teeth, of whom there are actually very few. Also, if you use a universal tray or whitening strip on crowded teeth or other problems with tooth positioning, there is an increased likelihood that the peroxide will reach the gum and cause a burn. Therefore, when choosing a system for home teeth whitening, it is better to give preference to professional systems, i.e. those that use custom-made trays. Custom-made trays are produced to order from an impression of the jaw and are suitable for repeated use. They precisely follow the shape of the tooth row, giving good contact between the whitening gel and the tooth surface. Because of this, there is less chance of the gel reaching the mucosa of the gums and mouth, reducing the risk of burning the gums. The tray and a low-concentration whitening gel can be applied by the patient independently. How long the tray is worn depends on the concentration of the whitening agent and on the result to be achieved.
The product is chosen by the doctor based on the patient’s particular situation – the natural sensitivity of the teeth, the presence of gum disease, the quality of the hard tissues, and so on. The effect of home whitening lasts a long time; everything depends on the stability of the color, and the dentist may recommend a short maintenance course once every six months. The trays can be worn either during the day or at night. The first signs of lightening appear on the fourth or fifth day and increase in intensity by the end of the course. The complete Philips Zoom home whitening systems, DayWhite and NiteWhite, and Opalescence PF gels are based on the use of custom-made trays. Negative effects of home whitening systems generally occur only if the patient violates the recommendations and indications. For example, putting too much gel in a tray causes it to be squeezed out when the tray is placed on the teeth: the gel gets on the gums and, as a result, burns the mucosa. Or the patient, hoping to increase the whitening effect, wears the tray for longer than prescribed. It is always necessary to follow the recommendations precisely. It must not be forgotten that patients with very sensitive teeth will experience a marked increase in sensitivity when whitening. Home bleaching as the main treatment only makes sense if the patient’s teeth are light to begin with; in the case of yellow teeth, the result will be very slight, virtually zero. Peculiarities of home whitening: home whitening demands a lot of time – from 2 weeks to a couple of months of constant application; the concentration of gel in home bleaching is 2-3 times lower than in office procedures; and home whitening involves self-application of the gel, wearing the tray, and then removing the remaining active ingredient, all on an ongoing basis. I recommend home whitening in two roles: to consolidate the result as part of comprehensive professional whitening, and as maintenance 4 to 6 months after office whitening. For the whitening procedure to proceed without any damage to your health, I strongly recommend leaning towards professional methods of whitening, which are supervised by a dentist. What is the best way to whiten your teeth? According to dentists, a whitening system is judged on the following parameters: safety, effectiveness, and speed. To understand which whitening is best, let’s compare the pros and cons of the different office whitening methods. If the main criterion is effectiveness, then office whitening (any of them) is superior to home whitening. Professional (Clinical) Teeth Whitening. All clinical whitening procedures follow the same principle: whitening is achieved by applying a gel to the surface of the teeth, the main element of which is hydrogen peroxide. Professional office whitening comes in four types: chemical whitening (the doctor applies a gel to the teeth); thermal photobleaching (the gel is applied and activated by a warm-light lamp that heats it); cold-light photo whitening (the gel is activated by a cold diode light, with no heating); and laser whitening (the gel is activated by a laser beam). Note: activation is the process of accelerating the chemical reaction of the gel through an external influence: light, temperature, or other reagents. Chemical method of bleaching.
In chemical bleaching, the gel is activated by mixing it with a chemical catalyst, and no external activator is needed. It is the simplest and cheapest procedure for achieving a lighter tooth color than nature intended. The Opalescence Boost (Xtra) whitening system is considered the most popular: it is a highly concentrated gel with 35% hydrogen peroxide. The working principle is simple: the activated gel is applied to the teeth, rinsed off after a certain period of time, and the procedure is repeated. Teeth are whitened by 5-10 shades in one session. The whitening is practically harmless, because there is essentially no risk of overheating the pulp. So what’s the downside? The disadvantage is that, to achieve an excellent result, a high gel concentration and a longer exposure are needed, which contributes to drying of the teeth and a longer period of sensitivity after the procedure. However, there are also advantages – the absence of tooth overheating, accessibility and price. That is why the Opalescence whitening system still holds its own. Photo whitening (warm light activation). Philips Zoom WhiteSpeed Professional is a globally popular technology and professional teeth whitening system, invented in America. The essence of the method is as follows: it uses a special whitening gel, which is activated under the influence of ultraviolet light. At first, this method had big disadvantages, but the system gradually improved and became safer. Zoom3 has conquered the world, earning a reputation as safe and successful. With this whitening method, a UV lamp is used to activate the bleaching gel for better peroxide penetration into the dentin, giving a stronger whitening effect. Ultraviolet light acts on the bleaching gel, which contains up to 35% hydrogen peroxide, and activates the oxidizing agents in it; however, the surface of the tooth can be heated, so tooth sensitivity can increase significantly after the procedure. The advantages of this type of whitening: teeth are whitened by 8-10 shades, the result is obtained in one session, and the procedure is virtually harmless. The disadvantages are the same as with chemical whitening: the period of tooth sensitivity is prolonged – in the case of Zoom3, because of heating of the teeth. The system already includes a gel to relieve this unpleasant symptom. Its accessibility and price keep Zoom3 in use. This kind of whitening is performed successfully by dentists and is popular among patients. However, when asked which whitening system professionals consider best, I very rarely suggest these types to my own patients, owing to the emergence of more modern techniques. If for some reason the newer options are not available, do not worry: the techniques discussed will still be better than continuous, uncontrolled home whitening. If the work is done by an experienced doctor, the result will be complete, without bad consequences or discomfort. Advanced technology – the highest efficiency. Photobleaching (cold light activation). LED, or cold light, is a modern way to activate the whitening gel. LED light does not heat the surface of the tooth and does not dry it out. This minimizes the risk of sensitivity and significantly improves tolerance of the procedure. The fourth-generation Philips Zoom whitening lamp is equipped with just such an LED light source.
Because it emits LED light, the system is called cold whitening. ZOOM 4 is the latest generation of photo teeth whitening, combining the utmost gentleness to the teeth with greater effectiveness. The gel contains a lower concentration of hydrogen peroxide (about 25%), which becomes active after exposure to light yet works with very high efficiency, so the exposure time on the teeth is much shorter than with previous systems. The second component is alkaline; its function is to stop acid from forming and prevent destruction of the enamel. The last step is treatment of the teeth with the remineralising gel Relief, containing amorphous calcium phosphate, which enriches the enamel with calcium, fills the dentinal tubules, and saturates the hard tissues of the teeth with minerals. This gel restores the enamel and reduces overall tooth sensitivity. The entire procedure is painless and psychologically comfortable. The results of Zoom whitening are long-lasting and depend on following the recommendations. The only real disadvantage is the price of the procedure. ZOOM 4 allows the enamel colour to be changed by 8 to 12 shades in a single session; it is equipped with a cold LED light source, eliminating overheating of the deep layers of the tooth and the gums; it is suitable for patients with tooth sensitivity; the technology does not thin or damage the enamel; the exposure time on the teeth, and thus the entire procedure, is greatly reduced; and the result lasts up to 3 years. Laser whitening. Dr. Smile laser teeth whitening is a method of professional teeth whitening based on the use of a laser in combination with a special gel. The gel contains 30% hydrogen peroxide and a catalyst for the laser beam. The gel absorbs the laser light and breaks down into particles that whiten the enamel of the teeth. Just one to two minutes of laser exposure on a tooth coated with whitening gel is enough to produce a positive effect, even in very difficult cases. There is a view that the effect of laser whitening is more pronounced because of the laser's ability to break down pigments. Another undeniable advantage of the method is that the bleaching process is more controlled, which reduces the risk of pulp overheating and other possible complications. Dr. Smile is a computerised system with integrated software: the computer holds programs for all procedures, together with clinical information and parameters. This makes the whitening safe and compliant with the protocol. In the final stage, the teeth are coated with a gel (to reduce sensitivity), which is also activated by the laser. To date, we can honestly say that laser teeth whitening is a method that does not damage the enamel and provides long-term, stable results. Among the disadvantages of laser whitening, one can only mention its high cost. The advantages of laser whitening: the procedure is indicated for people with high sensitivity, as there is no heating of the tooth tissue; all kinds of pigmented deposits are removed; and it is possible to whiten teeth by 8-12 shades in one session.
Dental enamel is not damaged by the laser radiation; the laser radiation has bactericidal qualities; the process can be controlled precisely by choosing the intensity and duration of the treatment; the time of action on the teeth, and thus the procedure as a whole, is reduced; and the result is long-lasting, up to 3 years. Even though the exposure parameters can be chosen very precisely, the result is still individual. If a good result is not obtained after the first session, another treatment is possible. An excellent result lasts for years, but this too is quite personal. The effect will last as long as possible, however, only if certain rules are observed: give up foods containing colouring substances (or at least reduce their consumption); ideally give up smoking altogether; use professional toothpastes with a light whitening effect for daily oral hygiene; use very good quality toothbrushes, or even replace them with an irrigator; and have your teeth professionally cleaned twice a year. Your teeth will always be whiter than before laser whitening, because there is no going back to the original shade. Which whitening system is effective? When selecting a method of teeth whitening, the doctor relies first of all on medical conditions and contraindications for the procedure, as well as on the individual characteristics and wishes of the patient. All professional methods make it possible to achieve considerable and stable results. Our dental clinic uses modern equipment and the best preparations for tooth whitening. To settle the question of which whitening method to choose, make an appointment for a consultation with the doctor. After a thorough examination of your teeth and mouth, the doctor will recommend the most appropriate tooth whitening plan and agree the cost of treatment with you. Professional teeth whitening is a safe, effective procedure. Modern whitening systems cope effectively with yellowing of the teeth, making it possible to achieve a significant and long-lasting whitening effect. Laser whitening and cold-light photo whitening are considered the most effective. In combination with a supporting course of home whitening using custom-made trays, they give excellent results. The effectiveness and quality of modern whitening systems are time-tested, and they are considered safe when used properly and in consultation with the doctor.
Potty Training Toddlers and Children, boys and girls. Potty training can be difficult. Here is how I, as a psychologist, potty trained my children – four boys and one girl – including one with Down Syndrome and related physical disabilities, such as hypotonia (low muscle tone). Being potty trained is vitally important. Without potty training, boys and girls will miss out on good things, as they are more likely to be shunned and teased, particularly as they get older. The potty training strategy presented here will help toilet train or potty train a boy or girl, with or without a disability, and normally the potty training is very fast. WHEN TO START POTTY TRAINING CHILDREN: BOYS AND GIRLS. Psychologists found that potty training boys and girls can start when the child is aged two years or more. Contrary to some sources that once advocated starting potty training as young as a few months old, psychologists found younger boys and girls kept having so many accidents, in spite of intensive potty training, that potty training was considered to be of little value prior to age two years. If the boy or girl has an intellectual disability, then delaying the first attempt at potty training until the child is 3 or 4 years old is reasonable. As a rough guide, for an IQ around 70 the first attempt at potty training is around age 3 years; for an IQ around 50, 4 years of age might be a good starting time for potty training. You can try potty training before this, but don’t get upset if it fails or doesn’t go very well. POTTY TRAINING INDEX: I suggest you just read your way through all of this page. Here are some links from our toilet timing and training page – for older boys and girls – which you may also find helpful. Potty training does work. Having potty trained our boys and helped with the potty training of my daughter, I didn’t expect to have too much trouble with potty training my last boy, Jacob, even though he has Down Syndrome and hypotonia – low muscle tone. The potty training procedures I used with my other boys are essentially the same as the potty training procedures I used with Jacob – although Jacob had the advantage of having a potty training cartoon thrown in, which he did seem to relate to – so the potty training procedures have worked with all our kids. Remarkably, for a boy with Down Syndrome, potty training was achieved with Jacob when he was about three years old. Potty Timing or Potty Training – what’s the difference? The difference between potty timing and potty training is a simple one. With potty timing, you are trying to teach boys or girls to go to the potty at certain times each day – for example, on waking up, after breakfast, 10am, 11am, and so on. With potty timing, the kid does not have to understand their bodily urge to urinate; they go because it is their time to go. Potty training involves boys and girls going to the potty when they feel the urge to urinate. Kids with Down Syndrome or another disability may learn quickly, like Jacob originally did in about a week or two, but all too often potty training may span several years when a disability is present. Sometimes the kid may be 10 years old before potty timing is learnt, and some boys and girls never become potty trained. Potty Training and the need for consistency. The potty training was lost, however, because Jacob’s daycare centre didn’t support potty training until kids were much older than he was. Instead of potty training, the day care encouraged potty timing.
So, Jacob’s potty training became non-existent for much of the time. We lost the accomplishments of the potty training we had done very quickly. Since losing the potty training, and in spite of prolonged potty training over the next two years by us, his day care, and his kindergarten, the best we achieved was limited potty timing. Nappies alone have cost over $2000 since our first potty training. We are so disappointed the day care centre did not support Jacob’s potty training when we had Jacob potty trained the first time. Potty Training Boys and Girls – the procedure, hands on stuff. Jacob’s potty training – the first time: As a psychologist, I learned about potty training boys and girls, and how to achieve fast potty training, during my university days. I saw no point in dragging the potty training out. The potty training involved me taking him to the toilet with me, so that he could watch and learn. We also watched a cartoon video on potty training, which Jacob watched several times a day (Tommy’s Potty Training – by the Intellectual Disability Services Council, Adelaide, South Australia). There are other potty training videos of course. We encouraged going to the potty by giving plenty of fluid to make sure he had wee to pass – very important for fast potty training. We put a little salt on his food and gave more dryish foods to encourage drinking, so that his frequency of weeing would increase, which, again, is very important for fast potty training. When he urinated on the potty, we gave lots of verbal praise. Kids love praise. No daytime nappies, and we kept him at home, so that potty training was confined to one environment to start with. Using this potty training procedure, Jacob was potty trained within two weeks. We had succeeded with the fast potty training. Why does this potty training procedure require the kid to wee more often? We wanted the potty training to be fast, and the opportunity to potty train is not there if there is no wee there. The more the fluid intake, the more often the wee comes out, and the more opportunity there is for the potty training to take place. Potty Training Boys and Girls – The Enormous Value of Vicarious Learning. Vicarious learning refers to learning things through watching and imitating others. With potty training, vicarious learning takes advantage of the way boys and girls learn and perceive the world at a young age. A young boy or girl, or even an older kid with an intellectual disability, will often believe they were part of something they were watching. With potty training, we take advantage of this unique learning by reading Jacob his potty training book and using his name. For example, the page may contain a drawing of a boy going to the potty, so we say, Jacob’s going to the potty. Another picture shows the boy being given great praise, so we give Jacob the praise for going to the potty when we read it to him – as if he had really gone. This vicarious learning enables us to reinforce potty training behaviour without Jacob having to physically go to the potty. Potty Timing – the procedure, hands on stuff. As pointed out earlier, the potty training was then lost, so, up until he was over five years old, we used potty timing – although potty timing worked at kindergarten and day care in a limited way, we had only very minimal success with potty timing at home. Essentially, with potty timing, you let them know it is time to go to the potty, you take them to the potty, get them to go, and then praise them.
The praise helps to reinforce the potty timing cooperation from the kid. In time, the kid may pick up potty training from being potty timed. Potty Timing and Potty Training Boys and Girls – the complementary procedures. With potty timing and potty training boys and girls, you also build up the other behaviours involved around the potty. Examples of the other potty training and potty timing tasks include pulling pants and underpants up and down and, if a boy, teaching them to keep the penis pointed down in the underpants, which minimises the effect of a wee accident. Washing hands. Drying hands. Potty Timing and Potty Training – the value of task analysis. A procedure which can help identify the steps in potty training and potty timing is called task analysis. With task analysis, you break up a task into several smaller tasks. For example, with potty training and potty timing, hand washing is important. Hand washing involves going to the basin, turning the tap on, rinsing hands, soaping hands, rubbing the palms, the backs of the hands, and in between the fingers, rinsing hands until the soap is off, and turning the tap off. Generally speaking, the more difficult the boy or girl finds the potty training or potty timing tasks to be, the more the tasks have to be analysed into smaller steps. Potty Timing and Potty Training Boys and Girls – some information on potty chairs and so on. Some potty training and potty timing information I need to include: Jacob initially started out in potty training with a potty on the floor when he was little. As he grew, the potty training required a larger potty – which was still on the floor. Some people find potties that buzz or play sounds when urinated in to be helpful – we never used them. More Potty Training Suggestions to help you along. Jennifer wrote this piece on potty training and asked for it to be added to our potty training and timing page. It’s good, down-to-earth potty training information, so well worth a read. I just read your page on potty training and the woman whose daughter wouldn’t pee in the potty. We had a sort of similar problem with potty training. My daughter got to the stage where she was often dry in her nappies, and would pee the moment I took them off, usually leading to floods on the changing mat. My mother suggested the obvious remedy: whip off the nappy and dump her on the potty. We carried on like this for a long, long time until she was dry all day for 3 days in a row. Then I put her in cotton training pants, with cotton leggings over them. The training pants absorb most but not all of the urine, so you get wet leggings but no wet floors (unless she is sitting at the time). The moment she was in these training pants she stopped going in the potty and started weeing in her pants. I think she enjoyed the attention of being changed when she was wet. I could not do any positive reinforcement on the potty because she just would not use it. She could sit there for ages, then pee in her pants within seconds of being away from it. I finally got her to use the potty by putting her feet in a pan of water. This was based on the observation that she often peed in the bath. That worked a treat, and after a few goes with the pan of water she started peeing in the potty without having her footbath. I’m told that for really stubborn children splashing water on their genitals can get them going.
[From Donald: In my country, Australia, this would likely be considered an act of child sexual abuse and/or physical abuse – please check with your local child protection authority before using this suggestion.] We got a musical potty that played tunes when she went; this was great for positive reinforcement, as you can make a fuss the moment the child wets the potty without having to sit there staring fixedly between their legs and waiting for something to happen! She still misbehaved quite a bit until one day she went on a bookshop floor where her father was playing with her. He was utterly fed up and made no attempt to hide it. He yelled for me, stormed out of the shop, dumped her unceremoniously on me and told me to take her away and sort it out without him. She absolutely adores her father and was utterly dismayed by this, and howled inconsolably for longer than I’ve ever known her to do before. However, it seemed to do the trick: she went from several accidents a day to maybe one a week, and some weeks none after that. All the received wisdom I’d heard was not to make a fuss about messes and to praise successes, but it seems her father’s disapproval was what really made her try to get the toilet thing right. She still has to be put on the toilet regularly, and very rarely asks, but at least she is going when she is put there, and rarely goes in between times. Whenever she does, in hindsight it’s because we left it too long before putting her on the toilet. So it might be worth passing on the advice about trying water to make a reluctant child go, and once they are going, musical potties are great for telling when it’s happening, but being dumped by a loved one seems to be the ultimate sanction! FROM DONALD: Good suggestions, I think, and well worth considering, but always remember that what works for one kid may not work for another. Potty Training – contact us for more help. I hope all this potty training information is helpful. If you need more potty training information, please email us. After potty training four boys and one girl, I think we should be able to help in the potty training of your boy or girl.
Feeling bad about your size may be sabotaging your attempts to shape up. Ellen Wallwork speaks to the experts about why reaching a healthy weight begins in the mind. Take a second to think about your body. Are you happy in your skin? If you’d like to lose weight, thinking about your figure may be an uncomfortable experience. Being reminded that your waist is wider than you’d like or the number on the scales is higher than you want can sap your confidence and lead to feelings of failure and shame. You may have internalised stereotypes about ‘fat people’ being lazy and lacking in self-control, and so you believe that being overweight is a reflection on your character. But according to a new report from the British Psychological Society (BPS), we must learn to let go of such harsh views because obesity is not simply down to an individual’s lack of willpower. ‘While obesity is caused by behaviour, those behaviours do not always involve “choice” or ‘‘personal responsibility”,’ states the report, which argues that people become obese because of a combination of factors, including genetics, responses to childhood trauma, a lack of available healthy food and sedentary lifestyles. ‘The common view that the cause of obesity resides within an individual has created negative stereotypes that have allowed weight bias and discrimination to go unchallenged,’ write the authors of the report, adding that people living with obesity should not be made to feel ashamed, as feeling guilty for having put on weight can lead to a vicious circle, resulting in further weight gain. ‘Being stigmatised is stressful. It can lead to feelings of distress, shame, guilt and failure,’ the report continues. ‘One way of coping with this stress is to use food to distract, soothe or anaesthetise uncomfortable feelings, but relying on this coping strategy increases food consumption and weight. Evidence has shown that stress results in biological, psychological and social mechanisms that maintain weight gain, including increased appetite.’ Guilt is a no-win game This means feeling ashamed of your weight could actually be making it harder for you to eat less. A study by behavioural scientists at University College London found experiencing stigma from those around you only compounds the problem. So, rather than encouraging people to lose weight, fat shaming was found to lead people to put on more. Dietitian Tracy Kelly explains why this has such an impact. ‘Imagine trying to win at something when it feels like everyone and everything is telling you that you aren’t good enough. When a person hears that enough, they start to believe it,’ she says. ‘Deep down there may be a real lack of belief that losing weight is even possible. Learning to tame your inner critic will help you change this trajectory. You can then rely on why-power rather than willpower and start building new neural pathways to new habits.’ Begin the self-care cycle How can you tame your inner critic and break out of this cycle? Well, for one thing, don’t be fooled into thinking that if obesity is not a choice, then your weight is out of your control. In fact, Jane Ogden, professor in health psychology at the University of Surrey and author of The Psychology of Dieting believes that feeling like a victim of your biology, society, or the food industry can be just as bad for your self-esteem as fat stigma. ‘You are not defined by your body weight,’ Professor Ogden says. ‘You have bodyweight that you can manage and take control of. 
You shouldn’t feel ashamed of your weight, but you should recognise the role of behaviour in your weight gain and then feel empowered that as an individual you can make choices that will help you to self-care in a positive way. ‘If you’re feeling guilty or shameful because you’ve gained weight, you’re not going to feel deserving of self-care so it’s going to be harder to take that positive step to look after yourself.’ To make a change, you need to stop viewing healthy eating and exercise as something you should do to be a better person, and start thinking about it as a form of self-care, which you deserve because you ARE a good person. Step by step to a positive mindset To start tackling negative thoughts that could be contributing to unhealthy behaviour, try these three techniques: 1. COM-B (Capability, Opportunity, Motivation to Behaviour) Angel Chater, one of the authors of the BPS report, suggests using this term to get to the bottom of what is stopping you losing unwanted weight. ‘Ask yourself, what is influencing my behaviour?’ she suggests. ‘Is it my Capability (do I need to learn something new?), or is it the Opportunity around me (my friends/family/where I live or work), or is it my Motivation (do I really want to change my behaviour? Do I think it will make a difference? Am I scared of what will happen? Am I influenced by my habits and emotions?).’ 2. Be a friend to yourself Helen McCarthy, a consultant clinical psychologist who explores the topic of weight loss without dieting in her new book How to Retrain Your Appetite, advises talking to yourself as you would to a friend that you care about. ‘Developing self-compassion can be a powerful part of change,’ she explains. 3. Look in the magic mirror Dietitian Tracy Kelly recommends a technique adapted from Jack Canfield’s book The Success Principles. At the end of the day, stand in front of a mirror, look yourself in the eyes, say your name, then acknowledge all the positive things you did during the day. Nothing is too small. It could be making your bed, how you talked to someone at work, or following through with better habits such as drinking water, making dinner, being kind to yourself. Then tell yourself, ‘I love you’. Do this for 32 days. If you find yourself in bed without having done the routine, get out, switch on the light and have the conversation with yourself. If you miss a night, you need to start again. ‘This exercise may stir up a lot of emotions,’ says Ms Kelly. ‘You may even find yourself crying the first few times you do it. But keep at it. It will help to build self-esteem, confidence and deeper love and care for yourself.’ Need extra help? Just as the critical views of others can make negative thoughts about your weight worse, some positive input could be just what you need to change your thought patterns, says Dr Chater. ‘There is good evidence that speaking to a psychologist can help those who are living with obesity,’ she explains. ‘It is important that conversations are centred on the person, without a sense of judgement. ‘If eating less and moving more was a simple matter, we wouldn’t have the current obesity levels,’ she adds. ‘Psychologists can help to understand the factors that lead to overeating, what is eaten, when and how much. They can help to understand barriers to regular physical activity and what influences sitting too much.’ This view is echoed by Dr McCarthy. 
She believes you’re most likely to find a psychological approach helpful if the reason you’re struggling with your weight is related more to how you eat than what you eat. ‘Something psychologists bring to a person in any sort of distress (eating-related or not) is an understanding that whatever has led to their current problems, the person was doing the best they could to survive and manage what was happening in their lives during the time the unhelpful patterns developed,’ she explains. ‘This means that apparently self-defeating behaviours are understood in terms of how and why they developed, and once this is understood, evidence-based strategies can be used to bring about change.’ Invite your friends in Psychologists aren’t the only ones who can help, however, says Professor Ogden. ‘It’s also really good to talk to your friends, your family and people around you. Find out what works for other people and share the collective skills you’ve all gathered. Peer groups are good, being with other people in similar situations is good. Talking about it, being open about it and sharing your experiences is also good,’ she advises. Seeking the support of others and being kinder to yourself may not seem like strategies that will have as much impact as a punishing gym routine, but challenging negative thoughts about yourself can be the first step towards developing a healthier relationship with food and your body. ‘Little by little, new patterns can form and your brightest, shiniest self can emerge,’ Ms Kelly says. ‘Will this happen overnight? No. It will take consistency to do things even when you don’t feel like it. But when you believe you can, you have someone championing you and you are surrounded by a community of people who want you to succeed, then anything is possible.’ Article sources and references - British Psychological Society (2019) Psychological perspectives on obesity: Addressing policy, practice and research priorities. BPS. Published online 24 September 2019. https://www.bps.org.uk/news-and-policy/psychological-perspectives-obesity-addressing-policy-practice-and-research - Jackson, S. E., Beeken, R. J., & Wardle, J. (2014) Perceived weight discrimination and changes in weight, waist circumference, and weight status. Obesity (Silver Spring, Md.), 22(12), 2485–2488. DOI: 10.1002/oby.20891https://pubmed.ncbi.nlm.nih.gov/25212272/
What is a Sonnet? A sonnet is a kind of lyrical poem in which the poet takes up a persona and expresses a deep emotional state of mind or imaginative quality. The term sonnet is derived from the Italian word ‘sonetto’, which means a little sound or a little song; the related Italian word ‘suono’ means sound. A sonnet is composed of one stanza of fourteen lines written in iambic pentameter. The first eight lines form the octave, and the next six lines form the sestet. Within the eight-line octave, each four-line unit is called a quatrain, and within the six-line sestet, each three-line unit is called a tercet. English sonnets mostly consist of three quatrains and a concluding couplet. The Italian form of the sonnet has an intricate rhyme scheme: the octave rhymes abba abba, and the sestet rhymes cdc cdc, cde cde, and so on. The rhyme scheme abab cdcd efef gg was mostly used in English Elizabethan sonnets. Meter of the Sonnets. Sonnets are written in iambic pentameter, in which an unstressed syllable is followed by a stressed syllable. An iamb is a two-syllable unit, the first unstressed and the second stressed. Each line of iambic pentameter has ten syllables. This meter is commonly used in the English form of the sonnet. A caesura is a clearly marked pause in a sonnet, usually created by punctuation such as a comma or semi-colon. It is used in blank verse, heroic couplets, and other stanza forms. Theme or Tone of Sonnets. The general tone of sonnets is meditative, introspective, or contemplative. These were poems composed to be performed to music, and some scholars have suggested that sonnets should still be performed that way. They are usually love poetry. The major theme is devoted love. The form began in the tradition of courtly love, a knightly and platonic passion or chivalric romance. It also took in other subsidiary themes such as thought, political issues, social issues, meditation, feelings, and much more. Origin of Sonnets. The sonnet is a poetic form that originated in the court of the Holy Roman Emperor Frederick II in Sicily, Italy. Frederick’s court was a place of cultural and literary exchange, and the people of his court wrote poetry in their local languages. The 13th-century poet and notary Giacomo da Lentini is credited with the invention of the sonnet form. He developed the structure of the sonnet and the concept of the octave and sestet, and he was the first to use the Italian sonnet form. Francesco Petrarch was the most influential practitioner of the Italian sonnet form, and hence it also came to be known as the Petrarchan sonnet. Troubadour poetry became popular in the French region of Provence, and troubadours travelled around performing these love lyrics; their influence was felt strongly in southern Italy. Other Italian poets who wrote sonnets were Dante Alighieri and Guido Cavalcanti. The tradition of sonnets started in southern Italy and then moved to northern Italy. The structure of the typical Italian sonnet of the time had two parts: first, the octave forms the ‘proposition’, which describes a problem or a question; second, the sestet proposes a ‘resolution’. Typically, the turn at the ninth line is called the volta; the volta signals the move from proposition to resolution. The sonnet was later taken up in England in the Elizabethan period. Key Features of a Petrarchan sonnet. In Petrarch’s sonnets, you will find that the love of the beloved is unattainable and unrequited, leading to torment in the poet.
In Elizabethan sonnets, the beloved is pictured as a beauty with a virtuous heart, full of goodness. The ‘Petrarchan conceit’ was a metaphorical device in which striking comparisons were made around the desire for and pursuit of the beloved. For example, the pursuit of the beloved was often compared to a hunt, her beauty was treated as a trap, and her beauty was compared to the sun. All of this together constituted an elaborate wordplay. There is also a refined style deriving from the tradition of courtly poetry. The ‘blazon’ is a feature of Petrarch’s sonnets in which the poet creates a catalogue of the different physical features of the beloved, each praised in turn in an exaggerated way. The poet creates an elaborate performance of extreme frustration and denial, as the beloved does not reciprocate. For Petrarch, the resolution of this problem came after the death of the beloved, when she became accessible in the realm of spiritualism and divinity: there is a sublimation of earthly passion, a passion that can be consummated only in heaven. Here, sublimation is a process by which one’s feelings or earthly passion are expressed by being transformed into another form. These features, which defined the strictly Petrarchan form, were also followed in the English sonnet. Some Popular Sonneteers of Italy. 1. Dante Alighieri. Dante’s ‘Vita Nuova’ contains thirty-one sonnets, arranged in the form of a narrative. The ‘Vita Nuova’ develops Dante’s love for a woman named Beatrice. Beatrice is presented as an idealised figure and is put on a pedestal in Dante’s mind. Her qualities are celebrated in the poem, and at the end she is shown as a saintly figure. According to Dante, Beatrice died in the year 1290. In the context of Elizabethan sonnets, Dante’s work was not very influential. 2. Francesco Petrarch. Francesco Petrarch’s ‘Il Canzoniere’ (a book of songs, or lyric poems) has 366 poems, of which 317 are sonnets. These sonnets explore Petrarch’s love for Laura. Laura did exist; Petrarch met her at the church of St. Clare in 1327. Laura died in the year 1348. She is presented as an idealised beloved. The Canzoniere is divided into two parts: one dealing with his experiences before Laura’s death and another dealing with his experiences after it. The first part shows aspects of physical, erotic, fleshly, natural, and earthly love. This is a love that receives no response, which leads to bitter regret. In the second part, it moves into a spiritual or divine realm. A series of moods is depicted, such as joy, pain, grief, disappointment, and wretchedness, with a predominant mood of restlessness. Beatrice and Laura lead their poets towards spiritual transcendence. Famous Elizabethan Sonneteers in English Literature. 1. Thomas Wyatt and Henry Howard. Wyatt and Howard were the first to bring sonnets to England from Italy. Wyatt worked in the royal court of King Henry VIII. Sir Thomas Wyatt and Henry Howard, Earl of Surrey, wrote the first known sonnets in English, and later sonnets were written by John Milton, Thomas Gray, William Wordsworth, John Donne, and Elizabeth Barrett Browning. Wyatt and Howard either translated Petrarchan sonnets or imitated them in their own compositions. Surrey developed the rhyme scheme ABAB CDCD EFEF GG, dividing the sestet into a quatrain and a couplet. Having previously circulated only in manuscript, both poets’ sonnets were first published in Richard Tottel’s ‘Songs and Sonnets’, better known as ‘Tottel’s Miscellany’, in 1557.
In ‘Tottel’s Miscellany’, we find Petrarchan sonnets reaching a wide readership. 2. Sir Philip Sidney. Philip Sidney’s sonnet sequence, ‘Astrophel and Stella’ (1591), started the English vogue for sonnet sequences. Sidney was a nephew of Robert Dudley, Earl of Leicester. The next two decades saw sonnet sequences by William Shakespeare, Edmund Spenser, Michael Drayton, Samuel Daniel, and others. These sonnets were all inspired by the Petrarchan tradition and generally describe the poet’s love for some woman, except for Shakespeare’s sequence of 154 sonnets. ‘Astrophel and Stella’ was the first sonnet sequence in English written in the manner of Petrarch. ‘Stella’ means star, and ‘Astrophel’ means star lover. The narrator is Astrophel, and the sequence traces his unattainable love for Lady Penelope Devereux, the daughter of the Earl of Essex, who was married to Lord Rich in 1581. ‘Astrophel and Stella’ stresses various aspects of the speaker’s love for Stella, which is unrequited. The sequence has various subsidiary themes of earthly love, desire, eroticism, time, art, sexuality, life, and death. The poet focuses on different aspects of his desire, his torment and pain, and on the earthly realm, and his beloved is placed on a pedestal. The poet created two personas to develop a certain theme. The publication of ‘Astrophel and Stella’ created a craze for sonnet writing and sonnet publishing in England in the 1590s. 3. Samuel Daniel. In 1592, Samuel Daniel published a proper edition of his sonnet sequence, known as ‘Delia’. About 28 sonnets written by Daniel had earlier been published alongside Sidney’s poems. They were dedicated to Mary Sidney, Countess of Pembroke. 4. Michael Drayton. In 1594, he came out with a collection of sonnets, ‘Idea’s Mirror’. These sonnets were also dedicated to Mary Sidney, Countess of Pembroke. In 1619, the final edition came out as ‘Idea’, where the beloved is imagined as Idea. Ann Goodere, the daughter of Drayton’s first patron Sir Henry Goodere, is the woman imagined as ‘Idea’. Drayton fell in love with Ann Goodere and remained devoted to her even after her marriage. 5. Edmund Spenser. He was the greatest sonneteer of the reign of Queen Elizabeth. In 1595, Edmund Spenser’s ‘Amoretti’, which means little loves, was published. It has about 89 sonnets with a typical Petrarchan narrative (88, since one sonnet is repeated). After sonnet 60, the beloved, Elizabeth Boyle, is seen to reciprocate the love and return his affections and desires, creating a sense of resolution which finally results in ‘Epithalamion’, where Spenser celebrates the fulfilment of his love in marriage to Elizabeth Boyle. The beloved is not fully idealised; she is to some extent a flesh-and-blood person. We move from a lack of resolution towards celebration. The Spenserian sonnet was a unique Elizabethan sonnet form, somewhat different from the typical Petrarchan story of lost love and lost happiness. Spenser’s sonnets have the rhyme scheme ABAB BCBC CDCD EE, in which the quatrains are interlinked with each other. In Spenser’s case, the resolution occurs in a love relationship and a marriage which can be both earthly and spiritual. 6. William Shakespeare. In 1609, Shakespeare’s sonnets were published by Thomas Thorpe. Shakespearean interpretations are now seen through the lenses of race, gender, queer studies, and psychoanalytic theories that emerged in the 20th and 21st centuries.
Shakespearean sonnets are generally divided into two groups: sonnets 1-126 seem to have as their primary subject a man who is often referred to as the fair youth. The young man is presented in an idealised way, as having fair features and rare beauty; he is a young friend of the speaker. Sonnets 127-154 seem to have as their primary subject a woman who is often referred to as the dark lady. The dark lady is an ambiguous and enigmatic figure; at times she is presented as dangerous or ordinary, with dark features, and not a woman of noble origins. There are 154 sonnets in the collection. The rhyme scheme is ABAB CDCD EFEF GG. The excellence of the sonnets lies in four themes: time, beauty, sexuality, and poetic creation and art. Thus, in many ways, the subject matter of Shakespeare’s sonnets is unconventional, and their artistic arrangement differs from typical Elizabethan sonnet conventions. Shakespeare’s sonnets were published in a quarto edition. A quarto is a kind of publication in which a full sheet of paper is folded twice; it was smaller than a folio (in which the full sheet of paper was folded once) and was often used for publishing poems. The dedication of the text tells us that the only begetter of the ensuing sonnets is Mr W.H., and the publication is signed by T.T., Thomas Thorpe, and not by Shakespeare. According to the critics, Mr W.H. may be William Herbert, Earl of Pembroke, or Henry Wriothesley, Earl of Southampton. In the sonnet sequence, the main theme is time and attaining immortality through procreation. The first 17 sonnets are the procreation sonnets, in which the speaker urges the fair youth to procreate and so preserve himself and his beauty for posterity through his offspring. The term ‘begetter’ is used in the sense of giving birth; at the same time, it can also mean an inspirer of the sonnets, or one who helped the printer by procuring the manuscripts and assisting with printing. Shakespeare presents the model of idealised beauty in the fair youth rather than the dark lady. Shakespeare’s presentation of the fair youth is very close to the Elizabethan or Petrarchan notion of the sonnet. The relationship that Shakespeare depicts seems unconventional, and the voice of his sonnet sequence is remarkable. During the Elizabethan Age, many literary forms flourished along with printing and circulation. The sonnet took root in England, and people were encouraged to write sonnets. From Wyatt and Howard to Shakespeare, England produced some of the most talented sonnet writers of all time, and their work became widely accessible to readers. Sonnet writing thus became a popular convention all over the world, and it created a new pathway for poetry in English literature.
A.2.9 What sort of society do anarchists want? Anarchists desire a decentralised society, based on free association. We consider this form of society the best one for maximising the values we have outlined above—liberty, equality and solidarity. Only by a rational decentralisation of power, both structurally and territorially, can individual liberty be fostered and encouraged. The delegation of power into the hands of a minority is an obvious denial of individual liberty and dignity. Rather than taking the management of their own affairs away from people and putting it in the hands of others, anarchists favour organisations which minimise authority, keeping power at the base, in the hands of those who are affected by any decisions reached. Free association is the cornerstone of an anarchist society. Individuals must be free to join together as they see fit, for this is the basis of freedom and human dignity. However, any such free agreement must be based on decentralisation of power; otherwise it will be a sham (as in capitalism), as only equality provides the necessary social context for freedom to grow and development. Therefore anarchists support directly democratic collectives, based on "one person one vote" (for the rationale of direct democracy as the political counterpart of free agreement, see section A.2.11—Why do most anarchists support direct democracy?). We should point out here that an anarchist society does not imply some sort of idyllic state of harmony within which everyone agrees. Far from it! As Luigi Galleani points out, "[d]isagreements and friction will always exist. In fact they are an essential condition of unlimited progress. But once the bloody area of sheer animal competition - the struggle for food - has been eliminated, problems of disagreement could be solved without the slightest threat to the social order and individual liberty." [The End of Anarchism?, p. 28] Anarchism aims to "rouse the spirit of initiative in individuals and in groups." These will "create in their mutual relations a movement and a life based on the principles of free understanding" and recognise that "variety, conflict even, is life and that uniformity is death." [Peter Kropotkin, Anarchism, p. 143] Therefore, an anarchist society will be based upon co-operative conflict as "[c]onflict, per se, is not harmful. . . disagreements exist [and should not be hidden] . . . What makes disagreement destructive is not the fact of conflict itself but the addition of competition." Indeed, "a rigid demand for agreement means that people will effectively be prevented from contributing their wisdom to a group effort." [Alfie Kohn, No Contest: The Case Against Competition, p. 156] It is for this reason that most anarchists reject consensus decision making in large groups (see section A.2.12). So, in an anarchist society associations would be run by mass assemblies of all involved, based upon extensive discussion, debate and co-operative conflict between equals, with purely administrative tasks being handled by elected committees. These committees would be made up of mandated, recallable and temporary delegates who carry out their tasks under the watchful eyes of the assembly which elected them. Thus in an anarchist society, "we'll look after our affairs ourselves and decide what to do about them. And when, to put our ideas into action, there is a need to put someone in charge of a project, we'll tell them to do [it] in such and such a way and no other . . . nothing would be done without our decision. 
So our delegates, instead of people being individuals whom we've given the right to order us about, would be people . . . [with] no authority, only the duty to carry out what everyone involved wanted." [Errico Malatesta, Fra Contadini, p. 34] If the delegates act against their mandate or try to extend their influence or work beyond that already decided by the assembly (i.e. if they start to make policy decisions), they can be instantly recalled and their decisions abolished. In this way, the organisation remains in the hands of the union of individuals who created it. This self-management by the members of a group at the base and the power of recall are essential tenets of any anarchist organisation. The key difference between a statist or hierarchical system and an anarchist community is who wields power. In a parliamentary system, for example, people give power to a group of representatives to make decisions for them for a fixed period of time. Whether they carry out their promises is irrelevant as people cannot recall them till the next election. Power lies at the top and those at the base are expected to obey. Similarly, in the capitalist workplace, power is held by an unelected minority of bosses and managers at the top and the workers are expected to obey. In an anarchist society this relationship is reversed. No one individual or group (elected or unelected) holds power in an anarchist community. Instead decisions are made using direct democratic principles and, when required, the community can elect or appoint delegates to carry out these decisions. There is a clear distinction between policy making (which lies with everyone who is affected) and the co-ordination and administration of any adopted policy (which is the job for delegates). These egalitarian communities, founded by free agreement, also freely associate together in confederations. Such a free confederation would be run from the bottom up, with decisions following from the elemental assemblies upwards. The confederations would be run in the same manner as the collectives. There would be regular local regional, "national" and international conferences in which all important issues and problems affecting the collectives involved would be discussed. In addition, the fundamental, guiding principles and ideas of society would be debated and policy decisions made, put into practice, reviewed, and co-ordinated. The delegates would simply "take their given mandates to the relative meetings and try to harmonise their various needs and desires. The deliberations would always be subject to the control and approval of those who delegated them" and so "there would be no danger than the interest of the people [would] be forgotten." [Malatesta, Op. Cit., p. 36] Action committees would be formed, if required, to co-ordinate and administer the decisions of the assemblies and their congresses, under strict control from below as discussed above. Delegates to such bodies would have a limited tenure and, like the delegates to the congresses, have a fixed mandate—they are not able to make decisions on behalf of the people they are delegates for. In addition, like the delegates to conferences and congresses, they would be subject to instant recall by the assemblies and congresses from which they emerged in the first place. In this way any committees required to co-ordinate join activities would be, to quote Malatesta's words, "always under the direct control of the population" and so express the "decisions taken at popular assemblies." 
[Errico Malatesta: His Life and Ideas, p. 175 and p. 129] Most importantly, the basic community assemblies can overturn any decisions reached by the conferences and withdraw from any confederation. Any compromises that are made by a delegate during negotiations have to go back to a general assembly for ratification. Without that ratification any compromises that are made by a delegate are not binding on the community that has delegated a particular task to a particular individual or committee. In addition, they can call confederal conferences to discuss new developments and to inform action committees about changing wishes and to instruct them on what to do about any developments and ideas. In other words, any delegates required within an anarchist organisation or society are not representatives (as they are in a democratic government). Kropotkin makes the difference clear: "The question of true delegation versus representation can be better understood if one imagines a hundred or two hundred men [and women], who meet each day in their work and share common concerns . . . who have discussed every aspect of the question that concerns them and have reached a decision. They then choose someone and send him [or her] to reach an agreement with other delegates of the same kind. . . The delegate is not authorised to do more than explain to other delegates the considerations that have led his [or her] colleagues to their conclusion. Not being able to impose anything, he [or she] will seek an understanding and will return with a simple proposition which his mandatories can accept or refuse. This is what happens when true delegation comes into being." [Words of a Rebel, p. 132] Unlike in a representative system, power is not delegated into the hands of the few. Rather, any delegate is simply a mouthpiece for the association that elected (or otherwise selected) them in the first place. All delegates and action committees would be mandated and subject to instant recall to ensure they express the wishes of the assemblies they came from rather than their own. In this way government is replaced by anarchy, a network of free associations and communities co-operating as equals based on a system of mandated delegates, instant recall, free agreement and free federation from the bottom up. Only this system would ensure the "free organisation of the people, an organisation from below upwards." This "free federation from below upward" would start with the basic "association" and their federation "first into a commune, then a federation of communes into regions, of regions into nations, and of nations into an international fraternal association." [Michael Bakunin, The Political Philosophy of Bakunin, p. 298] This network of anarchist communities would work on three levels. There would be "independent Communes for the territorial organisation, and of federations of Trade Unions [i.e. workplace associations] for the organisation of men [and women] in accordance with their different functions. . . [and] free combines and societies . . . for the satisfaction of all possible and imaginable needs, economic, sanitary, and educational; for mutual protection, for the propaganda of ideas, for arts, for amusement, and so on." [Peter Kropotkin, Evolution and Environment, p. 79] All would be based on self-management, free association, free federation and self-organisation from the bottom up. 
By organising in this manner, hierarchy is abolished in all aspects of life, because the people at the base of the organisation are in control, not their delegates. Only this form of organisation can replace government (the initiative and empowerment of the few) with anarchy (the initiative and empowerment of all). This form of organisation would exist in all activities which required group work and the co-ordination of many people. It would be, as Bakunin said, the means "to integrate individuals into structures which they could understand and control." [quoted by Cornelius Castoriadis, Political and Social Writings, vol. 2, p. 97] For individual initiatives, the individual involved would manage them. As can be seen, anarchists wish to create a society based upon structures that ensure that no individual or group is able to wield power over others. Free agreement, confederation and the power of recall, fixed mandates and limited tenure are mechanisms by which power is removed from the hands of governments and placed in the hands of those directly affected by the decisions. For a fuller discussion on what an anarchist society would look like see section I. Anarchy, however, is not some distant goal but rather an aspect of current struggles against oppression and exploitation. Means and ends are linked, with direct action generating mass participatory organisations and preparing people to directly manage their own personal and collective interests. This is because anarchists, as we discuss in section I.2.3, see the framework of a free society being based on the organisations created by the oppressed in their struggle against capitalism in the here and now. In this sense, collective struggle creates the organisations as well as the individual attitudes anarchism needs to work. The struggle against oppression is the school of anarchy. It teaches us not only how to be anarchists but also gives us a glimpse of what an anarchist society would be like, what its initial organisational framework could be and the experience of managing our own activities which is required for such a society to work. As such, anarchists try to create the kind of world we want in our current struggles and do not think our ideas are only applicable "after the revolution." Indeed, by applying our principles today we bring anarchy that much nearer.
Today is the birthday (1801) of Gail Borden II, a native New Yorker who settled in Texas in 1829, where he worked as a land surveyor, newspaper publisher, and inventor. He is best known as the developer of a method for condensing milk which he patented in 1853. This gives me the opportunity to talk about both Borden and condensed milk. For starters, condensed milk is somewhat similar to, but not the same as, evaporated milk – as any cook knows. Go here for the history of evaporated milk: https://www.bookofdaystales.com/evaporated-milk/ Condensed milk was developed before evaporated milk because it was easier to manufacture. Its high sugar content is a natural antibacterial and preservative, but it changes the character of the milk. Borden was born in Norwich, New York to Gail Borden Jr. (1777–1863), a pioneer and landowner, and his wife Philadelphia Wheeler (1780–1828), who died at age 48 from yellow fever in Nashville, Tennessee. The details of Borden’s childhood are unclear, but he moved twice with his family while growing up, first to Kennedy’s Ferry, Kentucky (renamed as Covington in 1814), and in 1816 to New London, Indiana. Borden received his only formal schooling in Indiana, attending school during 1816 and 1817 to learn the art of surveying. In 1822, Borden set out with his brother, Thomas. They intended to move to New Orleans, but settled in Amite County, Mississippi. Borden stayed in Liberty for seven years. He worked as the county surveyor and as a schoolteacher in Bates and Zion Hill. He was well known around town for running rather than walking to school every morning. While living in Mississippi, Borden met Penelope Mercer, whom he married in 1828. The couple had six children during their 16-year marriage. Borden and his family left Mississippi in 1829 and moved to Texas, following his brother John Borden. Thomas also settled in Texas. As a surveyor, Borden plotted the towns of Houston and Galveston. He collaborated on drawing the first topographical map of Texas in 1835. In February 1835, Borden and his brother John entered into partnership with Joseph Baker to publish a newspaper. They based their newspaper in San Felipe de Austin, which was centrally located among the colonies in eastern Texas. The first issue of the Telegraph and Texas Register appeared on October 10, 1835, days after the Texas Revolution began. Soon after the newspaper began publishing, John Borden left to join the Texian Army, and his brother Thomas took his place as Borden’s partner. As the Mexican army moved east into the colonies, the Telegraph was soon the only newspaper in Texas still in operation. Their 21st issue was published on March 24. This contained the first list of names of Texans who died at the Battle of the Alamo. On March 27, the Texas Army reached San Felipe, carrying word that the Mexican advance guard was approaching. According to a later editorial in the Telegraph, the publishers were “the last to consent to move.” The Bordens dismantled the printing press and brought it with them as they evacuated with the rear guard on March 30. The Bordens retreated to Harrisburg. On April 14, as they were in the process of printing a new issue, Mexican soldiers arrived and seized the press. The soldiers threw the type and press into Buffalo Bayou and arrested the Bordens. The Texas Revolution ended days later. Lacking funds to replace his equipment, Borden mortgaged his land to buy a new printing press in Cincinnati. The 23rd issue of the Telegraph was published in Columbia on August 2, 1836. 
Although many had expected Columbia to be the new capital, the First Texas Congress instead chose the new city of Houston. Borden relocated to Houston, and published the first Houston issue of his paper on May 2, 1837. The newspaper was in financial difficulty, as the Bordens rarely paid their bills. In March 1837, Thomas Borden sold his interest in the enterprise to Francis W. Moore Jr., who took over as chief editor. Three months later, Gail Borden transferred his shares to Jacob W. Cruger. In Texas, Borden shifted into politics. He was a delegate at the Convention of 1833, where he assisted in writing early drafts of a Republic of Texas constitution. He also shared administrative duties with Samuel M. Williams during 1833 and 1834 when Stephen F. Austin was away in Mexico. President Sam Houston appointed Borden as the Republic of Texas Collector of Customs at Galveston in June 1837. Houston’s successor to the presidency, Mirabeau B. Lamar, removed Borden from office in December 1838, replacing him in the patronage position with a lifelong friend from Mobile, Alabama, Dr. Willis Roberts, newly arrived in Texas. Roberts’ son later was appointed Secretary of State of the Republic. However, Borden had been so well liked, the newcomer was resented. The Galveston News frequently criticized the new regime concerning malfeasance. When a shortfall in government funds came to light, Roberts offered to put up several personal houses and nine slaves as collateral until the matter could be settled. Two resentful desk clerks were later determined to have been embezzling funds, but this came too late for the doctor, who lasted in the job only until December 1839. Lamar appointed another man of his choice. After Houston was re-elected to the presidency, he reappointed Borden to the post, and he served from December 1841 to April 1843. He finally resigned after a dispute with Houston. Borden then turned his attention to real estate matters. He found a position at the Galveston City Company, where he served for 12 years as a secretary and agent. During that period, he helped sell 2,500 lots of land, for a total of $1,500,000. During these years, he began to experiment with disease cures. His wife Penelope died of yellow fever on September 5, 1844. It caused frequent epidemics and had a high rate of fatalities during the 19th century. Borden began experimenting with finding a cure for the disease via refrigeration. He also developed an unsuccessful prototype for a terraqueous machine. This was a sail-powered wagon designed to travel over land and sea, which he completed in 1848. By around 1849, Borden was experimenting with the creation of a dehydrated beef product known as the “meat biscuit”, which was loosely based upon the traditional Native American food, pemmican. Pioneers seeking gold in California needed a readily transportable food source that could endure harsh conditions and Borden marketed the meat biscuit as a suitable solution. Borden was operating a factory in Galveston to produce meat biscuits by 1851, and the product won him the Great Council Medal at the 1851 London World’s Fair. Notably, explorer Elisha Kane even carried a supply of meat biscuits on the Second Grinnell Expedition into the Arctic. However, Borden had been relying heavily upon the United States Army to issue him a lucrative contract to supply meat biscuits for use by American soldiers. When the military declined to buy into the product, Borden’s meat biscuit proved to be a failure. 
During Borden’s return voyage from the Exhibition in London, a disease infected both cows aboard the ship. The cows eventually died, along with several children who drank the contaminated milk. Contamination threatened other supplies of milk across the country. In part, the event inspired Borden’s interest in preserving milk. In 1856, after three years of refining his model, Borden received the patent for his process of condensing milk by vacuum. At that time, he abandoned the meat biscuit to focus on his new product. Having lost so much money in his meat biscuit endeavors, Borden was forced to recruit partners to begin production and marketing of the new product. He offered Thomas Green three-eighths of his patent rights and gave James Bridge a quarter interest in return for his investment; together, the three men built a condensery in Wolcottville, Connecticut (within modern-day Torrington), which opened in 1856. Green and Bridge were eager for profits, and when the factory was not immediately successful, they withdrew their support; it closed within a year. Borden persuaded them and a third investor, Reuel Williams, to build a new factory, this time in Burrville, Connecticut (also within modern-day Torrington), which opened in 1857. This second factory was hurt by the Panic of 1857 and had trouble turning a profit.

The following year, Borden’s fortunes began to change when he met Jeremiah Milbank, a financier from New York, on a train. Milbank was impressed by Borden’s enthusiasm for and confidence in condensed milk, and the two became equal partners. Together, they founded the New York Condensed Milk Company. As a railroad magnate and banker, Milbank understood large-scale finance, which was critical to the development of the business and to Borden’s success. Milbank invested around $100,000 in Borden’s business. When Milbank died in 1884, the market value of his holdings was estimated at around $8,000,000. With the founding of the New York Condensed Milk Company, sales of Borden’s condensed milk began to improve. The outbreak of the Civil War in 1861 soon created a large demand for condensed milk from the Union Army. In 1861, Borden closed the factory in Burrville, opening the first of what would be many condensed milk factories in upstate New York and Illinois. As the Civil War continued, he expanded his New York Condensed Milk Company quickly to meet the growing demand. Many new factories were built, and licenses were granted to individuals to begin producing condensed milk in their own factories using Borden’s patent. Despite the quick growth of the company, Borden put a high value on sanitation. He developed cleanliness practices that continue to be used in the production of condensed milk to this day. While all of this rapid growth was occurring, Borden continued to experiment with the condensing of meat, tea, coffee, and cocoa, and in 1862, while operating a factory in Amenia, New York, he patented the condensing of juice from fruits, such as apples and grapes. Borden tried to incorporate these other products into the line of the New York Condensed Milk Company, but the greatest demand was always for milk. It continued as the company’s major product.

Condensed milk can be used in hundreds of recipes. My mother, when she missed Argentina and wanted some dulce de leche, used to place a can in simmering water and cook it for 3 hours or so. It works perfectly. Nowadays in Britain the contents of a boiled can are used as the caramel layer between the biscuit base and the banana and cream in banoffee.
During the communist era in Poland, it was also common to boil a can of condensed milk in water for about three hours, making what was called kajmak (although the original kaymak is a product similar to clotted cream). Homemade kajmak is less common nowadays, but recently some manufacturers of condensed milk introduced canned, ready-made kajmak, which is now widely produced commercially and is a national favorite for dessert fillings. In Russia, the same product is called варёная сгущёнка (varionaya sguschyonka, “boiled condensed milk”). One of Russia’s most famous cakes, “bird’s milk cake”, is often made with condensed milk. Condensed milk is used in recipes for the popular Brazilian sweet brigadeiro, key lime pie, caramel candies, and other desserts. Sweetened condensed milk is also sometimes used in combination with clotted cream to make fudge in the UK and the US.

In many parts of SE Asia (notably Vietnam, Cambodia and Myanmar), as well as Europe, sweetened condensed milk is the preferred milk for making coffee or tea. In Malaysia, teh tarik is made from tea mixed with condensed milk, and condensed milk is an integral element in Hong Kong tea culture. In the Canary Islands, it is served as the bottom stripe in a glass of the local café con leche, and in Valencia it is served in a café bombón. A popular treat in Asia is to put condensed milk on toast and eat it much as one would jam on toast. In West Yorkshire, in the years after World War II, condensed milk was an alternative to jam. Nestlé has even produced a squeeze bottle for this very purpose. Condensed milk is a major ingredient in many Indian desserts and sweets. While most Indians start with normal whole milk and reduce it, condensed milk has also become popular because it saves time.

In New Orleans, sweetened condensed milk is commonly used as a topping on chocolate or similarly cream-flavored snowballs. In Scotland, it is mixed with sugar and butter then boiled to form a popular sweet candy called tablet or Swiss-milk-tablet, very similar to a version of Brazilian brigadeiro called branquinho. In some parts of the Southern United States, condensed milk is a key ingredient in lemon ice box pie, a sort of cream pie. In the Philippines, condensed milk is mixed with some evaporated milk and eggs, spooned into shallow metal containers over liquid caramelized sugar, and then steamed to make a stiffer and more filling version of crème caramel known as leche flan, which is also common in Brazil under the name pudim de leite. In Mexico, sweetened condensed milk is one of the main ingredients of a cold cake dessert, combined with evaporated milk, Marie biscuits, lemon juice, and tropical fruit. In Brazil, a version of this recipe swaps pudding (most commonly vanilla or chocolate) for the fruit and is known as torta de bolacha. In Jamaica, Guinness Punch is prepared using condensed milk mixed with bottled stout; it is often flavored with nutmeg and cocoa.

In Latin American countries, as well as many parts of the Caribbean, the Canary Islands, Albania, the Republic of Macedonia and some other parts of Europe, condensed milk (along with evaporated milk and whole milk or canned cream) is used as a key ingredient in the popular tres leches cake. It probably originated in Nicaragua but quickly spread. There are numerous variants depending on whether you make a sponge cake or a butter cake, and whether or not you add a whipped cream topping (possibly with fruit).
Here’s one recipe:

1 ½ cups all-purpose flour
1 tsp baking powder
½ cup unsalted butter
2 cups white sugar, divided
eggs
2 tsp vanilla extract, divided
2 cups whole milk
1 (14 fl oz) can sweetened condensed milk
1 (12 fl oz) can evaporated milk
1 ½ cups heavy whipping cream

Preheat the oven to 350˚F/175˚C. Grease and flour a 9×13” baking pan. Sift the flour and baking powder together and set aside. Cream the butter and 1 cup of the sugar together until fluffy. Add the eggs and 1 teaspoon of the vanilla extract and beat well. Add the flour mixture 2 tablespoons at a time, mixing well until thoroughly blended. Pour the batter into the prepared pan. Bake for 30 minutes, then pierce the cake several times with a fork and let it cool in the pan on a rack. Combine the whole milk, condensed milk, and evaporated milk, and pour the mixture over the top of the cooled cake. Whip the whipping cream, the remaining 1 cup of sugar, and the remaining 1 teaspoon of vanilla together until thick. Spread over the top of the cake. Refrigerate. Serve in squares.
Polar bears have a circumpolar distribution. They range throughout the arctic region surrounding the North Pole. The limits of their range are determined by the ice pack of the Arctic Ocean and the landfast ice of surrounding coastal areas. Bears have been reported as far south as the southern tips of Greenland and Iceland. During the winter, polar bears will range along the southern edge of the ice pack or northern edge of ice formed off the coasts of the continents. Pregnant females will overwinter on the coastlines where denning habitat is available for bearing young. During the summer, bears will remain at the edge of the receding ice pack or on islands and coastal regions that retain landfast ice. Six different populations are recognized as: Wrangel Island and western Alaska, northern Alaska, the Canadian Arctic archipelago, Greenland, Svalbard-Franz Josef Land, and Central Siberia. (DeMaster and Stirling, 1981; Nowak, 1999) The body of a polar bear is large and stocky, similar to that of a brown bear, except it lacks the shoulder hump. The head is relatively smaller than the heads of other bears and the neck is elongated. At the shoulder a polar bear can measure 1.6 m in height. Adult males weigh between 300-800 kg (660-1760 lbs) and can reach 2.5 m in length from tip of nose to tip of tail. Females are smaller, weighing 150 to 300 kg (330 to 660 lbs) and measuring 1.8 to 2 m in length. The pelage generally has a white appearance, but it can be yellowish in the summer due to oxidation or may even appear brown or gray, depending on the season and light conditions. Polar bear skin is black and the fur is actually clear, lacking in pigment. The white appearance is the result of light being refracted from the clear hair strands. The forepaws are broad and make excellent paddles while swimming. The soles of both hind and fore feet are furred for insulation and traction while walking on ice and snow. Polar bears have a plantigrade gait. Females have four functional mammae. (DeMaster and Stirling, 1981; Nowak, 1999) Polar bears have a sequential polygynous mating system. Male and female breeding pairs remain together for a short time while females are in estrus (3 days). (DeMaster and Stirling, 1981; Nowak, 1999; Ramsey and Stirling, 1988) Mating occurs in late winter and early spring, from March to June. Delayed implantation extends gestation to 195 to 265 days. Pregnant females establish a winter den on land dug into the snow usually within 8 km of the coast in October or November. An average of 2 cubs are born in the mother's den between November and January, litter sizes can range from 1 to 4. She remains in hibernation, nursing her cubs until April. The mortality rate for cubs is estimated to be 10-30%. The average annual rate of reproduction calculated by DeMaster and Stirling (1981) was 0.274 females per adult female. (DeMaster and Stirling, 1981; Nowak, 1999; Ramsey and Stirling, 1988; Stirling and McEwan, 1975) Cubs are born with their eyes closed; they have a good coat of fur and weigh about 600 grams. They will emerge from the den in spring weighing 10 to 15 kg. Mothers provide all parental care of their offspring. The cubs remain with their mother for 2 to 3 years. They will not reach sexual maturity until 5 to 6 years old. (DeMaster and Stirling, 1981; Nowak, 1999; Ramsey and Stirling, 1988) In the wild polar bears are estimated to live 25 to 30 years. Annual adult mortality is estimated to be 8 to 16%. 
In captivity the oldest recorded lifespan was a female that died at the Detroit Zoo in 1991 at 43 years and 10 months old. (DeMaster and Stirling, 1981; Nowak, 1999)

Polar bears are solitary. The exceptions to this are when a mother is caring for her cubs and when males and females are paired during mating. Bears may also come into competition with one another when a seal kill attracts other bears looking to scavenge. In instances where bears encounter each other, the smaller bear will tend to run away. A female with cubs, however, will charge males that are much larger to protect her young or a kill that they are feeding on. Polar bears are inactive most of the time (66.6%), either sleeping, lying, or waiting (still hunting). The rest of their time is spent traveling (walking and swimming; 29.1%), stalking prey (1.2%), or feeding (2.3%). Polar bears are excellent swimmers; they may range widely in search of food, and sightings as far south as Maine, in the United States, have been documented. (DeMaster and Stirling, 1981; Stirling and McEwan, 1975; Stirling, 1974)

Like other bear species, polar bears have a keen sense of smell and use their sensitive lips and whiskers to explore objects. Their vision and hearing are not exceptionally well developed. Polar bears use a "chuffing" sound as a form of greeting. (DeMaster and Stirling, 1981)

Polar bears are carnivores. In the summer, they may consume some vegetation but gain little nutrition from it. Their primary prey are ringed seals (Pusa hispida). They also hunt bearded seals (Erignathus barbatus), harp seals (Pagophilus groenlandicus), hooded seals (Cystophora cristata), walruses (Odobenus rosmarus), sea birds and their eggs, small mammals, and fish, and they scavenge on the carrion of seals, walruses, or whales. Bears often leave a kill after consuming only the blubber. The high caloric value of blubber relative to meat is important to bears for maintaining an insulating fat layer and storing energy for times when food is scarce. Polar bears do not store or cache unconsumed meat as other bears do. (DeMaster and Stirling, 1981; Stirling and McEwan, 1975)

Polar bears have two hunting strategies. Still-hunting is used predominantly. This involves finding a seal's breathing hole in the ice and waiting for the seal to surface to make the kill. When a bear sees a seal basking out of the water, it will use a stalking technique to get close, then make an attempt at catching it. One stalking technique is crouching and staying out of sight while creeping up on the seal. Another technique is to swim through any channels or cracks in the ice until it is close enough to catch the seal. Using this technique a bear may actually dive under the ice and surface through the breathing hole in order to surprise the seal and eliminate its escape route. Feeding usually occurs immediately after the kill has been dragged away from the water. Polar bears consume the skin and blubber first and the rest is often abandoned. Other polar bears or arctic foxes then scavenge these leftovers. After feeding, polar bears will wash themselves by licking and rinsing their fur. (DeMaster and Stirling, 1981; Nowak, 1999; Ramsey and Stirling, 1988; Stirling, 1974)

Polar bears are a top carnivore of the Arctic. The remains of seal kills left unconsumed by bears are likely an important source of food for younger, less-experienced polar bears and for Arctic foxes.
(DeMaster and Stirling, 1981; Stirling and McEwan, 1975; Stirling, 1974) Polar bear materials have historically been used by native people of the Arctic for fur, meat, and medicines. Hunting by those groups is still allowed in the United States, Canada, and Greenland (Denmark). Trophy and commercial hunters have taken bears for pelts that sold for $3,000 in the past. (Nowak, 1999)

Polar bears are viewed as potentially dangerous to humans. Contact between humans and bears is rare due to the large home range of individual bears and the sparse human population throughout their distribution. Two deaths resulting from polar bear encounters have been reported. (Nowak, 1999)

Polar bear populations were recently considered to be stable or growing in some areas. In 1993, the estimated world population was 21,470 to 28,370 bears. In 1972, the United States Marine Mammal Protection Act prohibited all hunting of polar bears in the U.S. except for subsistence. In 1973 the United States, Russia, Norway, Canada, and Denmark came to an agreement to protect polar bear habitat, limit hunting, and cooperate on research. Polar bear populations are currently threatened by trends in global warming, which continues to decrease the extent of their habitat (pack ice) and their prey base. In 2008 the U.S. Fish and Wildlife Service listed polar bears as threatened, and the IUCN lists the species as vulnerable. (DeMaster and Stirling, 1981; Hilton-Taylor, 2000; Nowak, 1999)

Polar bears bred with brown bears have produced fertile hybrids (DeMaster and Stirling, 1981). In fact, polar bears have been shown to be genetically more closely related to certain brown bear populations than some brown bear populations are to each other. This suggests that polar bears evolved fairly recently from a brown bear ancestor and that brown bear genetic structure is more complicated than previously thought. (DeMaster and Stirling, 1981; Talbot and Shields, 1996)

Tanya Dewey (editor), Animal Diversity Web. Aren Gunderson (author), University of Northern Iowa. Jim Demastes (editor), University of Northern Iowa.

"Division of Endangered Species, Species Information" (On-line). Accessed Dec 5, 2001 at http://endangered.fws.gov/wildlife.html.
DeMaster, D., I. Stirling. 1981. *Ursus maritimus*. Mammalian Species, 145: 1-7.
Hilton-Taylor, C. 2000. "The 2000 IUCN Red List of Threatened Species" (On-line). Accessed Dec 5, 2001 at http://www.redlist.org/search/details.php?species=22823. Nowak, R. 1999. Walker's Mammals of the World. Baltimore and London: The Johns Hopkins University Press. Ramsey, M., I. Stirling. 1988. Reproductive biology and ecology of female polar bears (*Ursus maritimus*). Journal of Zoology, 214: 601-634. Stirling, I. 1974. Midsummer observations on the behavior of wild polar bears (*Ursus maritimus*). Canadian Journal of Zoology, 52: 1191-1198. Stirling, I., E. McEwan. 1975. The caloric value of whole ringed seals (*Phoca hispida*) in relation to polar bear (*Ursus maritimus*) ecology and hunting behavior. Canadian Journal of Zoology, 53: 1021-1027. Talbot, S., G. Shields. 1996. Phylogeography of Brown Bears (Ursus arctos) of Alaska and Paraphyly within the Ursidae. Molecular Phylogenetics and Evolution, 5: 477-494.
“Three different witnesses described seeing brilliant fire engine red – ‘brighter than fireworks’ – balls of light, larger than any stars or planets, but smaller than the full moon. The bright red lights appeared in the sky in the area of the two circles.” – Jeff Wilson, Director, ICCRA

Coles County, Illinois August 21, 2005 – Cattle Corn

The fifth formation was discovered on August 21, in Coles County, Illinois. There were two circles in cattle corn that were discovered by the son of the landowner while he was cutting grass in an adjacent drainage area. Both circles exhibited a radial lay pattern. They were not swirled down. They were flattened from the center toward the edge.

SORT OF LOOKING LIKE A HUMAN EYE, RADIATING FROM A CENTER POINT IN LINES RADIATING OUTWARD.

Yes, although the large circle does have a very slight clockwise spin to it. It also has an outer band of stalks about 4 feet wide that alternated being flattened clockwise and counterclockwise. Sometimes those areas would overlap. Sometimes they were diametrically opposed. It appears that the diametrically opposed sections are at both the north and south points. The overlapping areas are at the east and west points. The north part of the circle is slightly flattened out, so it’s not really a perfect circle. The large circle measures 74.5 feet by 78 feet, so it’s more of an elongated ellipse than a circle. The smaller circle is also a slight ellipse, being 22 feet 10 inches by 20 feet 10 inches. That also had a radial lay. That circle also had just a couple of stalks on the outer edge that were swirled down – literally, like four stalks out of the entire formation were swirled. The rest of it was flattened from the center.

WHAT ABOUT THE NODES THAT YOU EXAMINED?

We did not see widespread node damage. However, there were a couple of stalks that exhibited what appeared to be expulsion cavities. We also spent an entire day measuring nodes for doing an elongation measurement. I managed to do the numbers on three of those and they appear to be statistically elongated, by about 25%. I have to say that in the large circle, most of the stalks had already died and dried up by the time we took the node measurements. But there were several stalks that were still green. Those stalks appeared to have the longest node elongation of all of the plants. So of all the stalks that did not get snapped off and die instantly, the ones that were still alive and green had the longest nodes of all the stalks we examined.

WHAT ABOUT SOIL?

We did take soil samples. We did not find any obvious signs of elevated magnetic particles. We also did not appear to have any unusual levels of radiation or of electric or magnetic fields, but I haven’t run the numbers yet to validate that. I will say that while I was out in the field, at one point both Delsey Knoechelman and I heard a very unusual sound that we could not place. It seemed to be coming from between the two circles and it sounded like a low, mechanical humming type of sound.

COULD YOU TRY TO MIMIC IT?

Couldn’t do it for you. It’s lower than my vocal register could probably do. Now, we did not record it, but unusually, Ted Robertson later on that night was filming the formation with his infrared video camera. As he panned across the main circle looking toward the small circle, in the same direction in which we heard that sound, he said that he recorded an audible interference pattern every time that he panned across that same area.
So there is some sort of unusual RF interference pattern that gave him distortion on his videotape while he was taping. Mysterious Lights Seen by Multiple Eyewitnesses Near Coles County, Illinois, Formation WHAT ABOUT ANYONE REPORTING SEEING ANYTHING UNUSUAL IN THE SKY OR HEARING AN UNUSUAL SOUND AROUND THESE RECENT U. S. PATTERNS? Not in the Greene County, Ohio, or Canisteo, New York, case, but in the Coles County, Illinois, event – once we had arrived, we were contacted by four different people who reported seeing unusual lights in the sky in the area where the formation was, both around the time of the formation and subsequent to the discovery of the two circles. Like for instance, there was one person who reported at 4:30 in the morning after the formation was discovered, they watched two unusually bright, white balls of light with beams of light shining toward the ground that were spotted in the area toward where the formation was located, although the formation was not visible from their location. Three different witnesses described seeing brilliant fire engine red – ‘brighter than fireworks’ – balls of light, larger than any stars or planets, but smaller than the full moon. The bright red lights appeared in the sky in the area of the two circles. The appearance of the lights caused two of the witnesses to nearly drive off the road. There were two completely independent people who came to us at different times to report this same eyewitness event of seeing the red lights. SO, THE ONLY MYSTERIOUS LIGHTS SO FAR IN AUGUST THEN WERE REPORTED WITH ILLINOIS? As far as I know at this point. DID THE EYEWITNESSES DESCRIBE ANYTHING UNUSUAL ABOUT THE MOTION OF THE LIGHTS IN RELATIONSHIP TO THE CROP? One of the witnesses said they watched for some time and then a light went out. It was gone for a short time and it appeared again and it had moved to a slightly different area. Then they watched it for some time, and then the light faded down again, was gone off for a short period, and then came back on in a third location. But all three locations were generally in the same area of where the two circles were. Diagram of Coles County, Illinois Circles By Ted Robertson, ICCRA Harmonics and Diatonic Ratios Ted Robertson, Harpsichord Craftsman: “It seems that the two Illinois circles in corn create diatonic ratios when compared to each other using Hawkins style methodology. Just thought I’d play round with these to see how close to being circular they are. I did a simple Hawkins method on these and it turns out that if we take the largest circle at 75 feet and the smallest at 20 then I got a ratio of 3.75 (75/20) which is three octaves above middle C and a fifth. This is the interval from c’ to g”‘. So its a fifth three octaves higher following Hawkins methodology. Conversely if we took the largest circle as being the lowest note (instead of making it be the highest as Hawkins would have done in the previous calculation) then starting with the smallest circle designated as c’ then the ratio to the larger circle would sound three octaves and a fourth below. Played on a piano this would give an interval from c’ to FFF the interval is three octaves and a fifth as well. So regardless of which circle we assign to be middle c since we have a three octaves and a fifth interval which gives a high “G” and a low “F ” , these are diatonic and not enharmonic ratios. The high G and low F both are “white notes” on the piano in the key of C. 
For more about the diatonic ratio analysis of crop formations by deceased astronomer and mathematician, Gerald Hawkins, see: Glimpses of Other Realities, Vol. I-Facts & Eyewitnesses in Earthfiles Shop. Belleville, Wayne County, Michigan August 28, 2005 – Corn On August 28, I received a report of a very small crescent-shaped formation in corn in Belleville, Wayne County, Michigan, southwest of Detroit. The circle is about eight feet across. ICCRA member, Dr. Charles Lietzau, has made a preliminary investigation, so photos and additional details will be reported soon.” For other Earthfiles reports about U. S. crop formations, see Earthfiles Archives: - 08/02/2005 — Part 2: Anomalies Confirmed in Pennsylvania and Arizona Randomly Downed Crops - 08/02/2005 — Part 1: Anomalies Confirmed in Pennsylvania and Arizona Randomly Downed Crops - 07/23/2005 — Mystery of Six Grass Circle Formations in North Carolina - 06/02/2005 — Part 2 – Highly Anomalous Pigment Formation in 2004 Hillsboro, Ohio, Crop Formation - 05/26/2005 — Phoenix Barley Mystery: Apparently Irrigation and Wind - 05/20/2005 — May 2005 Crop Formation Update in Six Countries - 05/09/2005 — Mysterious Lights and 2003 Serpent Mound Soybean Formation - ·10/17/2004 — American Crop Formations: 1880-2004 - 09/22/2004 — Miamisburg and Serpent Mound, Ohio Crop Formations: Geometries Compared - 09/10/2004 — Update on Miamisburg, Ohio, Corn Pictogram – Balls of Light? - 09/05/2004 — Part 2 – Hillsboro, Ohio Corn Plant Anomalies - 09/04/2004 — Hillsboro, Ohio Corn Formation – High Strangeness in Soil and Plants - 09/02/2004 — Updated Photos: Big, Impressive New Corn Formation in Miamisburg, Ohio - 07/26/2004 — Crop Circles in Tilden, Wisconsin Oats and 90-Degree Angles in Litchfield, Minnesota Barley - 07/15/2004 — Updated: Part 1-Beyond Hillsboro, Ohio, More Corn Down in New Milford, Connecticut - 07/13/2004 — Updates on Spanish Fork, Utah Barley Formation - 07/06/2004 — Additions to Spanish Fork, Utah Formation and Mysterious Lights Seen - 07/06/2004 — Mysteriously Downed Oat Plants in Eagle Grove, Iowa, and Downed Corn in Hillsboro, Ohio - 07/04/2004 — Crop Formation in Spanish Fork, Utah - 06/02/2004 — 2004 Peach Orchard, Arkansas Crop Formation - 05/24/2004 — Biophysicist W. C. Levengood’s Crop Circle Reports Available for First Time On Internet - 05/22/2004 — 2004 Overview of Crop Formations in Six Countries - ·12/05/2003 — Diatonic Ratios and Seed Changes in 2003 California Wheat Circles Rule Out Hoax? - ·11/01/2003 — Another Soybean Formation in Ohio - ·10/17/2003 — 2003 “UFO Flap” in Ohio - ·10/03/2003 — Part 2 – Military Interest in Serpent Mound and Seip Mound Formations? - ·10/02/2003 — Part 1 – Another Soybean Formation Near Seip Mound in Ohio - 09/12/2003 — Second Soybean Crop Formation in Ohio Manmade. By Special USAF Investigation Unit? - 09/06/2003 — Part 2 – Unusual Soybean Formation Near Serpent Mound, Ohio - 09/05/2003 — Part 1 – Unusual Soybean Formation Near Serpent Mound, Ohio - 08/29/2003 — Part 1 – Why Do Military Helicopters Focus On Crop Formations? - 07/19/2003 — Update – Defiance, Missouri T-Pattern Cut in Saplings - 06/13/2003 — Updated: Fractal Crop Formation in Knobel, Arkansas - 05/10/2003 — Tree Formation in Defiance, Missouri - 08/20/2000 — Crop Formations In North Dakota - 11/30/1999 — A New Crop Formation In Marion, New York and Crop Research Updates - 07/15/1999 — Brentwood, Tennessee Crop Formation and New U.K. Photos by Peter Sorensen © 1998 - 2022 by Linda Moulton Howe. All Rights Reserved.
Cities and towns across America are struggling to define themselves in an increasingly polarized national landscape. With our 24-hour news cycles and social media bubbles, we often focus on what separates us from each other, rather than the identity we share through local history and community spirit. Efforts to heal our divisions must start on the local level. The symbols and icons that define us must include, acknowledge, and represent all members of our communities. Here in Groton, our town’s identity is symbolically represented by a divisive and long-controversial 1898 Town Seal. Although Groton was founded in 1655, it had no official town seal until the 19th century had very nearly clicked over into the 20th. And a seal was only adopted then because 1898 was the last year in which the cities and towns of Massachusetts could issue vital documents without one.1 To meet this statutory deadline, the Town of Groton appointed a three-member Committee of the Town Seal, which only ever held two meetings of any consequence.2 At the first meeting, the committeemen outsourced their design duties to the Honorable Samuel A. Green, a Groton-born medical doctor turned historian, turned politician, turned author, turned man who was eager to try his hand at graphic design, from thirty-plus miles away in his then-current hometown of Boston.3 At the second meeting, the committeemen received their first-time designer’s first-draft submission–the now-familiar “Faith and Labor” design–and passed it along to Town Meeting for approval.4 For the work product of an expert in Groton’s history and culture, the “Faith and Labor” design is oddly generic. Dr. Green could easily have referenced Groton’s unique drumlin swarm geography, its contributions to the American Revolution, or any of the storied names, structures, and institutions that shaped the town’s identity from 1655 until his own time. And yet, his submission included none of the elements that might have helped tell Groton’s unique story. Instead, Dr. Green used religious iconography common to all of the strict Puritan communities of the Massachusetts Bay Colony, paired with imagery representing the predominant occupation throughout all of New England. Because faith and labor were the common currency of the time, these symbols do nothing to distinguish Groton from any other place. The explanatory letter included with the design gives us some hint as to why.5 Viewed through the Victorian-era prism of Dr. Green, as he stood 120 years behind us while peering back 240 years more distant from that, we can see his vision of a town seal as it might have been created in the town’s earliest days. In the Groton of 1655, where church attendance was mandatory and heretics were banished, a religious symbol on an official seal would have made perfect sense. Religion and government were one and the same, the Church blended with the State, and Dr. Green’s design would have been entirely fitting if it had existed at the time.6 In the Groton of 1775, the town and its balance of religion and government had evolved. Town residents angrily dismissed their spiritual leader for failing to support a popular political cause of the day: the struggle to free the colonies from British rule.7 The Reverend Samuel Dana never returned to the ministry, instead becoming a lawyer, probate judge, and New Hampshire state senator. 
In the Groton of 1898, as part of the free country that Groton residents had helped to create and whose Union they had recently fought to preserve, town government was constitutionally barred from endorsing any one religion or religion in general.8 The town had evolved again, and Dr. Green’s retro-1655 design was no longer an appropriate concept. Nor does the 1898 Town Seal represent Dr. Green’s best work. As a worthy first attempt, rushed into service to meet a state-imposed deadline, it has always ever been a placeholder for the seal that Dr. Green might have given us if he’d had the time to explore multiple ideas, make multiple revisions, and incorporate feedback from residents with a diversity of viewpoints. In the Groton of 2017, in which Dr. Green is no longer with us, we are the ones tasked with being the stewards of Groton’s past and the shepherds of its perpetually improving future. The long-overdue revisions to the 1898 Town Seal are our duty and obligation to undertake. In doing so, we must approach this task in the spirit of continuing Dr. Green’s work, taking inspiration from his service, and honoring his legacy in the context of our evolving community. A new Town Seal Committee, this time meeting more than just twice and incorporating the ideas of more than just one man, is the proper entity to address these issues and finally give us a symbol that can represent Groton’s past, present, and future alike. It’s just one seal in one small town but small efforts can add up to large results. One symbol or policy at a time, we can bridge the divisiveness in our communities and again become one nation, indivisible, with liberty and justice for all. Article 35 at the upcoming Spring Town Meeting would create a Town Seal Committee charged with soliciting public input into the design for a new Town Seal; selecting from among the submissions received the design that best embodies Groton’s character, history, and aspirational values; and presenting that design to a future Town Meeting for approval. Please attend to voice your support. As a member of the Groton Interfaith Council, Greg R. Fishbone supports its mission to foster understanding, respect, justice and peace among people of a variety of religious traditions through worship, fellowship, education, and service. As a former board member of the Groton Historical Society, he has worked to preserve Groton’s rich historical resources, promote its proud traditions, and recognize that its history includes the present and future as well. He and his wife are the parents of two proud young Grotonites. Opinion blog posts represent the opinions of the author and are not necessarily endorsed by Indivisible Groton Area or its individual members. IGA invites input and opinion from among the diversity of its membership. - 1898 Acts and Resolves of the General Court of Massachusetts Chapter 389, as amended by 1899 Acts Chapter 256, is still in effect today. - The Committee of the Town Seal was composed of Chairman Michael Sheedy Jr, Charles Woolley, and Francis M. Boutwell. Their official report can be found in the Board of Selectmen notes for the year ending March 19, 1898. - From the committee report: “Shortly after our appointment the attention of Dr. Samuel A. Green was called to the matter, and with his usual interest to all things that relates to his native town, he readily offered his service in preparation of a design.” - This was the meeting at which the committee finalized its report, March 16, 1898, on the very same day Dr. 
Green’s design arrived from Boston. Spring Town Meeting was held April 4, 1898. The new legislative act went on the books on April 29, 1898. - Dr. Green’s 1898 letter is reproduced in a book he co-authored with Elizabeth Sewall Hill, Facts Relating to the History of Groton, Massachusetts, Volume 2, on pages 171-172: Boston, March 16, 1898 To: Michael Sheedy, Jr., Esq., Groton Agreeably to your request I send herewith a design, as given above, for a Town Seal of Groton. For the convenience of the voters, who are the final judges in the matter, I have had it printed, so that at a glance its general effect may be more readily seen. The design is a simple one, and is intended to typify the character of its inhabitants. The Bible represents the faith of the early settlers of the town, who went into the wilderness and suffered innumerable privations in their daily life as well as danger from savage foes. Throughout Christen-dom to-day it is the corner-stone of religion and morality. The Plough is significant of the general occupation of the people. By it the early settlers broke up the land and earned their livelihood; and ever since it has been an invaluable help in the tillage of the soil. - Groton’s early years fell between the 1630s banishment of Roger Williams and Anne Huchinson for heresy and the 1690s Salem witch trials and executions. - The Reverend Samuel Dana had to realign his politics and literally beg the town’s forgiveness in order to remain in Groton in this letter: I, the subscriber, being deeply affected with the miseries brought on this Country by a horrid thirst for ill-got wealth and unconstitutional power; and lamenting my unhappiness in being left to adopt principles in politicks different from the generality of my countrymen, and thence to conduct in a manner that has but too justly excited the jealousy and resentment of the true sons of liberty against me, earnestly desirous at the same time to give them all the satisfaction in my power, do hereby sincerely ask forgiveness of all such for whatever I have said or done that had the least tendency to the injury of my Country, assuring them that it is my full purpose, in my proper sphere, to unite with them in all those laudable and fit measures that have been recommended by the Continental and Provincial Congresses, for the salvation of this Country, hoping my future conversation and conduct will fully prove the uprightness of my present professions. Groton. May 23, 1775. Dana’s public apology was accepted by the town’s Committee of Correspondence: The inhabitants of Groton in town-meeting assembled, the Reverend Samuel Dana offered that to the Town with regard to his political principles and conduct with which the Town voted themselves fully satisfied, and that he ought to enjoy the privileges of society in common with other members; and we hope this, with the following by him subscribed, will be fully satisfactory to the publick. - Per the Establishment Clause of the First Amendment, applied to town government by the Fourteenth Amendment. No one can predict how a court would resolve such a case, but certainly no one wants to endure the contentious, drawn-out, and expensive legal proceedings that would be required to find out.
On the 30th anniversary of the World Wide Web, its founder Tim Berners-Lee has highlighted three sources of dysfunction on the web:

- Deliberate, malicious intent, such as state-sponsored hacking and attacks, criminal behaviour, and online harassment.
- System design that creates perverse incentives where user value is sacrificed, such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation.
- Unintended negative consequences of benevolent design, such as the outraged and polarised tone and quality of online discourse.

Berners-Lee says in a letter that we “can’t just blame one government, one social network or the human spirit. Simplistic narratives risk exhausting our energy as we chase the symptoms of these problems instead of focusing on their root causes. To get this right, we will need to come together as a global web community.” He has called for the web to be recognised as a human right and built for public good, and for citizens, governments and companies to build a new Contract for the Web.

In a conversation with Berners-Lee, I discussed issues that are particularly rife in India: centralisation of the Internet with a few dominant players, misinformation and Internet Shutdowns, personal data being seen as a national resource and data localisation, platform neutrality, and end-to-end encryption.

On Centralisation of the Internet, the accessibility of web versus apps

MediaNama: We’ve seen a large amount of centralisation of the Internet ever since the launch of the iPhone and the growth of app stores and apps. I was wondering if the web has lost to the app ecosystem, with the kind of centralisation that apps have facilitated. Is there a way to re-decentralise the web from here on?

Tim Berners-Lee: In a way these are two questions in one and they are related… Three questions in one. One of them is: have web apps lost out to native apps? When will we know that we have lost? We do, when we look at something on an app and it doesn’t have a URL. Because you can surf a website, you can take a URL and bookmark it. You can drop it into a chat. We can have a chat about it, a review about it, and it’s a part of the discourse. That’s a really important part of the web: that anyone can refer to any other thing. But then if you look at a lot of the apps I use, in fact the way some apps work is that they do typically provide something of a very specific environment for, like watching a video, using full screen, but then good apps have a link button that you can click on. And so even when they are native apps, there is a fully functional web behind them. Anything that you can provide in a good web app, you can produce a URL for. There are URLs, and references to the URL, and other people who don’t have the app will see a version on the web, and people who do have the app have a choice of whether they see it on the web or the app. I think it’s a constant battle.

MediaNama: We have around 40 to 60 million people coming online every year, and many of them do not speak English, and there is no functionality for URLs to be in Indian scripts and Indian languages. For many of these people, the primary access points are apps, and not the web or a URL. Do you think there has been a failure of the evolution of the URL? If you had to reimagine it (the URL) for today, how would you reimagine it?

Tim Berners-Lee: The URL itself is just a pointer. When you click on it, then your device goes over the web, and it sends your favourite languages in http to the server.
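The “favourite languages” Berners-Lee mentions travel with every web request as the HTTP Accept-Language header; a server that supports content negotiation can use it to choose which translation to return. A minimal sketch of that mechanism in TypeScript – the URL and the language preference string below are only placeholders, not anything specific to the sites discussed in this interview:

```typescript
// Minimal sketch of HTTP language negotiation: the client sends an
// Accept-Language header, and a server that negotiates content picks a
// matching translation and reports its choice in Content-Language.
async function fetchInPreferredLanguage(url: string, languages: string): Promise<string> {
  const response = await fetch(url, {
    headers: { "Accept-Language": languages }, // e.g. Hindi first, then English
  });
  console.log("Server chose:", response.headers.get("content-language"));
  return response.text();
}

// Placeholder usage: ask for Hindi if available, otherwise fall back to English.
fetchInPreferredLanguage("https://www.wikipedia.org/", "hi-IN, hi;q=0.9, en;q=0.5")
  .then((body) => console.log(body.slice(0, 200)))
  .catch((err) => console.error(err));
```

A server that ignores the header simply returns its default language, which is why Berners-Lee argues below that the effort has to come from publishers and governments as much as from the protocol itself.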
So when you go to Wikipedia – you follow a link to Wikipedia – and it can’t help you by giving you, or if you’re a woman looking to look up some pre-natal information, and you’re not an English speaker, I think we need to put in a lot more effort. It’s two sides, actually. Anyone who is looking to build tools like that in India, and in India you have one of the largest diversities of languages out there, but certainly, effort put in companies, potentially regulation by government to say that if you’re serving people in any language with more than 100,000 people, every government should provide for automatically switching to that. The functionality to switch languages automatically has been built in from the very beginning, from the very very early days of http. And so, in the open source community, for example, there are big pushes to take all the comments that a program will give you, and turn that into a big table of it, and get people from different languages to sit in different columns, that this is what it is in Hindi, and this is what it is in Arabic. It’s hard work. Maybe the Web Foundation hasn’t pushed very hard on that, on internationalisation, and it is something that we could do. Sometimes a crowdsourced drive to internationalise an open source product can be very effective. You have a lot of dialects which are only spoken in India, and by a relatively small number of people. Getting the Indian open source community involved would be very interesting. And the Indian government ought to. I think it’s their moral duty to make sure that the web works for everybody in India. If they don’t, the open source community could work on that. International domain names, you can have different scripts in domain names. To a certain extent, users spend less time looking at URLs than domain names. They shouldn’t be really aware of the URLs. They should be aware of the links. You should be able to drag a little icon for the thing you’re reading and drop it into the chat. I should be able to click on it without ever worrying about what the actual characters of the URL are. In the original design of the web, you didn’t see the URL. MediaNama: You think there are ways to redecentralise the web? Tim Berners Lee: As background, you may remember the time when there were online services, and certainly in the English speaking world, America Online became completely dominant, and people worried that they had control of the whole information world, and then the web appeared. When the web appeared, everybody got online using Netscape. So then people worried that Netscape had control of the information world, because the only browser you would use is Netscape. Until Microsoft Explorer came on. Then they worried that Microsoft had control and that was even more serious because Microsoft had control of the operating system and the browser. But then Microsoft was forced to introduce APIs and separate the operating system and the browser, and very soon, there were a few browsers. Browsers like Firefox and Safari, and now Chrome, appeared. People have worried about Google being the dominant search engine. People worry about Facebook – and if you’re in a Facebook country, being the dominant search engine. So the first message is, through history, it’s been funny how the completely dominant platform has sometimes lost its dominance to a challenger, like MySpace and Facebook, or sometimes in a space dominance hasn’t become important. 
It isn’t the browser you use – the search engine you use has much more power over your life. If one day Facebook wasn’t cool, you can imagine – it’s a pain to move from one social network to another, but on the other hand, social networks like mewe.com have respected privacy, where you can share your photos with family and friends in a privacy preserving way. The moment there’s a problem with another platform, a million more people join mewe. That’s one possible way in which people go to other platforms. The other possibility may be is that there is a lot of pressure put on large companies like Facebook and Google to expose APIs, and the moment you want to download your Facebook updates, I imagine there’s a Facebook API to do it. If Facebook goes along with the project we have called Solid, then they will produce APIs which are standard. That means you can look at your photos whether they’re by Facebook, Dropbox or Google. Some people draw an analogy of Microsoft allowing others to create for Windows by exposing Windows API. One of the possibilities to enable competition in that space. In the UK there’s a thing called open banking, which means that any bank must expose your financial data via open API. That means there’s all kinds of financial apps that you can get. There’s a competition between different apps for helping you manage your money, which comes from Open APIs. One of the things we can do is push for Open APIs. On data localisation and data as a national resource MediaNama: You talked about portability. There’s a conversation in India about personal data being seen as a national resource, and there’s a strong move here to localise data storage and restrict cross border flow of data. What are your views on how that will impact the global nature of the Internet? Tim Berners-Lee: That’s one of the things that the Web Foundation has always been concerned about: the balkanisation of the Internet. If you want to balkanise it, that’s a pretty darn effective way of doing it. If you say that Indian people’s data can’t be stored outside India, that means that when you start a social network which will be accessed by people all over the world, that means that you will have to start 152 different companies all over the world. It’s a barrier to entry. Facebook can do that. Google can do that. When an Indian company does it, and you’ll end up with an Indian company that serves only Indian users. When people go abroad, they won’t be able to keep track of their friends at home. The whole wonderful open web of knowledge, academic and political discussions would be divided into country groups and cultural groups, so there will be a massive loss of richness to the web. On Platform Neutrality MediaNama: When it comes to access to information and access to URLs, there is talk about a need for neutrality for platforms like Facebook and Twitter. Do you think that Net Neutrality regulations ought to be extended to platforms that operate on the web? Tim Berners-Lee: Net Neutrality at the moment is at the TCP level. You’re suggesting that neutrality should be introduced for things like search engines? MediaNama: It’s not just search engines, but also in terms of app discovery, in terms of prioritisation on social networks of certain news items. Just a couple of days ago, there was a meeting of Indian parliamentarians with representatives of Facebook and WhatsApp, a couple of weeks before that, with Twitter. 
Indian parliamentarians demanded that these platforms be neutral in terms of what they show to people on the web. It's not about network regulations being expanded to the web, but a call for neutrality of platforms, and whether there should be regulation of that.

Tim Berners-Lee: Can we call it editorial neutrality? Like the editor of the Guardian will try to emphasise that his staff should be neutral – that they shouldn't discriminate in stories about certain cultures, about different race, colour or creed, or sexual preferences – and when it comes to political parties, they should give an even balance.

MediaNama: More about algorithmic neutrality than editorial neutrality.

Tim Berners-Lee: Well, it's the same thing now. So much of the editing job is done by algorithms. The Web Foundation has always pushed for transparency about these algorithms. It's hard, but to a certain extent, when Facebook started as a club that a few people joined, then [for example] one of the things you can do – which is a human right, to associate with your friends – is start a club in which the only news articles that you can look at are about birds. And you can do that within a club. Political parties are a club where you have a mindset, where one economic model of how to run a country differs from another, and free speech gives people the freedom of association. You have to be careful. The problem comes when you have a search engine which is a vastly dominant player. So the argument that people have made, for example in Germany, is that Google is therefore effectively a national resource – whoever runs it, it's a national resource that Germans go to every day of their lives – and the German attitude is that it should therefore run by their rules. And it should be neutral… it should act as if it were a government. And so to a certain extent, they have a point. It's a bit like when people nationalised the railroad system in a country to get it to perform in a way that serves everybody in the country, or brought in regulations to ensure that it works as if it were a part of the government anyway. When something is dominant in a country, that's an argument. The beautiful thing is that when you define neutrality, you may find that the people in Texas, in Germany and in Finland would have different definitions.

On Internet Shutdowns

MediaNama: India has had the largest number of Internet shutdowns in the world. Last year, it was 134, and most of them have been linked to the spread of misinformation, which has potentially led to riots. It is in those riotous situations that the district collectors shut the Internet down. Now there's a top-down approach to eradicating misinformation, by regulating platforms and holding them accountable. Do you think this mechanism is feasible, given the vast amounts of misinformation that go through the platforms, and what would be your solution to addressing this problem?

Tim Berners-Lee: It is a very real problem. A lot of the concept of the web is about trying to build a world in which people naturally spend more time working towards the truth than working towards exchanging conspiracy theories. Asking the platforms – I know that when discussions on social media have led to genocide, the platforms have felt responsible and looked at what they want to do, and governments have wanted to pressure social media companies.
I know the British government wanted to pressure social media companies to try to suppress material by people trying to radicalise terrorists. To start with, shutting down the Internet is not the solution. The solution is, we need to talk about where the border is between hate speech and free speech. For example, a lot of this issue is that riots may come from hate speech running out of control. People can naturally turn nasty. We all have it somewhere in us to be tribal, vicious and vengeful. Sometimes it may happen because of the way social networks just allow us to retweet things, and tweaking a social network to put a delay in, to use AI to help us be more effective – if you really want to retweet, I suggest you sleep on it, and let's see if you still want to retweet tomorrow morning. You can tweak the way the social networks work.

Another problem is when all these conspiracy theories have been created very cleverly by political or commercial or criminal organisations. That is a part of cybersecurity. That is an outright deliberate attack, and cybersecurity is about attacks on the democratic processes, which can cause rioting and death. These are important cybersecurity issues. When the government wants to have processes to take things down, obviously the first suspicion is that the government is going to do that in order to stifle its opposition, and not to fight crime. That's our experience looking at the world. Shutting down the Internet as a whole is very destructive to the economy. It's very destructive to the constructive discussion about what should happen. In a way, it is a last-resort option by the government, I think, indicating that the government is too weak. Censorship in general, I think, is an indication of the weakness of a government. A strong government is one which can allow people to criticise it. A strong government allows open debate, and becomes stronger in its commitment to involve the population fairly. When governments win the trust of the public, they will become more capable of leading them.

On End to End Encryption

MediaNama: One major debate that is brewing is around end-to-end encryption, about whether it is good or bad, and there are calls in India for removing end-to-end encryption on the grounds that it allows malicious actors the freedom to operate with impunity. Traceability is a significant demand that is being made of WhatsApp, which isn't feasible with end-to-end encryption.

Tim Berners-Lee: Personally, I've always thought that end-to-end encryption is crucial, but recently, if you can point to an incident of it being a component of a genocidal wave, then it's a concern. One of the things that social networks can look at is metadata. The text of people's messages, most of the time, is very private. If the police can, using the appropriate judicial system, ask to get the metadata to see who's talked to whom – that's traditionally been provided for phone records and so on – that has been a very, very powerful tool. When you look at some of the hacks that have been done, like the Russians hacking the Trump election, there's a trail of breadcrumbs, and you can see what happened. You can learn to do this before the critical thing happens.
My suggestion is to establish good legal grounds for getting metadata and then use the metadata, because you can draw the social graph of these people. Even though you can't read the messages, you can see the time patterns and the geographical patterns of the clustering of the communication, and you can build machines which will flag things which are suspicious. Then you don't have to decrypt the messages. You do have to be able to expose the identity of the people, with appropriate legal due process.
A bushy shrub belonging to the sunflower family, stevia is also known as Stevia rebaudiana Bertoni. All 150 of the stevia species are indigenous to North and South America. At the moment, China is the main exporter of stevia products, although stevia is now produced in numerous nations. Garden centers frequently sell the plant for home growing.

In comparison to ordinary sugar, stevia is 200–300 times sweeter. To supply the same amount of sweetness as other common sweeteners, it often needs 20% less land and a lot less water.

There are eight glycosides in stevia. These are the sweet substances that have been extracted and refined from stevia leaves. These glycosides include:

- rebaudiosides A, C, D, E, and F
- stevioside

The most common of these ingredients are stevioside and rebaudioside A (reb A). Throughout this text, we will refer to both steviol glycosides and reb A as "stevia". These are extracted by collecting the leaves, drying them, extracting the water, and purifying it. Prior to being bleached or decolored, crude stevia, the processed product, frequently has an unpleasant taste and odor. The final stevia extract is produced after about 40 processing steps. Stevioside can be found in stevia leaves in amounts up to about 18%.

Quick stevia facts

- Brazil, Paraguay, Japan, and China are the main growing countries for stevia.
- Compared to ordinary sugar, the natural sweetener tastes 200–300 times sweeter.
- Because there are so few calories in each serving, stevia can be categorized as "zero-calorie."
- It has demonstrated potential health advantages as a healthy sugar substitute for people with diabetes.
- When used in moderation, the ingredients erythritol and stevia, which have been authorized for use in the United States (U.S.), don't seem to pose any health hazards.

Stevia sweeteners are marketed under several trade names, including:

- Stevia can
- Stevia Extract Straight Up

Potential health advantages

Using stevia as a sweetener has the potential to offer significant health advantages over sucrose, or table sugar. In FoodData Central (FDC), stevia is listed as "no-calorie." Although stevia may not have precisely zero calories, its calorie content is lower than that of sucrose and low enough to be considered zero.

Stevia sweeteners naturally include sweet-tasting ingredients. People who prefer foods and beverages from natural sources may benefit from this trait. Stevia is a healthy substitute for managing diabetes or losing weight due to its low-calorie content. Here are a few potential stevia health advantages.

1) Diabetes

Stevia sweeteners don't add calories or carbohydrates to the diet, according to research. Additionally, they haven't shown any impact on insulin responsiveness or blood sugar levels. This enables people with diabetes to follow a healthy diet plan and consume a larger variety of foods.

Another review evaluated the impact of stevia and placebos on metabolic outcomes in five randomized controlled studies. It found that stevia had little to no impact on body weight, blood pressure, insulin levels, and blood glucose. In one of these investigations, participants with type 2 diabetes reported that stevia caused appreciable drops in post-meal blood glucose and glucagon levels. A hormone called glucagon controls blood glucose levels, and people with diabetes frequently have problems with the mechanism that releases glucagon. When blood sugar levels rise, glucagon levels fall; this is how the glucose level is controlled.
2) Controlling weight

Overweight and obesity have a variety of causes, including physical inactivity and a rise in the consumption of foods that are rich in calories, fat, and added sugars. It has been determined that an average of 16% of the calories consumed in the American diet come from added sugars. Weight gain and worse blood glucose management have both been connected to this. Stevia has very few, if any, calories and no sugar. As a component of a well-balanced diet, it can help lower energy intake without compromising taste.

3) Pancreatic cancer

Kaempferol is one of the several sterols and antioxidant substances found in stevia. According to studies, kaempferol can cut the risk of pancreatic cancer by 23%.

4) Blood pressure

It has been discovered that certain glycosides in stevia extract can widen blood vessels. They may also enhance urine production and salt excretion. Stevia may help decrease blood pressure, according to a 2003 study. According to the study, the stevia plant may have cardiotonic properties; cardiotonic actions normalize blood pressure and heartbeat. Stevia does not appear to affect blood pressure, according to more recent studies, so this benefit has to be investigated further.

5) Diets for kids

Stevia-containing foods and drinks can help children's diets by reducing the calories from undesirable sweeteners. Thousands of items, from salad dressings to snack bars, are now available on the market that include naturally derived stevia. This accessibility enables kids to enjoy sweet foods and beverages while adjusting to a lower-sugar diet without consuming extra calories. Obesity and cardiovascular disease are associated with eating too many calories and sweets.

Allergies

In order to ascertain whether there was any reason for worry regarding the potential for allergic reactions to stevia, the European Food Safety Authority (EFSA) evaluated the available research in 2010. "Steviol glycosides are not reactive and are not converted to reactive chemicals, therefore, it is unlikely that the steviol glycosides under study could produce by themselves allergic reactions when taken in meals," the reviewers wrote in their conclusion. It is quite improbable that stevia extract, even in its most pure form, will result in an allergic reaction. Since 2008, there have been no reports of adverse reactions to stevia.

Stevia side effects

According to safety studies, stevia extract has no negative side effects. The Food and Drug Administration (FDA) treats refined steviol glycosides, which can be added to foods, as generally recognized as safe (GRAS), but it has not granted the same status to whole-leaf stevia. The stevia plant, however, can be cultivated at home, and there are numerous uses for the leaves.

Stevia was first believed to be harmful to kidney health. Since then, a study on rats has revealed that stevia leaves in supplement form may instead have properties that safeguard the kidneys and lessen the effects of diabetes. Additionally, according to recent studies, it is safe to consume the recommended dosage of the sugar substitute—or less—while pregnant.

Sugar alcohols are also present in some stevia products. Although one form of sugar alcohol, erythritol, poses less risk of symptoms than others, people who are sensitive to it may develop bloating, stomach cramps, nausea, and diarrhea. Stevia won't have any negative effects as long as it is ingested in moderation and is well purified.
The usage of stevia

In the United States, stevia sweeteners are typically used as sugar alternatives in table sugar products and low-calorie beverages. Since the middle of the 1990s, nutritional supplements made from stevia leaf extracts have been sold in the United States, and many of them combine its sweet and non-sweet elements.

Stevia sweeteners contain naturally occurring sweet ingredients. Customers who favor meals and beverages that they consider natural may benefit even more from this. Currently, stevia is a component in more than 5,000 food and beverage products throughout the world. Products all around Asia and South America employ stevia sweeteners as an ingredient, including:

- Ice cream
- Preserved foods
- Soft beverages
- Chewing gum
- Cooked vegetables

What dangers exist?

In 1991, the FDA declined to approve stevia as a food additive or as a sweetener. However, the FDA designated stevia extracts as GRAS in 2008, following the development and patenting of a purification method by Coca-Cola.

The general public, including children, can safely consume high-purity stevia extract at the recommended quantities, according to multiple international regulatory authorities. An Acceptable Daily Intake (ADI) of 4 milligrams per kilogram (mg/kg) of body weight has been set by governing bodies, including the Food and Agriculture Organization of the United Nations (FAO), the World Health Organization (WHO), and the FDA.

Acute toxicity research on stevioside revealed that it is not harmful. Numerous laboratory animals were employed in these experiments. There have been no significant warnings, contraindications, or negative responses reported. Stevia sweeteners are safe for use by people with diabetes, since they meet the purity standards for steviol glycosides set by the Joint Expert Committee on Food Additives (JECFA).

High-purity stevia extracts are used in the majority of stevia research studies. Some older studies used low-purity stevia extracts rather than high-purity extracts, which distorted the availability of reliable data. The U.S. Department of Agriculture (USDA), not the FDA, is in charge of overseeing the stevia plant. The FDA has not given the stevia plant a GRAS designation; however, this does not imply that it is intrinsically harmful. In fact, as it has been for millennia in other nations, the plant may be acquired from a variety of gardening sources in the US, grown at home, and consumed in a variety of ways.

Before they can be verified, the purported health advantages of stevia need more research. Be assured, though, that stevia is safe to eat and makes a great substitute for sugar when you need an extra sweetening boost.
The National Flood Insurance Program (NFIP) is a federal program providing flood insurance to residents of participating communities. Created in 1968, communities can voluntarily join the program, adopting baseline floodplain management regulations, and then all residents—homeowners, renters, and businesses—are eligible to purchase a flood insurance policy. The program, now housed in the Federal Emergency Management Agency (FEMA), also maps flood risks, helps educate residents about flooding, and supports flood risk reduction through grants, outreach, and incentives. In response to growing threats of climate change, the US federal government is increasingly supporting community-level investments in resilience to natural hazards. As such federal programs become more widespread, evaluating their efficiency and effectiveness becomes essential. This issue brief analyzes the Community Rating System (CRS), which is part of the National Flood Insurance Program (NFIP), by discussing if it has been effective in reducing flood losses, how it can be improved, and what lessons it has for similar types of programs. This Primer explains the main federal post-disaster programs managed by the Federal Emergency Management Agency, the Small Business Administration, and the Department of Housing and Urban Development, and offers insight on the current system and suggestions for policy reform so that disaster aid better serves those most in need. With many regions of high flood risk, primarily located along the state’s rivers and streams and their tributaries, floods in Pennsylvania can also occur in lower elevation areas—even far from rivers and streams—as the result of heavy rainfall. This risk of rainfall related flooding is getting worse in the northeast of the United States as the planet warms. This Issue Brief offers an overview of the National Flood Insurance Program (NFIP) in the Commonwealth of Pennsylvania in addition to commenting on what may change with Risk Rating 2.0, the upcoming reframing of the program. In the United States, extreme rainfall events are becoming more frequent and more intense as a result of climate change. However, there is a regulatory gap in federal policies, technical support, and guidance since no single agency is focused on stormwater flooding. This Issue Brief looks specifically at stormwater flooding (also called pluvial flooding) and how U.S. cities can manage increases in water volume, and not just water quality. The current pandemic has proven to be one of the most extensive complex catastrophic risks the global economy has ever faced. This brief provides stakeholders with a practical framework for defining a meaningful role for insurance with respect to business interruption and other risks from future pandemics. We explore three options where the property and casualty industry may be able to play a role in managing the risk of future pandemics in the United States. Investors are increasingly climate-conscious, and the SEC recently asked for input on potential changes to climate disclosure guidance. Read the most recent primer for an overview of these risks and the state of climate disclosures, along with how they might change moving forward. Flood insurance is available to residents of Washington, DC primarily through the National Flood Insurance Program. However, flood risk extends beyond the 100-year floodplain and many residential properties exposed to flooding are not insured. 
This Issue Brief, produced in partnership with the District of Columbia’s Department of Energy and Environment, explores flood risk in DC and the role of flood insurance in helping residents recover. Low- and moderate- income (LMI) households and communities suffer disproportionately from disasters, but there are a few policies or programs to help them achieve post-disaster financial resilience. This brief is intended to help policymakers begin the conversation about what new or supplemental policies could help LMI households in at-risk areas. In particular, it explores the critical role insurance can play in securing financial resilience. We are in a new era of catastrophes with the number of low probability, high-consequence (LP-HC) events increasing significantly. Due to a set of heuristics and cognitive biases, those residing in hazard-prone areas and decision-makers in the public and private sectors are underpreparing for disasters that are now occurring with more frequency and intensity. This primer characterizes the nature of these heuristics and biases and why individuals often underprepare for LP-HC events. The City of Portland, Oregon piloted an innovative flood insurance affordability program in 2017 and 2018 that included one-on-one consultations between flood insurance policyholders and an insurance agent who was an expert on flood insurance. These consultations found that roughly half of the reviewed policies had some type of error in pricing or could lower insurance costs through application of elevation certificates. This issue brief covers other findings from this program. The concept of parametric insurance, while not new, is getting increased attention as a way to provide faster and more flexible funds to victims of disasters and as a tool to provide post-disaster funds for emerging and otherwise difficult-to-insure risks. This primer explains the concept of parametric insurance with a focus on its use in providing financial protection against disasters. The Risk Center has launched a new series to better explain concepts in risk and resilience. Read our first Primer. New Jersey residents and visitors enjoy the amenities that come with its more than 1,800 miles of coastline, as well as interior rivers and other waterways. While supporting recreation, tourism, and many sectors of the economy, these waterways and coasts also introduce flood risk. In collaboration with Rutgers and The New Jersey Climate Change Resource Center, this brief covers the National Flood Insurance Program (NFIP) in the state of New Jersey. Disclosure about flood risk outside the Special Flood Hazard Area (SFHA) is limited and purchase of flood insurance is not required. We identified three drivers of flood insurance take-up outside the SFHA: (1) recent or repeated flooding outside the SFHA; (2) active and continuing outreach and education about flood risk and flood insurance; and (3) enough personal disposable income among consumers to afford flood coverage. In October 2018, the Wharton Risk Center’s Policy Incubator hosted a workshop designed to evaluate policy options for expanding the number of people with flood insurance in the United States, particularly those of low and moderate income. In this brief, we present seven approaches that workshop participants felt had the potential to generate substantial increases in take-up rates across the country. One reason that individuals do not purchase insurance is that they are strongly influenced by cognitive biases in their decision process. 
Two web-based studies reveal that individuals who experienced regret because they were uninsured at the time of a hurricane tended to purchase insurance in the next period. One way to reduce regret for these individuals is to add flood coverage to a homeowners’ policy. In a new issue brief, we examine Portland’s Flood Insurance Savings Program in detail, discussing its structure, participants, and impact on flood insurance premiums. We also identify lessons learned that may be useful to other communities struggling with flood insurance affordability and to policymakers considering NFIP reform. Drawing on our recent report, The Emerging Private Residential Flood Insurance Market in the United States, this issue brief describes the Florida market and the measures the state has taken to support its growth. In this Issue Brief, we pose three questions to guide the development of a framework for catastrophe risk management: (1) How do we harness the strengths of the private sector in financing disaster risk? (2) What are the complementary roles of the public and private sectors in promoting greater resilience? And (3) How do we effectively integrate risk reduction and risk transfer to provide effective protection against catastrophes? The costs emanating from a wildfire can be broad and impact many sectors. Depending on legal and regulatory regimes, costs can shift across different groups. In this brief, we focus on electric utilities and the share of wildfire costs that they pay in California for utility ignited wildfires. Drawing on our recent report, The Emerging Private Residential Flood Insurance Market in the United States, our newest issue brief describes the key players and structure of the residential flood market. Flood insurance in Puerto Rico has attracted media and policymaker attention since Hurricanes Irma and Maria devastated the island in late summer 2017. One reason is the incredibly low take-up rate for flood insurance, which left some residents financially vulnerable following flood damage from back-to-back hurricanes. Another is the surprising shift over the last five years from the vast majority of flood policies being written with the National Flood Insurance Program (NFIP) to instead being written by private sector insurers. Directors, executives, and managers who have already put in place a risk management strategy that enables them to take deliberative actions in response to an adverse event are better prepared to recover from that disruption and stay true to their firm’s core values. From our study of large publicly-traded firms in the U.S. and abroad, we have identified a set of management practices for company leaders to overcome their systemic decision biases and reduce the likelihood and the impacts of large-scale disruptions. Six decision-making biases cause individuals, communities and organizations to underinvest in protection against low-probability, high-consequence events. We propose a behavioral risk audit that recognizes that these biases are difficult to overcome but that they can be used to develop strategies to improve individuals’ decision making processes in preparing for disasters before they occur. The Wharton Risk Management and Decision Processes Center is undertaking a study funded by the Department of Homeland Security’s Critical Infrastructure Resilience Institute (CIRI). The purpose of the project is to identify barriers and opportunities for improving infrastructure insurance and resilience for catastrophic events and disruptions. 
This brief summarizes the key findings and recommendations upon completion of the first two phases of the project. In connection with the National Flood Insurance Program’s anticipated reauthorization in 2017, Congress is looking at several proposals that would address the program’s Increased Cost of Compliance (ICC) coverage, expanding its eligible uses and giving policyholders more funds to implement qualifying risk-reduction measures. In this policy brief, we examine ICC claims for single-family homes from 1997 to 2014 and report on our findings from conversations with floodplain managers in several states. Our analysis provides context for ongoing debates in Congress and highlights some of the key reasons the program is not more widely used. We study the relationship between disaster risk reduction and insurance coverage to assess the presence of moral hazard for two different natural hazards with survey data from Germany and the United States. The results show that moral hazard is absent. Nevertheless, adverse risk selection may be present. This has significant policy relevance such as opportunities for strengthening the link between insurance and risk reduction measures and the use of risk-based insurance premiums. A well-designed insurance program can play an important role in linking investment in cost-effective reduction measures with financial protection should a disaster occur. Measures to increase resilience to floods include improved accuracy of flood maps and communication on flood risk, elevation certification for at-risk structures, vouchers and/or other financial aid for homeowners to purchase flood insurance and undertake loss-reduction measures that will also address affordability issues, and government acquisition of at-risk properties for open space and flood buffer zones. Our study on Charleston County, South Carolina finds that if insurance premiums reflected risk, the price of flood insurance for many properties in Special Flood Hazard Areas in Charleston County, South Carolina could more than double over their current subsidized premiums. Elevating a house a few feet can decrease the homeowner’s risk-based premium by 70 to 80 percent, saving thousands of dollars annually. We find that coupling vouchers with mitigation loans to elevate homes can reduce government expenditures by more than half over a voucher program that does not require mitigation when the cost of elevating a house is about $25,000 in high hazard A zones. In the coastal V zones, cost savings can be achieved even when the cost of elevation is as high as $75,000. We find that over the studied period, in FEMA-mapped 100-year floodplains (SFHAs), the average claim rate – defined as the ratio of paid claims to the number of policies-in-force – is 1.55 percent. Surprisingly, outside the 100-year floodplains, the average claim rate is also higher than 1 percent at 1.27 percent, with no statistically significant difference in the rates across the two groups. This higher-than-expected claim rate in non-SFHAs could reflect inaccurate and out-of-date flood maps. It could also be due to adverse selection: only the riskiest properties in FEMA-defined non-SFHAs are insuring in these areas. Our results show that the majority of claims are for modest amounts. Half of claims over the three decades of data we analyzed are for less than 10 percent of the building’s value. Only a small portion of claims exceed three-quarters of a building’s value. 
Our findings suggest that buyers of floodplain properties have a limited awareness about flood hazards, despite the federal requirement for flood insurance for floodplain properties with a federally-backed mortgage. Our results suggest that it is the result of being flooded, rather than knowing there is a potential to be flooded, that affects property prices (“seeing is believing”). Federal disaster relief potentially creates moral hazard: receiving or expecting to receive money from the government after a disaster might reduce demand for insurance, resulting in even greater need for government relief when another disaster hits. Overall, we find that federal disaster assistance grants result in decreased demand for insurance. Low‐interest SBA disaster loans have no systematic impact on insurance purchase decisions. Six months after Hurricane Sandy, we surveyed over 1,000 homeowners in New York City who live in a flood-prone area about their flood risk perceptions and flood insurance purchases. 44% of respondents stated they purchased flood insurance because it was mandatory. Only 21% bought flood insurance voluntarily, 33% did not have coverage, and 2% did not know whether they had flood coverage. People tend to overestimate their flood probability and underestimate their potential flood damage. The U.S. Terrorism Risk Insurance Act (TRIA) was established in 2002 as a temporary measure to make terrorism insurance widely available to corporations. TRIA will expire at the end of 2014 unless extended by the federal government. If extended, the government might require insurers to assume more risk which could increase prices. We find that under current market conditions, firms’ demand for terrorism insurance is strong and is not very sensitive to gradual price changes. A midsize community of 50,000 people that experiences a moderate hail storm could expect to reduce losses by approximately $4 to $8 million by adopting and enforcing appropriate building codes. The President signed the Biggert-Waters Flood Insurance Reform Act with overwhelming bipartisan support from Congress. This bill included provisions for risk-based pricing for flood insurance to improve the NFIP’s financial basis. Some legislators are now wavering on their commitment to risk-based pricing because of concerns their constituents will not be able to afford flood insurance. We propose a means-tested voucher program coupled with a loan program for investments in loss reduction measures, made affordable by reductions in the NFIP risk-based premiums. The NFIP’s Community Rating System (CRS) is a voluntary incentive program that encourages community floodplain management activities that exceed the minimum NFIP requirements. Today, only 28 of the 1,466 NFIP communities in New York State participate in the CRS. This 1.9% participation rate is three times lower than the national average. The Rockaway Peninsula (RP) is an 11-mile community in New York City with a population of 103,825 that was severely impacted by Hurricane Sandy. Data analysis shows that only 14.4 percent of the housing units in the RP had flood insurance in 2012. We propose three guiding principles to make insurance more transparent and equitable, and to encourage investment in protective measures: Principle 1: Premiums reflecting risk; Principle 2: Dealing with equity and affordability issues; Principle 3: Multi-year insurance. The National Flood Insurance Program (NFIP) is a natural starting point for multi-year insurance tied to the property, not the homeowner. 
The more effective an investment is in preventing harm, the more difficult it is for decision makers to remember the need for the investments. It is the experience of real — not imagined — losses that seemed essential for convincing decision makers of the value of protective investments… Following the devastating storm surge and flooding from Hurricane Sandy, concerns have been raised about the status of flood insurance in the United States. Our analysis shows that many homeowners who sustained flood damage from Sandy did not have a flood insurance policy… Rationales for voting include self-interest, duty to nation or group, and duty to humanity. People may believe that voting in self-interest is a rational way to pursue their interests, however this is not rational, because the probability of a single vote having an effect is extremely low. Americans who believe it is their duty to vote on the basis of self-interest tend to oppose taxes, but favor spending that they see as benefiting themselves personally A U.S. law, “Implementing Recommendations of the 9/11 Commission Act of 2007,” requires that all cargo bound for the U.S. must be scanned by non-intrusive technology to detect radiological contraband before the cargo is loaded onto a ship at an international port. The operational feasibility of 100-percent scanning has been questioned by many government officials and private sector professionals involved with managing supply chains. Our study compares the operational feasibility of two types of inspection protocols designed to detect the presence of nuclear devices… Our analysis of the entire NFIP portfolio between 1978 and 2008 reveals that in some states, policyholders have paid as much as 15 times in premiums as they have collected in claims; in other states, policyholders have received 5 times more in insurance claims than they paid in premiums over this period… Many people who live in flood-prone areas had not purchased flood insurance or had let their policies lapse. The average tenure of flood insurance in the U.S. is between 2 and 4 years… The cost and availability of property insurance is becoming a significant problem for some residents in high-risk coastal areas of the United States…
Almost two weeks ago, Jagadeesh asked me whether I could explain how to solve the sequencing problem of scheduling n jobs on 2 machines using Johnson's algorithm. I tried several approaches and found that Excel Solver is still the best approach. However, we may need to change the job sequence returned by Excel Solver slightly when there are several possible sequences that lead to an optimized result.

What is Johnson's algorithm?

For the task of scheduling jobs in two work centers, the primary objective of Johnson's algorithm is to find an optimal sequence of jobs that reduces both the makespan and the amount of idle time between the two work centers. This kind of problem has the following preconditions: 1) the time for each job must be constant; 2) the job sequence must not have an impact on job times; 3) all jobs must go through the first work center before going through the second work center; 4) there must be no job priorities.

Now I'd like to take an example to explain how Johnson's algorithm works. Suppose that Andrew and Julie work together to write reports for projects every month. They forgot to check their calendar this month, and it turned out that they need to finish as soon as possible. Assume that Andrew writes and edits reports while Julie collates data and draws all the necessary graphs. Julie starts her work on a report as soon as Andrew finishes his part, and Andrew works continuously. Times for the reports (in hours) are as follows. What is the order of the tasks using Johnson's rule?

Solve using Johnson's rule

First of all, we need to list the jobs and their times at each work center. Since the above table already gives us the required information, I will move forward to the next step. The smallest time in work center B (Julie in our problem) is located in Job C (1 hour). The smallest time in work center A (Andrew in our case) is located in Job B (3 hours) after eliminating Job C. Therefore, Job B will be scheduled first and Job C will be scheduled last. Finding the next two smallest times after eliminating Jobs B and C, we get the sequence below. Please note that this process should be repeated until only one job or no job is left. The only job left to be considered is Job D, and the final job sequence is as below.

Logic to get the target cell for the above problem

There are three elements essential for Excel Solver: the target cell, the by changing cells, and the constraints. If you have already read this article (Deal with sequencing problems using Excel Solver), you will know that the job sequence is our by changing cells and the above preconditions can be our constraints. The only remaining question is how to get the total time, or makespan.

The left panel in Figure 2.1 shows the job sequence for the above problem and the corresponding times. The right panel illustrates the total time. One square represents one hour. For example, Andrew needs 3 hours to finish Job B, so I put 3 yellow squares at the beginning of the second row. Since Julie needs 5 hours to finish Job B and she can only start after Andrew finishes Job B, 5 yellow squares were placed after 3 white squares in the third row. White squares represent idle time. Andrew can only be idle after he finishes all five jobs, and when he is idle, Julie is working. Therefore, the sum of the total hours that Julie spends working and the total hours that Julie is idle determines the total time (makespan). We already know the total time that Julie will spend on work from the problem. The question here is how to calculate Julie's idle time.
First kind of situation

Look at Figure 2.1. When Andrew works on his first job, Julie will be idle. Thus, the time of the first job for Andrew should be taken into consideration when computing Julie's idle time. In the 9th hour, Julie is idle again, and that state lasts for 3 hours, since she has already finished her first job and has to wait for Andrew to finish the second job. So it seems that the time of the second job for Andrew minus the time of the first job for Julie will be the idle time for Julie after she finishes her first job. Similarly, we can use the same logic to get the length of Julie's other idle periods.

Second kind of situation

So far, it looks like we already have the logic and can set up our model. But wait. What if Andrew starts the nth job while Julie is still working on her (n-1)th job? Figure 2.2 gives you another job sequence. Look at Job A and Job C. 5 hours (that Andrew needs to finish Job C) minus 2 hours (that Julie needs to finish Job A) equals 3 hours. Per our previous logic, Julie should be idle for 3 hours after she finishes Job A. But if you look at Figure 2.2, you will find that Julie is idle for only 1 hour. What happened? It's because when Andrew starts on Job C, Julie is still working on Job E. 4 hours (that Andrew needs to finish Job A) minus 6 hours (that Julie needs to finish Job E) equals -2 hours. We need to add -2 and 3 together to get the right idle time.

Third kind of situation

Let's move on and see the last kind of situation. 7 hours (that Andrew needs to finish the 5th job) minus 1 hour (that Julie needs to finish the 4th job) equals 6 hours. 5 hours (that Andrew needs to finish the 4th job) minus 2 hours (that Julie needs to finish the 3rd job) is 3 hours. We don't need to add 3 into 6, since 3 is greater than 0 per the first kind of situation. Therefore, the idle time should be 6 hours. But Figure 2.3 shows that Julie finishes Job C 5 hours earlier than Andrew finishes Job D. What's wrong? Look at Figure 2.3 again: Julie finishes Job A (the third job) 1 hour later than Andrew finishes Job C (the fourth job). It means that Julie's idle time is -1 hour after she finishes Job A. If we add 6 and -1 together, we will get 5 hours. So what we need to add is the negative idle time. Let's start from the beginning. The sum of (-3) and (-1) equals -4. This is consistent with Figure 2.3: indeed, Julie finishes the second job 4 hours later than Andrew finishes the third job. And -4 + 3 equals -1, which is consistent with what we discussed in the previous paragraph.

In summary, there is a chain. We need to start from the beginning and compute the idle time one job after another. When computing, if the previous idle time is less than 0, we need to add the previous idle time to the result of the subtraction. One more thing I have to remind you of is that negative idle time is only used to compute the idle time for the next job. When computing the target cell, we need to treat it as 0, since there is no white square.

In summary, here is how to get the idle time for work center B (Julie for this problem):

- The time of the first job in work center A (Andrew in our case) is the default idle time.
- Calculate (time of the nth job in work center A) – (time of the (n-1)th job in work center B) for n >= 2.
- Add the result from step 2 to the previous idle time if the idle time in work center B after the (n-1)th job finishes is less than 0. Otherwise, the result from step 2 is the idle time.
- Repeat steps 2 and 3 until we reach the last job in the sequence.
- If the idle time computed per the above logic is less than 0, it will be treated as 0. Otherwise, leave it as it is.
- Add the default idle time and the idle times obtained from step 5 to calculate the total idle time.

Case 1: Get the order of the tasks for students who work together to write reports

The problem here is the same as the above problem about writing reports.

Set up the model

First of all, we need to list the jobs and times in range B2:D7, and in range A3:A7 I will give each job a number. These numbers will be the values of our by changing cells. Our changing cells are range C10:C14. The formula "=VLOOKUP($B10,$A$2:$D$7,3,FALSE)" was copied from cell C10 into range C11:C14 to get the time of each job for Andrew. The formula "=VLOOKUP($B10,$A$2:$D$7,4,FALSE)" was copied from D10 into range D11:D14 to get the time that Julie needs to finish each job. The formula "=VLOOKUP($B10,$A$2:$D$7,2,FALSE)" was copied from A10 into A11:A14 to return the job name per the job number in column B. The default idle time is C10. Formulas to compute idle time are listed in range G11:G14. The formulas in range I10:I14 return 0 if the idle time is less than 0. The formula "=SUM(D10:D14)+SUM(H10:H14)" in cell D16 is used to get our objective – the makespan. Per Johnson's rule, C10 and D14 should contain the smallest times for Andrew and Julie respectively; therefore, the SMALL function was used to retrieve those two smallest times.

Fill in the Solver Parameters dialog box as shown in Figure 3.2. If you don't know how to open this dialog box, please read this article. The values in the by changing cells should be different from each other. At the same time, they should be integers between 1 and 5. The other two constraints are about the two smallest times per Johnson's rule. After clicking on Solve in the Solver Parameters dialog box, Excel will return the results shown in Figure 3.3. If they follow the sequence B -> E -> D -> A -> C, they can finish the reports in the least time – 28 hours – and Julie will be idle for a total of 11 hours. By comparing against the results in the first part, you will find that the results we just got match the results obtained with Johnson's rule. It looks perfect. Now let's move forward and see another case.

Case 2: Get the order of the tasks for companies

A company is faced with seven tasks that have to be processed through two work centers. Assume that work "Center I" works continuously and that they are using Johnson's rule. The data appear below in hours. What is the sequence of tasks?

|Projects||Center I||Center II|

Set up the model

The way to set up a model for this problem is similar to that of Case 1. Range B12:B18 holds our by changing cells. The VLOOKUP function was used to retrieve job names into range A12:A18 per the values of the by changing cells, and also to retrieve the time of each job into ranges C12:C18 and D12:D18 for work center A and work center B respectively. Formulas used to get the idle time are listed in ranges G12:G18 and I12:I18. Our target cell is cell D20. The SMALL function was used to retrieve the two smallest times, which will be used as constraints. Fill in the Solver Parameters dialog box as shown in Figure 4.2. After clicking on Solve, Excel will return the results shown in Figure 4.3. The minimized makespan is 25.83 hours, and the total idle time for work center B is 1.66 hours. However, if you look at the results closely, you will find that the value in cell C14 is greater than that in cell C15.
This violates Johnson's rule of putting smaller numbers first for work center A. You can try to add constraints such as "C13 <= C14", "C14 <= C15", and "D17 <= D16" to force Excel to fix this problem automatically, but I cannot guarantee that such constraints are always right or that Solver will return a solution. Plus, it will take Excel more time to return a solution. Therefore, I recommend manually changing the sequence slightly. Figure 4.4 shows the final results after exchanging the values in cells C14 and C15. The minimized makespan is still 25.83, and this final sequence is the same as the one obtained with Johnson's rule.
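Since the Solver model above essentially reproduces Johnson's rule plus the idle-time chain summarised earlier, a short script is a convenient way to cross-check a Solver result, including tie-related reorderings like the C14/C15 swap. The sketch below is not part of the original workbook (which uses Excel formulas and Solver); it is a minimal Python illustration, and because the Case 1 time table appears only as an image in the source, the job times used here are reconstructed from the figures discussed in the text and should be treated as illustrative.

```python
def johnson_sequence(jobs):
    """Order jobs by Johnson's rule for a two-work-center flow shop.

    jobs maps a job name to a (time_in_A, time_in_B) pair.
    """
    front, back = [], []
    remaining = dict(jobs)
    while remaining:
        # Find the smallest remaining time across both work centers.
        job, (a, b) = min(remaining.items(), key=lambda kv: min(kv[1]))
        if a <= b:
            front.append(job)    # smallest time is in work center A: schedule early
        else:
            back.insert(0, job)  # smallest time is in work center B: schedule late
        del remaining[job]
    return front + back


def makespan_and_idle(sequence, jobs):
    """Simulate the two work centers for a given sequence.

    Work center A runs continuously; work center B starts a job as soon as
    A has finished it and B itself is free. Returns (makespan, idle time in
    work center B), mirroring the idle-time chain summarised above.
    """
    a_finish = b_finish = b_idle = 0
    for job in sequence:
        a_time, b_time = jobs[job]
        a_finish += a_time                 # work center A never waits
        start_b = max(a_finish, b_finish)  # B waits for A and for itself
        b_idle += start_b - b_finish       # only positive gaps accumulate
        b_finish = start_b + b_time
    return b_finish, b_idle


# (Andrew hours, Julie hours) per job, reconstructed from the worked example.
jobs = {"A": (4, 2), "B": (3, 5), "C": (5, 1), "D": (7, 3), "E": (8, 6)}

sequence = johnson_sequence(jobs)
print(sequence)                           # ['B', 'E', 'D', 'A', 'C']
print(makespan_and_idle(sequence, jobs))  # (28, 11) -- matches the Solver result
```

For Case 2, feeding the seven Center I/Center II times into the same two helpers gives an independent sequence and makespan to compare against Figures 4.3 and 4.4; if the makespans agree while the sequences differ, the Solver result differs only in how it ordered tied jobs.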
PostgreSQL is a well-known relational database management system that boasts a secure environment for developers and users. But as remote work continues to be a part of every business's new normal, IT professionals face a new set of challenges when it comes to managing the security and accessibility of their servers.

Using a bastion server to access a PostgreSQL database adds an extra layer of security. But because bastion servers should act as firewalls, it's necessary to use a Secure Shell (SSH) tunnel to lower the level of private network exposure. Many PostgreSQL database management tools can automate this connection process, but interacting with the shell requires the manual creation of an SSH tunnel. Understanding the process and precautions can help your company maintain secure databases.

Benefits of Creating and Using an SSH Secure Shell Tunnel

SSH tunneling has been used in network communications for over thirty years, and it is just as relevant for today's computing needs. Developers consistently turn to SSH protocols to manage database access on all kinds of servers, including PostgreSQL. Secure remote access is critical today, especially when 50% of workplace devices are mobile (and mobile devices are among the most vulnerable to cyberattacks). This makes it even more important that organizations can remotely manage devices and user access to their networks.

Companies handling large amounts of data need to give employees express access to major databases and take a proactive approach to database security. Using SSH protocols is the number one way to provide secure access to users on remote servers. It also makes it simpler to authenticate automated programs, so businesses can run their daily tasks more efficiently.

As security standards finally catch up to mobile devices and remote servers, we can expect encryption and authentication protocols to improve as well. Flexibility and interoperability are the future of IoT security, but SSH protocols will continue to be a standard component for the foreseeable future.

How Does SSH Work?

SSH protocols allow you to remotely control access to your servers across the internet in a secure environment. Not only is the initial connection highly secured, but so is the entire communication session between the two parties.

Users can log in to a remote server using OpenSSH or other SSH clients. The server looks for an open port connection, and once it finds one, the server must authenticate the connection. It determines whether a secure environment has been established for communication and file transfer. If the connection is initiated by the client, the client must authenticate the server in addition to providing its credentials.

When both parties are authenticated and the connection is established, the SSH encryption protocol ensures the continued privacy of all data transmissions between the server and the client. There are three techniques SSH tunnels use to secure connections:

- symmetric encryption
- asymmetric encryption
- hashing

Symmetric encryption is used for the duration of the connection to keep communication secure. Both client and server share the same secret key to encrypt and decrypt data. This algorithm is secure because an actual key is never really exchanged; rather, both sides share public information from which they can independently derive the key.

Asymmetric encryption, in contrast, requires both a public and a private key.
It is used at the beginning of the tunneling process to authenticate the servers. The server will use a public key to encrypt data, and the client must decrypt it with the matching private key. Since a particular private key is the only thing that can decrypt messages encrypted with the corresponding public key, this helps authenticate the parties seeking a connection.

Cryptographic hashing creates a unique signature for a set of data. This is helpful in an SSH connection because it allows the server to know whether a MAC is acceptable. A matching hash can only be created by a piece of data identical to the one that created the original; thus, hashing can be used to check input and ensure it is correct.

After the encryption is established, an authentication process takes place. The simplest form of authentication is using a password. Although this is easy for clients, it is also very easy for malicious automated scripts, and even an encrypted password has limited complexity, which makes it insecure. Using asymmetric keys instead of passwords is a common measure. Once a public key is established, the client must provide the private key that goes with that public key. Without this combination, authentication fails. This helps ensure the encrypted data cannot be decrypted by an intruder.

The entire authentication process is negotiated during setup, and only parties with complementary keys can communicate. This also makes the sign-in process quick, so it's optimal for automated procedures. The authentication process will vary based on the following factors:

- the kind of database being protected
- the relationship and location of the server
- the user's permissions within the server

The great thing about using an SSH protocol is that the entire process is encrypted and highly secured.

Setup SSH Tunnel

If you want to interact with a database using the shell, you must know how to create an SSH tunnel. While using an automated SSH management application can be convenient, many database admins prefer to set up database management tools from scratch, because interacting with the shell is the best way to debug, audit, and have maximal control. Using an SSH tunnel is a very secure way to open up a session without fear of losing data or getting hacked (as long as you close your sessions and monitor your servers). Here is how to create an SSH tunnel:

Verify the Bastion Server

To create an SSH tunnel, you need to know the hostnames of the bastion server and the PostgreSQL database, as well as your username on the bastion server. Run the command:

$ ssh <username>@<bastion_server>

Then enter your password when the prompt appears.

Open SSH Tunnel

Open the tunnel using this command:

$ ssh -L localhost:port_number:<sql_server>:port_number <username>@<bastion_server>

Leave the window open to maintain the connection. The two port numbers are for your computer and the remote server, respectively.

Verify your Connection

To make sure that your tunnel is open and connected to the PostgreSQL server, use the following command:

$ psql --port=X --host=localhost -c "SELECT * FROM pg_catalog.pg_tables"

See if the tables are returned. If they are, then your connection was successful and secure.
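The workflow above drives the tunnel from the shell. If you prefer to script the same open-and-verify check, a sketch along the following lines is possible. It is not part of the original instructions: it assumes the third-party sshtunnel and psycopg2 packages are installed, and the hostnames, ports, usernames, credentials, and key path are placeholders to adapt to your environment.

```python
# A sketch of the tunnel-and-verify flow driven from Python instead of the
# shell. Assumes `pip install sshtunnel psycopg2-binary`; all names below
# are placeholders.
from sshtunnel import SSHTunnelForwarder
import psycopg2

with SSHTunnelForwarder(
    ("bastion.example.com", 22),                        # bastion server
    ssh_username="deploy",
    ssh_pkey="/home/deploy/.ssh/id_ed25519",            # key-based auth, as recommended below
    remote_bind_address=("sql-server.internal", 5432),  # PostgreSQL behind the bastion
    local_bind_address=("localhost", 6543),
) as tunnel:
    # Anything sent to localhost:6543 is forwarded through the encrypted tunnel.
    conn = psycopg2.connect(
        host="localhost",
        port=tunnel.local_bind_port,
        dbname="appdb",
        user="app_user",
        password="app_password",
    )
    try:
        with conn.cursor() as cur:
            # Same sanity check as the psql command above.
            cur.execute("SELECT schemaname, tablename FROM pg_catalog.pg_tables LIMIT 5")
            for row in cur.fetchall():
                print(row)
    finally:
        conn.close()
```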
A couple of optional steps can increase the efficiency of your access from the bastion host to the SQL server. Keep in mind, though, that automating access all the time is not a secure strategy. It is wise to always log in whenever you want to open an SSH tunnel, to keep sessions completely private.

Create User Profile

To avoid having to enter passwords every single time, you can create a profile on the bastion server. To do this, add your SSH key to the server using the following command:

$ ssh-copy-id <username>@<bastion_server>

Then your key will be used automatically anytime you want to log in. This has both positive and negative aspects. It's more convenient, but you must ensure you have other security protocols in place to protect the machine from malware or theft. If someone else gains access to your machine, they might be able to log in to your servers automatically because your key is saved.

Update your SSH Config File

It can be cumbersome to remember the exact names and port numbers of the servers you want to connect to. Keeping track of hostnames and authenticating yourself with your username is part of the SSH tunneling procedure, and part of what makes the SSH connection secure in the first place. But there is a way to configure your setup to make access quicker and easier without fully automating the process. You can add an entry to your SSH configuration file:

- Host bastion-production
- HostName <bastion_server>
- User <username>
- LocalForward localhost:port_number <sql_server>:port_number

Then run the command:

$ ssh bastion-production

Now you can connect in just one step.

SSH Privacy Tips

It is imperative to keep your databases secure. Data is the most valuable currency in the world, and with the rise of ransomware and other cybercrime, failing to protect customer data is an unacceptable lapse. The whole point of using the bastion server in the first place is to protect your database; however, a few missteps can leave your servers wide open for anyone to come in. Here are some tips for staying safe while using SSH tunnels to connect to PostgreSQL servers:

Properly Authenticate Remote Workers

If your organization uses SSH to connect remotely, then everyone must know the basics of digital hygiene. Use key-based authentication that is protected by a strong passphrase. Two-step verification for every login is also a good place to start, as well as ensuring that public SSH keys are authentic. Limit SSH logins only to those who need them, and set privileges that reflect each user's tasks.

Disable Port Forwarding

Only around half of developers enforce port-forwarding prevention procedures. Port forwarding can leave you open to encrypted communications with unapproved users and servers, letting hackers walk right into your database. Filter all of your connections through the bastion server and consider using port knocking before allowing a connection.

Monitor and Audit

Make sure you are running the latest software and that your SSH connections are up to the most recent compliance standards. Use a continuous monitoring tool in addition to regular manual audits. Limit your connections to only those which are necessary, and be sure to check for any changes to your configuration settings that were not approved beforehand. By keeping an eye on who is logging in and what kind of activity they are engaged in, you can limit the attack surface and make it easy to pinpoint vulnerabilities from the start.
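As one concrete way of keeping an eye on who is logging in, a small script can summarise accepted SSH logins from the bastion's authentication log. This is not from the original article; the log path and message format below follow common OpenSSH-plus-syslog defaults on Debian-style systems and may differ on your distribution.

```python
# Minimal login audit: count accepted SSH logins per user, source IP, and
# auth method. Assumes a typical Debian/Ubuntu sshd + syslog setup, where
# /var/log/auth.log contains lines such as
#   "... sshd[1234]: Accepted publickey for deploy from 203.0.113.7 port 52344 ssh2"
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"   # adjust for your distribution
pattern = re.compile(r"Accepted (publickey|password) for (\S+) from (\S+)")

logins = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            method, user, ip = match.groups()
            logins[(user, ip, method)] += 1

for (user, ip, method), count in logins.most_common():
    print(f"{user:<15} {ip:<18} {method:<12} {count}")
```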
Although automated SSH programs are useful for recurring tasks and other administrative system processes, opening an SSH tunnel manually can be beneficial: you can easily identify all traffic and communication, which is harder when an automated tool or client does the work. If you use PostgreSQL or any other SQL database management system, you need to know how to use SSH tunneling. It can be crucial for monitoring and managing system processes and data transfers. SSH makes for a highly secure connection, and following the extra precautions above will help maintain the integrity of your server connections.
Tags: database security, secure shell, ssh
Last modified: September 16, 2021
<urn:uuid:e4fc0541-5758-4a1c-a4ff-fa06022efbd9>
CC-MAIN-2022-33
https://codingsight.com/connecting-a-bastion-server-to-a-postgresql-server-via-secure-shell-tunnel/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00005.warc.gz
en
0.905738
2,250
2.78125
3
China’s breakneck growth over the last four decades erected soaring cities where there had been hamlets and farmland. The cities lured factories, and the factories lured workers. The boom lifted hundreds of millions of people out of the poverty and rural hardship they once faced. Now those cities face the daunting new challenge of adapting to extreme weather caused by climate change, a possibility that few gave much thought to when the country began its extraordinary economic transformation. China’s pell-mell, brisk urbanization has in some ways made the challenge harder to face. No one weather event can be directly linked to climate change, but the storm that flooded Zhengzhou and other cities in central China last week, killing at least 69 as of Monday, reflects a global trend that has seen deadly flooding recently in Germany and Belgium, and extreme heat and wildfires in Siberia. The flooding in China also highlights the environmental vulnerabilities that accompanied the country’s economic boom and could yet undermine it. China has always had floods, but as Kong Feng, then a public policy professor at Tsinghua University in Beijing, wrote in 2019, the flooding of cities across China in recent years is “a general manifestation of urban problems” in the country. The vast expansion of roads, subways and railways in cities that swelled almost overnight meant there were fewer places where rain could safely be absorbed — disrupting what scientists call the natural hydrological cycle. Faith Chan, a professor of geology with the University of Nottingham in Ningbo in eastern China, said the country’s cities — and there are 93 with populations of more than a million — modernized at a time when Chinese leaders made climate resiliency less of a priority than economic growth. “If they had a chance to build a city again, or to plan one, I think they would agree to make it more balanced,” said Mr. Chan, who is also a visiting fellow at the water@leeds Research Institute of the University of Leeds. China has already taken some steps to begin to address climate change. Xi Jinping is the country’s first leader to make the issue a national priority. As early as 2013, Mr. Xi promised to build an “ecological civilization” in China. “We must maintain harmony between man and nature and pursue sustainable development,” he said in a speech in Geneva in 2013. The country has nearly quintupled the acreage of green space in its cities over the past two decades. It introduced a pilot program to create “sponge cities,” including Zhengzhou, that better absorb rainfall. Last year, Mr. Xi pledged to speed up reductions in emissions and reach carbon neutrality by 2060. It was a tectonic shift in policy and may prove to be one in practice, as well. The question is whether it is too late. Even if countries like China and the United States rapidly cut greenhouse gases, the warming from those already emitted is likely to have long-lasting consequences. Rising sea levels now threaten China’s coastal metropolises, while increasingly severe storms will batter inland cities that, like Zhengzhou, are sinking under the weight of development that was hastily planned, with buildings and infrastructure that were sometimes shoddily constructed. Even Beijing, which was hit by a deadly flash flood in 2012 that left 79 dead, still does not have the drainage system needed to siphon away rainfall from a major storm, despite the capital’s glittering architectural landmarks signifying China’s rising status. 
In Zhengzhou, officials described the torrential rains that fell last week as a once-in-a-millennium storm that no amount of planning could have prevented. Even so, people have asked why the city’s new subway system flooded, trapping passengers as water steadily rose, and why a “smart tunnel” under the city’s third ring road flooded so rapidly that people in cars had little time to escape. The worsening impact of climate change could pose a challenge to the ruling Communist Party, given that political power in China has long been associated with the ability to master natural disasters. A public groundswell several years ago about toxic air pollution in Beijing and other cities ultimately forced the government to act. “As we have more and more events like what has happened over the last few days, I do think there will be more national realization of the impact of climate change and more reflection on what we should do about it,” said Li Shuo, a climate analyst with Greenpeace in China. China’s urbanization has in some ways made the adjustment easier. It has relocated millions of people from countryside villages that had far fewer defenses against recurring floods. That is why the toll of recent floods has been in the hundreds and thousands, not in the millions, as some of the worst disasters in the country’s history were. The experience of Zhengzhou, though, underscores the extent of the challenges that lie ahead — and the limits of easy solutions. Once a mere crossroads south of a bend in the Yellow River, the city has expanded exponentially since China’s economic reforms began more than 40 years ago. Today, skyscrapers and apartment towers stretch into the distance. The city’s population has doubled since 2001, reaching 12.6 million. Zhengzhou floods so frequently that residents mordantly joke about it. “No need to envy those cities where you can view the sea,” read one online comment that spread during a flood in 2011, according to a report in a local newspaper. “Today we welcome you to view the sea in Zhengzhou.” In 2016, the city was one of 16 chosen for a pilot program to expand green space to mitigate flooding — the “sponge city” concept. The idea, not unlike what planners in the United States call “low-impact development,” is to channel water away from dense urban spaces into parks and lakes, where it can be absorbed or even recycled. 
Yu Kongjian, the dean of the School of Landscape Architecture at Peking University, is credited with popularizing the idea in China. He said in a telephone interview that in its rapid development since the 1980s, China had turned to designs from the West that were ill-suited for the extremes that the country’s climate was already experiencing. Cities were covered in cement, “colonized,” as he put it, by “gray infrastructure.” China, in his view, needs to “revive ancient wisdom and upgrade it,” setting aside natural spaces for water and greenery the way ancient farmers once did. Under the program, Zhengzhou has built more than 3,000 miles of new drainage, eliminated 125 flood-prone areas and created hundreds of acres of new green spaces, according to an article in Zhengzhou Daily, a state-owned newspaper. One such space is Diehu Park, or Butterfly Lake Park, where weeping willows and camphor trees surround an artificial lake. It opened only last October. It, too, was inundated last week. “Sponges absorb water slowly, not fast,” Dai Chuanying, a maintenance worker at the park, said on Friday. “If there’s too much water, the sponge cannot absorb all of it.” Even before this past week’s flooding, some had questioned the concept. After the city saw flooding in 2019, the China Youth Daily, a party-run newspaper, lamented that the heavy spending on the projects had not resulted in significant improvements. Others noted that sponge cities were not a panacea. They were never intended for torrential rain like that in Zhengzhou on July 20, when eight inches of rain fell in one hour. “Although the sponge city initiative is an excellent sustainable development approach for stormwater management, it is still debatable whether it can be regarded as the complete solution to flood risk management in a changing climate,” said Konstantinos Papadikis, dean of the School of Design at Xi’an Jiaotong-Liverpool University in Suzhou. The factories that have driven China’s growth also pumped out more and more of the gases that contribute to climate change, while also badly polluting the air. Like countries everywhere, China now faces the tasks of reducing emissions and preparing for the effects of global warming that increasingly seem unavoidable. Mr. Chan, the professor, said that in China the issue of climate change has not been as politically polarizing as in, for example, the United States. That could make it easier to build public support for the changes local and national governments have to make, many of which will be costly. “I know for cities, the questions of land use are expensive, but we’re talking about climate change,” he said. “We’re talking about future development for the next generation or the next, next generation.” Li You contributed research.
<urn:uuid:481e5425-4c2c-4c3c-9409-b16ed3b2e137>
CC-MAIN-2022-33
https://citizen-movement.uk/energy-environment/as-china-boomed-it-didnt-take-climate-change-into-account-now-it-must/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00403.warc.gz
en
0.963069
2,146
3.40625
3
This is Part 3 of our series explaining the Saving Clause in the Australia / US tax treaty. In Part 1 we saw how international tax works for 90% of the world’s population: income sourced in the country where you live is taxed only by that country. Income from elsewhere is governed by the treaty and generally taxed by the source country – with a tax credit in the resident country if it is also taxed there. In Part 2 we saw how the Saving Clause works in US tax treaties: US citizens are subject to US tax wherever they live due to the unique practice of Citizenship Based Taxation; the Saving Clause allows the US to tax its citizens as if most of the treaty did not exist, allowing the US to tax foreign-source income of foreign residents. The Saving Clause allows the US to reach into the Australian tax base and tax the Australian source income of Australian resident taxpayers. This erodes the ability of the affected US Persons to take advantage of Australian public policy and tax breaks encouraging retirement savings and local investment. The Saving Clause, and the US practice of CBT more generally, frustrates Australian domestic policy by allowing a foreign government to apply its own idiosyncratic tax rules to income earned on Australian soil by Australian residents. In the long run, this will disadvantage the affected US Persons and make them more likely to require Australian government assistance in the form of the Age Pension and other social safety net programs in Australia. Clearly, the main problem is the US practice of taxing non-resident citizens on their worldwide income (CBT) – under CBT US citizens living outside the US are treated as tax-resident in TWO countries simultaneously: wherever they actually live and the US. Without CBT, the Saving Clause wouldn’t matter. The US would still have the right to tax Australian source income, but US tax law wouldn’t actually impose any tax on non-US source income of Australian residents. While Australia cannot change US law, it can stand up for its own interests when negotiating treaties and other international agreements with the US. How can countries like Australia protect their tax base from this exceptional US practice of Citizenship Based Taxation? Until the US changes to the international norm of residence based taxation, there will always be a possibility of some Australian source income being taxed by the US and paid by tax-compliant US citizens resident in Australia. There are three ways that countries like Australia can mitigate this problem: - Remove the Saving Clause from the treaty (or exclude citizenship from the Saving Clause). - Add a Citizenship “Tie-breaker” Clause. - Add a “Tax Base Preservation” Clause. 1. Removing the Saving Clause This would allow Australian-resident US citizens to use some clauses of the treaty to exclude Australian source income from US taxation. This would not be perfect, as it would not change US law. Let’s look at a couple of examples of how it would work: Salary / wages: These are dependent personal services. Article 15 of the treaty (currently unavailable to US citizens due to the Saving Clause) states (in part): … salaries, wages and other similar remuneration derived by an individual who is a resident of one of the Contracting States in respect of an employment … shall be taxable only in that State unless the employment is exercised or the services performed in the other Contracting State. 
If the employment is so exercised or the services so performed, such remuneration as is derived from that exercise or performance may be taxed in that other State. Article 15 denies the right of the non-resident country to tax wages that are not earned in that country. So, in the absence of a Saving Clause, wages paid to an Australian-resident US citizen would be taxable only in Australia unless the services were performed in the US. Sale of real property: This is covered by Article 13 of the treaty which states (in part): Income or gains derived by a resident of one of the Contracting States from the alienation or disposition of real property situated in the other Contracting state may be taxed in that other State. In this case, the treaty doesn’t deny the right of the non-resident country to tax gains from the sale of property situated where the taxpayer is resident. So, without the Saving Clause, Article 13 would not prevent the US from taxing US citizens on gains from the sale of real property located in Australia. The above are only two examples of how removing the Saving Clause would affect US citizens residing in Australia. Looking through the treaty, the following types of income might still be taxable in the US even if the Saving Clause did not exist: - Income from Real Property (this may be taxed at source, but no limitation on taxation by other country) - Dividends (arguably limited to 15% for dividends from US companies paid to Australian residents, but no limit on tax on non-US source dividends) - Interest (arguably limited to 10% for interest from US sources paid to Australian residents, but no limit on tax on non-US source interest) - Royalties (arguably limited to 5% for royalties from US sources paid to Australian residents, but no limit on tax on non-US source royalties) - Gain on sale of Real Property (this may be taxed at source, but no limitation on taxation by other country) - US Social Security and US public pensions (these are not taxable in Australia under the treaty). This provision is currently available to US citizens as it is one of the exceptions from the Saving Clause As a general rule, without the Saving Clause business income and earned income would only be taxable by the country where the taxpayer is resident, unless the activity is sourced or connected with the other country. So earned income or active business income earned entirely within Australia would be taxable only by Australia. Clearly, removing the Saving Clause would, on its own, fix only part of the problem of allowing the US to reach into the Australian tax base. 2. Citizenship tie-breaker clause This was suggested by John Richardson in a post on his website: In this post John makes the point that US citizenship has evolved over the years. When Saving Clauses were first introduced in US tax treaties in the early 20th century, there were few, if any, dual citizens. Until the middle of the 20th century almost all countries (including the US) had laws stating that citizens would lose their citizenship as a consequence of proclaiming allegiance to another country by naturalising. Near the end of the post, John says: No country should enter into another treaty with the United States that includes the “Savings Clause”. That said, if the “Savings Clause” is to be part of a treaty, the meaning of “citizens” should be defined by the treaty and must exclude those who are both citizens and residents of Canada (dual citizens)! 
In other words, there should be a “tie-breaker” for citizenship just like there’s an article of the treaty with tie-breaker rules for residence. For the purposes of taxation, individuals should be treated as only citizens of one country. If they are dual citizens, then they should be treated as only citizens of the country where they live. How would this provision affect the US tax liability of US citizens residing in Australia? Those US citizens who are NOT also Australian citizens would be double taxed (under the same rules as apply now). However, once an Australian-resident US citizen takes up Australian citizenship, they would be able to take advantage of the citizenship tie-breaker and be taxed as a non-resident alien (NRA) in the US. This treatment would have to be at the option of the individual, as NRA tax on certain types of US-source income could result in a higher overall tax bill than under the current CBT regime. Essentially, US CBT would not apply to dual citizens because the treaty would provide that Australian citizens resident in Australia could not also be deemed US citizens under US tax law. Including a Citizenship Tie-breaker Clause in the treaty would encourage US citizens residing in Australia to take up Australian citizenship in order to enjoy the protection of the tax treaty from double taxation by the US. 3. Tax Base Preservation Clause Remember that in the first post in this series, we specified the principle that The Australian Source income of Australian Residents should be taxable only by Australia. So, why not add a clause to the treaty that explicitly states that income arising in the country of residence is taxable only in that country? For countries using residence based taxation, this principle is implicit in both their national tax laws and in the way they interpret their international tax treaties. In treaties with the US, however, this principle needs to be explicitly stated. The US can tax its citizens however it wants, as long as it is not taxing the Australian source income of Australian-resident US citizens. In order to have the intended effect, if a Saving Clause remains, the Tax Base Preservation clause must be among those excepted from the Saving Clause. With a Tax Base Preservation clause in the treaty, US citizens might still have to file US returns, but the return would include only non-Australian source income plus a Form 8833 stating the treaty position that Australian-source income is not taxable in the US. Since FBAR is required not by the tax code, but by the Bank Secrecy Act, any exemption for FBAR filing on Australian accounts would need to be explicitly mentioned in the treaty. Where to next? These fixes would help US citizens resident in Australia – and help preserve the Australian tax base. All require re-negotiation of the current tax treaty, and, therefore, will not happen overnight. There are urgent problems with the way the current treaty is being interpreted in practice (especially with regard to superannuation). These can be resolved without renegotiating the current treaty, and should be a priority. If Australia does not address the problem with the Saving Clause, and allows the US to tax the Australian source income of Australian residents, then in any new treaty Australia needs to urgently address the treatment of superannuation by the US. They also need to address the punitive treatment of Australian managed investments under US tax law and US taxation of gains on the principal residence of Australian-resident US citizens. 
Finally, they need to clarify when Australia will assist the US in collecting taxes from Australian residents, especially dual citizens. And, once the treaty is re-negotiated, the solutions above do not help Australian citizens (and former residents) who move to the US. For this reason, the Australian government should ensure that superannuation is properly covered in any new treaty – even if the effects of CBT are eliminated or mitigated. Finally, this site is not written by legal or tax professionals. As I stated in the first post in this series, this post is just general information. If you have a real transaction worth real money, then consult with a real tax professional! While I have been told that there are non-US tax treaties that contain a saving clause, the only other Australian tax treaty with a saving clause is the 1980 treaty with the Philippines (which had citizenship based tax at the time the treaty was negotiated). The latest OECD model tax convention does not include a saving clause. With the new administration in Washington and a Republican Party platform that calls for the repeal of FATCA and CBT, many are hopeful that these US laws will soon be changed. This would clearly be the best solution. There’s even a page on this website for links to US action advocating these changes. However, the US legislative process is long and the outcome uncertain. It would be foolish to do nothing here and simply wait for the US to act. NRAs may pay higher US tax on US-source income such as rental property and US source interest and dividends. There would be a credit for this tax against any Australian tax paid on the same income. Additionally, NRAs with US assets in excess of US$60,000 may be subject to US estate tax while US citizens have an estate tax exemption in excess of ~ US$5 million (but worldwide assets are considered, not just US assets).
<urn:uuid:396c9c3a-ab06-45d0-b978-cb7158fa2b17>
CC-MAIN-2022-33
https://fixthetaxtreaty.org/2017/01/29/explaining-the-saving-clause-iii/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00005.warc.gz
en
0.965409
2,513
2.578125
3
Already on November 3, when the first reports had come in of Rommel’s disaster at the Second Battle of El Alamein (between 23rd October and 11th November, 55% of manpower and 96% of tank strength in Panzer Army Afrika was totally destroyed in Egypt by the Eighth Army, and Montgomery’s forces were reconquering Libya in full pursuit of the retreating Axis remnants), the Fuehrer’s headquarters had received word that an Allied armada had been sighted assembling at Gibraltar. No one at OKW could make out what it might be up to. Hitler was inclined to think it was merely another heavily guarded convoy for Malta. This is interesting because more than a fortnight earlier, on October 15, the OKW staff chiefs had discussed several reports about an imminent “Anglo-Saxon landing” in West Africa. The intelligence apparently came from Rome, for Italian Foreign Minister Count Ciano a week before, on October 9, noted in his diary after a talk with the chief of the military secret service that “the Anglo–Saxons are preparing to land in force in North Africa.” The news depressed Ciano; he foresaw—correctly, as it turned out—that this would lead inevitably to a direct Allied assault on Italy. Hitler, preoccupied as he was with the failure of the Russians to cease their infernal resistance, did not take this first intelligence very seriously. At a meeting of OKW on October 15, Jodl suggested that Vichy France be permitted to send reinforcements to North Africa so that the French could repel any Anglo–American landings. The Fuehrer, according to the OKW Diary, turned the suggestion down because it might ruffle the Italians, who were jealous of any move to strengthen France. At the Supreme Commander’s headquarters the matter appears to have been forgotten until November 3. But on that day, although German agents on the Spanish side of Gibraltar had reported seeing a great Anglo–American fleet gathering there, Hitler was too busy rallying Rommel at El Alamein (Panzer Army Afrika was being crushed at this point with Monty’s breakthrough at El Alamein) to bother with what appeared to him to be merely another convoy for Malta. On November 5, OKW was informed that one British naval force had sailed out of Gibraltar headed east. But it was not until the morning of November 7, twelve hours before American and British troops began landing in North Africa, that Hitler gave the latest intelligence from Gibraltar some thought. The forenoon reports received at his headquarters in East Prussia were that British naval forces in Gibraltar and a vast fleet of transports and warships from the Atlantic had joined up and were steaming east into the Mediterranean. There was a long discussion among the staff officers and the Fuehrer. What did it all mean? What was the objective of such a large naval force? Hitler was now inclined to believe, he said, that the Western Allies might be attempting a major landing with some four or five divisions at Tripoli or Benghazi in order to catch Rommel in the rear. Admiral Krancke, the naval liaison officer at OKW, declared that there could not be more than two enemy divisions at the most. Even so! Something had to be done. Hitler asked that the Luftwaffe in the Mediterranean be immediately reinforced but was told this was impossible “for the moment.” Judging by the OKW Diary all that Hitler did that morning was to notify Rundstedt, Commander in Chief in the West, to be ready to carry out “Anton.” This was the code word for the occupation of the rest of France. 
Whereupon the Supreme Commander, heedless of this ominous news or of the plight of Rommel, who would be trapped if the Anglo–Americans landed behind him, or of the latest intelligence warning of an imminent Russian counteroffensive on the Don in the rear of the Sixth Army at Stalingrad, entrained after lunch on November 7 for Munich, where on the next evening he was scheduled to deliver his annual speech to his old party cronies gathered to celebrate the anniversary of the Beer Hall Putsch! The politician in him, as Halder noted, had got the upper hand of the soldier at a critical moment in the war. Supreme Headquarters in East Prussia was left in charge of a colonel, one Freiherr Treusch von Buttlar-Brandenfels. Generals Keitel and Jodl, the chief officers of OKW, went along to participate in the beerhouse festivities. There is something weird and batty about such goings on that take the Supreme warlord, who by now was insisting on directing the war on far-flung fronts down to the divisional or regimental or even battalion level, thousands of miles from the battlefields on an unimportant political errand at a moment when the house is beginning to fall in. A change in the man, a corrosion, a deterioration has set in, as it already had with Goering who, though his once all-powerful Luftwaffe had been steadily declining, was becoming more and more attached to his jewels and his toy trains, with little time to spare for the ugly realities of a prolonged and increasingly bitter war. Anglo–American troops under General Eisenhower hit the beaches of Morocco and Algeria at 1:30 A.M. on November 8, 1942, and at 5:30 German Foreign Minister Von Ribbentrop (a dunce really) was on the phone from Munich to Ciano in Rome to give him the news. “He was rather nervous [Ciano wrote in his diary] and wanted to know what we intended to do. I must confess that, having been caught unawares, I was too sleepy to give a very satisfactory answer.” The Italian Foreign Minister learned from the German Embassy that the officials there were “literally terrified by the blow.” Hitler’s special train from East Prussia did not arrive in Munich until 3:40 that afternoon and the first reports he got about the Allied landings in Northwest Africa were optimistic. Everywhere the French, he was told, were putting up stubborn resistance, and at Algiers and Oran they had repulsed the landing attempts. In Algeria, Germany’s friend, Admiral Darlan, was organizing the defense with the approval of the Vichy regime. Hitler’s first reactions were confused. He ordered the garrison at Crete, which was quite outside the new theater of war, immediately strengthened, explaining that such a step was as important as sending reinforcements to Africa. He instructed the Gestapo to bring Generals Weygand and Giraud (who had already escaped to Gibraltar) to Vichy and to keep them under surveillance. He asked Field Marshal von Rundstedt to set in action Anton but not to cross the line of demarcation in France until he had further orders. And he requested Ciano and Pierre Laval, who was now Premier of Vichy France, to meet him in Munich the next day. For about twenty-four hours Hitler toyed with the idea of trying to make an alliance with France in order to bring her into the war against Britain and America and, at the moment, to strengthen the resolve of the Pétain government to oppose the Allied landings in North Africa. 
He probably was encouraged in this by the action of Pétain in breaking off diplomatic relations with the United States on the morning of Sunday, November 8, and by the aged French Marshal’s statement to the U.S. chargé d’affaires that his forces would resist the Anglo–American invasion. The OKW Diary for that Sunday emphasizes that Hitler was preoccupied with working out “a far-reaching collaboration with the French.” That evening the German representative in Vichy, Krug von Nidda, submitted a proposal to Pétain for a close alliance between Germany and France. By the next day, following his speech to the party veterans, in which he proclaimed that Stalingrad was “firmly in German hands,” the Fuehrer had changed his mind. He told Ciano he had no illusions about the French desire to fight and that he had decided on “the total occupation of France, a landing in Corsica, a bridgehead in Tunisia.” This decision, though not the timing, was communicated to Laval when he arrived in Munich by car on November 10. This traitorous Frenchman promptly promised to urge Pétain to accede to the Fuehrer’s wishes but suggested that the Germans go ahead with their plans without waiting for the senile old Marshal’s approval, which Hitler fully intended to do. Count Ciano has left a description of the Vichy Premier, who was executed for treason after the war: “Laval, with his white tie and middle-class French peasant attire, is very much out of place in the great salon among so many uniforms. He tries to speak in a familiar tone about his trip and his long sleep in the car, but his words go unheeded. Hitler treats him with frigid courtesy… The poor man could not even imagine the fait accompli that the Germans were to place before him. Not a word was said to Laval about the impending action—that the orders to occupy France were being given while he was smoking his cigarette and conversing with various people in the next room. Von Ribbentrop told me that Laval would be informed only the next morning at 8 o’clock that on account of information received during the night Hitler had been obliged to proceed to the total occupation of the country.” The orders for the seizure of unoccupied France, in clear violation of the armistice agreement, were given by Hitler at 8:30 P.M. on November 10 and carried out the next morning without any other incident than a futile protest by Pétain. The Italians occupied Corsica, and German planes began flying in troops to seize French-held Tunisia before Eisenhower’s forces could get there. There was one further—and typical—piece of Hitlerian deceit. On November 13 the Fuehrer assured Pétain that neither the Germans nor the Italians would occupy the naval base at Toulon, where the French fleet had been tied up since the armistice. On November 25 the OKW Diary recorded that Hitler had decided to carry out “Lila” as soon as possible.* This was the code word for the occupation of Toulon and the capture of the French fleet. On the morning of the twenty-seventh German troops attacked the naval port, but French sailors held them up long enough to allow the crews, on the orders of Admiral de Laborde, to scuttle the ships. The French fleet was thus lost to the Axis, which badly needed its warships in the Mediterranean, but it was denied also to the Allies, to whom it would have been a most valuable addition.
The Rise and Fall of the Third Reich - William Shirer
<urn:uuid:18a72c99-28a7-4aab-b44c-709924ca8766>
CC-MAIN-2022-33
https://community.timeghost.tv/t/axis-reaction-to-operation-torch-08-13-november-1942/7972
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572515.15/warc/CC-MAIN-20220816181215-20220816211215-00603.warc.gz
en
0.983511
2,293
2.6875
3